Feb 19 18:29:39 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 19 18:29:39 crc restorecon[4749]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 18:29:39 crc restorecon[4749]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 18:29:39 crc restorecon[4749]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:39 crc 
restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 18:29:39 crc restorecon[4749]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc 
restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 
18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 18:29:39 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 
crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 18:29:40 crc restorecon[4749]:
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 
crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc 
restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc 
restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc 
restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc 
restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc 
restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 18:29:40 crc restorecon[4749]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 18:29:40 crc restorecon[4749]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 19 18:29:41 crc kubenswrapper[4813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 18:29:41 crc kubenswrapper[4813]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 19 18:29:41 crc kubenswrapper[4813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 18:29:41 crc kubenswrapper[4813]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 19 18:29:41 crc kubenswrapper[4813]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 19 18:29:41 crc kubenswrapper[4813]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.198556 4813 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207414 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207447 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207458 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207467 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207476 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207485 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207493 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207515 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207523 4813 feature_gate.go:330] unrecognized feature 
gate: VSphereDriverConfiguration Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207531 4813 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207539 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207547 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207554 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207562 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207570 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207578 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207591 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207602 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207615 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207626 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207637 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207648 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207657 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207665 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207673 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207681 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207688 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207696 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207704 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207711 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207718 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207726 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207734 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207741 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207749 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207758 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207766 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207774 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207781 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207789 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207797 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207805 4813 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207813 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207821 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207828 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207836 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207844 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207854 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207862 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207870 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207877 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207888 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207896 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207904 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207915 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207927 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207936 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207944 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.207978 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.208024 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.208034 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.208044 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.208054 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.208063 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.208071 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.208079 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.208087 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.208100 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.208111 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.208119 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.208126 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209036 4813 flags.go:64] FLAG: --address="0.0.0.0"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209059 4813 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209078 4813 flags.go:64] FLAG: --anonymous-auth="true"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209090 4813 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209101 4813 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209110 4813 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209122 4813 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209133 4813 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209143 4813 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209152 4813 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209162 4813 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209171 4813 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209181 4813 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209190 4813 flags.go:64] FLAG: --cgroup-root=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209199 4813 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209209 4813 flags.go:64] FLAG: --client-ca-file=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209218 4813 flags.go:64] FLAG: --cloud-config=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209228 4813 flags.go:64] FLAG: --cloud-provider=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209236 4813 flags.go:64] FLAG: --cluster-dns="[]"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209247 4813 flags.go:64] FLAG: --cluster-domain=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209256 4813 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209265 4813 flags.go:64] FLAG: --config-dir=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209273 4813 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209284 4813 flags.go:64] FLAG: --container-log-max-files="5"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209295 4813 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209304 4813 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209313 4813 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209323 4813 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209332 4813 flags.go:64] FLAG: --contention-profiling="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209342 4813 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209352 4813 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209361 4813 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209372 4813 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209384 4813 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209394 4813 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209403 4813 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209412 4813 flags.go:64] FLAG: --enable-load-reader="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209421 4813 flags.go:64] FLAG: --enable-server="true"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209431 4813 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209443 4813 flags.go:64] FLAG: --event-burst="100"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209453 4813 flags.go:64] FLAG: --event-qps="50"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209492 4813 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209502 4813 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209512 4813 flags.go:64] FLAG: --eviction-hard=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209524 4813 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209535 4813 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209547 4813 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209558 4813 flags.go:64] FLAG: --eviction-soft=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209572 4813 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209584 4813 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209595 4813 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209607 4813 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209618 4813 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209629 4813 flags.go:64] FLAG: --fail-swap-on="true"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209641 4813 flags.go:64] FLAG: --feature-gates=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209656 4813 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209668 4813 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209681 4813 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209694 4813 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209706 4813 flags.go:64] FLAG: --healthz-port="10248"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209719 4813 flags.go:64] FLAG: --help="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209731 4813 flags.go:64] FLAG: --hostname-override=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209743 4813 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209756 4813 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209768 4813 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209780 4813 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209791 4813 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209802 4813 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209813 4813 flags.go:64] FLAG: --image-service-endpoint=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209822 4813 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209832 4813 flags.go:64] FLAG: --kube-api-burst="100"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209841 4813 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209851 4813 flags.go:64] FLAG: --kube-api-qps="50"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209860 4813 flags.go:64] FLAG: --kube-reserved=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209869 4813 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209878 4813 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209888 4813 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209897 4813 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209906 4813 flags.go:64] FLAG: --lock-file=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209915 4813 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209928 4813 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209938 4813 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.209991 4813 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210005 4813 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210016 4813 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210029 4813 flags.go:64] FLAG: --logging-format="text"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210042 4813 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210056 4813 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210068 4813 flags.go:64] FLAG: --manifest-url=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210080 4813 flags.go:64] FLAG: --manifest-url-header=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210096 4813 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210109 4813 flags.go:64] FLAG: --max-open-files="1000000"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210125 4813 flags.go:64] FLAG: --max-pods="110"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210137 4813 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210149 4813 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210161 4813 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210175 4813 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210188 4813 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210200 4813 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210212 4813 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210242 4813 flags.go:64] FLAG: --node-status-max-images="50"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210255 4813 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210266 4813 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210275 4813 flags.go:64] FLAG: --pod-cidr=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210286 4813 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210300 4813 flags.go:64] FLAG: --pod-manifest-path=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210309 4813 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210319 4813 flags.go:64] FLAG: --pods-per-core="0"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210327 4813 flags.go:64] FLAG: --port="10250"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210336 4813 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210345 4813 flags.go:64] FLAG: --provider-id=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210354 4813 flags.go:64] FLAG: --qos-reserved=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210366 4813 flags.go:64] FLAG: --read-only-port="10255"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210376 4813 flags.go:64] FLAG: --register-node="true"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210385 4813 flags.go:64] FLAG: --register-schedulable="true"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210395 4813 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210410 4813 flags.go:64] FLAG: --registry-burst="10"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210419 4813 flags.go:64] FLAG: --registry-qps="5"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210428 4813 flags.go:64] FLAG: --reserved-cpus=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210436 4813 flags.go:64] FLAG: --reserved-memory=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210448 4813 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210457 4813 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210466 4813 flags.go:64] FLAG: --rotate-certificates="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210475 4813 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210484 4813 flags.go:64] FLAG: --runonce="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210493 4813 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210502 4813 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210512 4813 flags.go:64] FLAG: --seccomp-default="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210521 4813 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210530 4813 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210539 4813 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210548 4813 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210560 4813 flags.go:64] FLAG: --storage-driver-password="root"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210571 4813 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210583 4813 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210595 4813 flags.go:64] FLAG: --storage-driver-user="root"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210606 4813 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210619 4813 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210632 4813 flags.go:64] FLAG: --system-cgroups=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210644 4813 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210675 4813 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210687 4813 flags.go:64] FLAG: --tls-cert-file=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210699 4813 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210714 4813 flags.go:64] FLAG: --tls-min-version=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210728 4813 flags.go:64] FLAG: --tls-private-key-file=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210741 4813 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210753 4813 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210766 4813 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210778 4813 flags.go:64] FLAG: --v="2"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210795 4813 flags.go:64] FLAG: --version="false"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210810 4813 flags.go:64] FLAG: --vmodule=""
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210825 4813 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.210838 4813 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211101 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211113 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211122 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211131 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211141 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211149 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211158 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211166 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211174 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211182 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211193 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211205 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211216 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211226 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211236 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211248 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211259 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211269 4813 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211280 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211290 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211305 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211319 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211332 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211348 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211359 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211370 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211381 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211392 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211402 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211413 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211423 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211433 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211444 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211454 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211464 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211472 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211480 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211488 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211496 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211503 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211511 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211519 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211528 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211536 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211544 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211552 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211560 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211569 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211577 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211590 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211602 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211614 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211625 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211638 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211648 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211657 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211665 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211673 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211681 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211691 4813 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211699 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211706 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211714 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211722 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211730 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211740 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211753 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211766 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211778 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211790 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.211803 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.211820 4813 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.225460 4813 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.225582 4813 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225749 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225785 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225799 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225816 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225828 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225838 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225849 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225859 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225869 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225880 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225892 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225905 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225916 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225927 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225937 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225947 4813
feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.225993 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226005 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226015 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226051 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226064 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226074 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226084 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226093 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226103 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226113 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226123 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226133 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226142 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226150 4813 
feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226159 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226167 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226177 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226188 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226203 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226213 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226224 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226234 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226245 4813 feature_gate.go:330] unrecognized feature gate: Example Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226255 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226266 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226278 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226290 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226300 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 18:29:41 crc 
kubenswrapper[4813]: W0219 18:29:41.226311 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226324 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226335 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226346 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226356 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226366 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226376 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226387 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226398 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226411 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226425 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226436 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226447 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226458 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226467 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226478 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226488 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226499 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226510 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226520 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226531 4813 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226544 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226554 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226565 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226575 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226585 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226598 4813 feature_gate.go:330] unrecognized feature gate: 
InsightsRuntimeExtractor Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.226615 4813 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226938 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.226998 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227012 4813 feature_gate.go:330] unrecognized feature gate: Example Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227021 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227030 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227038 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227047 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227055 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227063 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227074 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227087 4813 
feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227102 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227110 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227118 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227126 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227135 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227145 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227155 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227163 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227171 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227178 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227186 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227194 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227202 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 
18:29:41.227210 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227218 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227227 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227237 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227248 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227258 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227269 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227281 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227291 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227301 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227314 4813 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227322 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227331 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227341 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227350 4813 feature_gate.go:330] 
unrecognized feature gate: NutanixMultiSubnets Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227360 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227371 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227384 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227396 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227406 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227416 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227424 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227432 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227440 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227448 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227457 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227466 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227476 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227488 4813 feature_gate.go:330] unrecognized feature gate: 
VolumeGroupSnapshot Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227499 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227508 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227517 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227527 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227539 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227548 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227557 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227565 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227574 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227583 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227591 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227598 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227606 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227613 4813 feature_gate.go:330] unrecognized feature gate: 
MachineAPIMigration Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227621 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227629 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227638 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.227648 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.227664 4813 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.228930 4813 server.go:940] "Client rotation is on, will bootstrap in background" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.235288 4813 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.235431 4813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.237067 4813 server.go:997] "Starting client certificate rotation" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.237147 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.237335 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-16 17:55:31.53214079 +0000 UTC Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.237450 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.264428 4813 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 18:29:41 crc kubenswrapper[4813]: E0219 18:29:41.267937 4813 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.271600 4813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.287686 4813 log.go:25] "Validated CRI v1 runtime API" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.326400 4813 log.go:25] "Validated CRI v1 image API" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.328775 4813 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.333735 4813 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-19-18-24-44-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.333811 4813 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.364971 4813 manager.go:217] Machine: {Timestamp:2026-02-19 18:29:41.361277547 +0000 UTC m=+0.586718168 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654112256 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3f17b88b-2b9a-42bd-94af-777e7f325932 BootID:72639c5f-de3a-44bf-8f3b-4707b19d9f7d Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827056128 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 
Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108168 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:81:18:e7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:81:18:e7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:fa:48:ba Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b2:37:d6 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:29:69:0e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:4c:c4:cf Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:70:59:a1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:1a:cf:21:c2:c7:8e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:52:34:f3:82:83:cc Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654112256 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] 
Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.365619 
4813 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.365791 4813 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.367020 4813 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.367242 4813 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.367290 4813 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"image
fs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.367612 4813 topology_manager.go:138] "Creating topology manager with none policy" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.367625 4813 container_manager_linux.go:303] "Creating device plugin manager" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.368195 4813 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.368235 4813 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.369166 4813 state_mem.go:36] "Initialized new in-memory state store" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.369276 4813 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.374464 4813 kubelet.go:418] "Attempting to sync node with API server" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.374493 4813 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.374514 4813 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.374534 4813 kubelet.go:324] "Adding apiserver pod source" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.374552 4813 
apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.382815 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.382844 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:41 crc kubenswrapper[4813]: E0219 18:29:41.383071 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Feb 19 18:29:41 crc kubenswrapper[4813]: E0219 18:29:41.383096 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.384328 4813 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.385370 4813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.388346 4813 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.390206 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.390257 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.390274 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.390289 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.390312 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.390326 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.390339 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.390362 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.390378 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.390393 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.390412 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.390426 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.391170 4813 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.391996 4813 server.go:1280] "Started kubelet" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.392105 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.393152 4813 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.393149 4813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.394367 4813 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 19 18:29:41 crc systemd[1]: Started Kubernetes Kubelet. Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.395619 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.395670 4813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.396052 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:19:52.576492831 +0000 UTC Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.396396 4813 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.396429 4813 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.396451 4813 server.go:460] "Adding debug handlers to kubelet server" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.396624 4813 
desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 19 18:29:41 crc kubenswrapper[4813]: E0219 18:29:41.397579 4813 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.398404 4813 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.398618 4813 factory.go:55] Registering systemd factory Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.398765 4813 factory.go:221] Registration of the systemd container factory successfully Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.399208 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:41 crc kubenswrapper[4813]: E0219 18:29:41.399358 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.399919 4813 factory.go:153] Registering CRI-O factory Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.400214 4813 factory.go:221] Registration of the crio container factory successfully Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.400483 4813 factory.go:103] Registering Raw factory Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.400744 4813 manager.go:1196] Started watching for new 
ooms in manager Feb 19 18:29:41 crc kubenswrapper[4813]: E0219 18:29:41.401037 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="200ms" Feb 19 18:29:41 crc kubenswrapper[4813]: E0219 18:29:41.401338 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1895b94fee4f927d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 18:29:41.391905405 +0000 UTC m=+0.617345986,LastTimestamp:2026-02-19 18:29:41.391905405 +0000 UTC m=+0.617345986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.403855 4813 manager.go:319] Starting recovery of all containers Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419261 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419352 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 19 18:29:41 crc 
kubenswrapper[4813]: I0219 18:29:41.419387 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419415 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419438 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419457 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419477 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419496 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419520 4813 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419539 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419558 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419581 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419607 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419634 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419656 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419681 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419705 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419727 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419752 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419834 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419852 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419871 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419889 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419918 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419937 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.419994 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420024 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420118 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420140 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420157 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420176 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420196 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420215 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420234 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420253 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420273 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420293 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420313 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420397 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420417 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420438 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420457 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420478 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420500 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420521 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" 
seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420541 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420561 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420586 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420606 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420630 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420652 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420674 4813 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420711 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420741 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420773 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420804 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420834 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420864 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420892 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420922 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.420981 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421013 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421039 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421071 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421100 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421127 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421153 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421179 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421202 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421223 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421242 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421263 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421284 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421305 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421326 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421350 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" 
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421376 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421499 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421529 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421562 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421591 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421619 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421647 4813 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421675 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421704 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421731 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421759 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421787 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421812 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421839 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421866 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421894 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421923 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.421982 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422049 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422080 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422106 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422134 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422160 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422188 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422217 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422242 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422268 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422295 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422331 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422360 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422390 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 19 18:29:41 crc 
kubenswrapper[4813]: I0219 18:29:41.422422 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422454 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422482 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422511 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422543 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422572 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422600 4813 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422625 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422655 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422683 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422710 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422736 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.422764 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424432 4813 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424491 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424525 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424557 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424587 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424615 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424646 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424707 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424739 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424767 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424795 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424823 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424853 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424881 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424912 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.424939 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425008 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425039 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425069 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425097 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425124 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425151 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425172 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425202 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425231 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425260 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425286 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425315 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425343 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425372 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425399 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425425 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425452 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425480 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425508 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425536 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425565 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425590 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425620 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425654 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425682 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425712 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425740 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425768 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425794 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425821 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425848 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425878 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425908 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425938 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.425998 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426063 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426093 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426121 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426147 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426180 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426206 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426235 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426264 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426294 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426324 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426352 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426380 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426407 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426434 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426461 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426489 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426520 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426546 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426575 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426605 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426632 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426661 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426690 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426720 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426750 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426778 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426806 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426834 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426862 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426889 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426915 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.426943 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.427004 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.427033 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.427061 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.427100 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.427130 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.427154 4813 reconstruct.go:97] "Volume reconstruction finished" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.427172 4813 reconciler.go:26] "Reconciler: start to sync state" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.441293 4813 manager.go:324] Recovery completed Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.454854 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.456600 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.456652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.456670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.457523 4813 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.457551 4813 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.457579 4813 state_mem.go:36] "Initialized new in-memory state store" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.462195 4813 policy_none.go:49] "None policy: Start" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.463400 4813 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.463430 4813 state_mem.go:35] "Initializing new in-memory state store" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.466865 4813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.470096 4813 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.470169 4813 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.470219 4813 kubelet.go:2335] "Starting kubelet main sync loop" Feb 19 18:29:41 crc kubenswrapper[4813]: E0219 18:29:41.470291 4813 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.473074 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:41 crc kubenswrapper[4813]: E0219 18:29:41.473205 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Feb 19 18:29:41 crc kubenswrapper[4813]: E0219 18:29:41.497925 4813 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.524110 4813 manager.go:334] "Starting Device Plugin manager" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.524415 4813 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.524436 4813 server.go:79] "Starting device plugin registration server" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.524972 4813 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 19 18:29:41 crc 
kubenswrapper[4813]: I0219 18:29:41.524990 4813 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.525180 4813 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.525323 4813 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.525339 4813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 19 18:29:41 crc kubenswrapper[4813]: E0219 18:29:41.534885 4813 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.571367 4813 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.571597 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.573556 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.573609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.573626 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.573824 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.574291 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.574369 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.574710 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.574759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.574777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.574932 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.575133 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.575519 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.579508 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.579540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.579559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.582095 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.582156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.582176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.582356 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.582616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.582819 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.582632 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.583111 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.583296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.584240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.584290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.584306 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.584628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.585085 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.585290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.585704 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.586843 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.587102 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.593046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.593091 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.593132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.593146 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.593097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.593279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.594570 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.594633 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.595925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.596009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.596031 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:41 crc kubenswrapper[4813]: E0219 18:29:41.602608 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="400ms" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.625176 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.626640 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.626707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.626732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.626771 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 18:29:41 crc kubenswrapper[4813]: E0219 18:29:41.627492 4813 kubelet_node_status.go:99] "Unable to register 
node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.630032 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.630074 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.630110 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.630136 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.630283 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.630357 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.630435 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.630496 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.630533 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.630613 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.630648 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.630686 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.630718 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.630751 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.630838 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc 
kubenswrapper[4813]: I0219 18:29:41.731638 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.731712 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.731746 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.731783 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.731818 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.731855 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.731890 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.731988 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732043 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732052 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732094 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732054 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.731922 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732234 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732265 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732291 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732103 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732321 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.731941 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732234 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732367 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732394 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732440 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732488 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732523 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732570 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732604 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732615 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732644 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.732818 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.828610 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.830870 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.830946 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.830997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.831041 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 18:29:41 crc kubenswrapper[4813]: E0219 18:29:41.831749 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.69:6443: connect: connection refused" node="crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.921403 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.927913 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.939617 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.957835 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: I0219 18:29:41.964274 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.979342 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-aafc4c3e02f8e6f065532b5360a8aebd676264c0ed19537e96664fcdf6f4a205 WatchSource:0}: Error finding container aafc4c3e02f8e6f065532b5360a8aebd676264c0ed19537e96664fcdf6f4a205: Status 404 returned error can't find the container with id aafc4c3e02f8e6f065532b5360a8aebd676264c0ed19537e96664fcdf6f4a205 Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.980173 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-349db7c131216fb33b477614248051bb922707ab92c841fc07f0b6bdaae0362c WatchSource:0}: Error finding container 349db7c131216fb33b477614248051bb922707ab92c841fc07f0b6bdaae0362c: Status 404 returned error 
can't find the container with id 349db7c131216fb33b477614248051bb922707ab92c841fc07f0b6bdaae0362c Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.994843 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-283ef2b6a83bab7eb87dbe9b52b6029dc3c9b9033b3f6763f6d97594124e3f2a WatchSource:0}: Error finding container 283ef2b6a83bab7eb87dbe9b52b6029dc3c9b9033b3f6763f6d97594124e3f2a: Status 404 returned error can't find the container with id 283ef2b6a83bab7eb87dbe9b52b6029dc3c9b9033b3f6763f6d97594124e3f2a Feb 19 18:29:41 crc kubenswrapper[4813]: W0219 18:29:41.999259 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-fbef30c767781512f2421fa5f686ac1bde33020a8dc5d44bcda2ec35b99982cf WatchSource:0}: Error finding container fbef30c767781512f2421fa5f686ac1bde33020a8dc5d44bcda2ec35b99982cf: Status 404 returned error can't find the container with id fbef30c767781512f2421fa5f686ac1bde33020a8dc5d44bcda2ec35b99982cf Feb 19 18:29:42 crc kubenswrapper[4813]: E0219 18:29:42.007824 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="800ms" Feb 19 18:29:42 crc kubenswrapper[4813]: W0219 18:29:42.219708 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:42 crc kubenswrapper[4813]: E0219 18:29:42.219899 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Feb 19 18:29:42 crc kubenswrapper[4813]: I0219 18:29:42.232425 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:42 crc kubenswrapper[4813]: I0219 18:29:42.235367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:42 crc kubenswrapper[4813]: I0219 18:29:42.235437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:42 crc kubenswrapper[4813]: I0219 18:29:42.235456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:42 crc kubenswrapper[4813]: I0219 18:29:42.235501 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 18:29:42 crc kubenswrapper[4813]: E0219 18:29:42.236281 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Feb 19 18:29:42 crc kubenswrapper[4813]: W0219 18:29:42.283162 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:42 crc kubenswrapper[4813]: E0219 18:29:42.283275 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: 
connection refused" logger="UnhandledError" Feb 19 18:29:42 crc kubenswrapper[4813]: W0219 18:29:42.354475 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:42 crc kubenswrapper[4813]: E0219 18:29:42.354580 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Feb 19 18:29:42 crc kubenswrapper[4813]: I0219 18:29:42.392911 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:42 crc kubenswrapper[4813]: I0219 18:29:42.397017 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:27:42.408455248 +0000 UTC Feb 19 18:29:42 crc kubenswrapper[4813]: I0219 18:29:42.481315 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"283ef2b6a83bab7eb87dbe9b52b6029dc3c9b9033b3f6763f6d97594124e3f2a"} Feb 19 18:29:42 crc kubenswrapper[4813]: I0219 18:29:42.483042 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5ab903db38e9b45e2f99418360d7bb3a8c0af9b15877d53727419ad74b72c2f7"} Feb 19 18:29:42 crc 
kubenswrapper[4813]: I0219 18:29:42.485406 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"349db7c131216fb33b477614248051bb922707ab92c841fc07f0b6bdaae0362c"} Feb 19 18:29:42 crc kubenswrapper[4813]: I0219 18:29:42.487271 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aafc4c3e02f8e6f065532b5360a8aebd676264c0ed19537e96664fcdf6f4a205"} Feb 19 18:29:42 crc kubenswrapper[4813]: I0219 18:29:42.489712 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fbef30c767781512f2421fa5f686ac1bde33020a8dc5d44bcda2ec35b99982cf"} Feb 19 18:29:42 crc kubenswrapper[4813]: E0219 18:29:42.809323 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="1.6s" Feb 19 18:29:42 crc kubenswrapper[4813]: W0219 18:29:42.860578 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:42 crc kubenswrapper[4813]: E0219 18:29:42.860710 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" 
logger="UnhandledError" Feb 19 18:29:42 crc kubenswrapper[4813]: E0219 18:29:42.963739 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1895b94fee4f927d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 18:29:41.391905405 +0000 UTC m=+0.617345986,LastTimestamp:2026-02-19 18:29:41.391905405 +0000 UTC m=+0.617345986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.037019 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.039453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.039536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.039555 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.039599 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 18:29:43 crc kubenswrapper[4813]: E0219 18:29:43.040464 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Feb 19 18:29:43 crc 
kubenswrapper[4813]: I0219 18:29:43.393360 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.397441 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:46:04.814681864 +0000 UTC Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.434629 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 18:29:43 crc kubenswrapper[4813]: E0219 18:29:43.435641 4813 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.495292 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864" exitCode=0 Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.495382 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864"} Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.495466 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.496812 4813 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.496868 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.496887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.499286 4813 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2" exitCode=0 Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.499344 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2"} Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.499504 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.499674 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.500870 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.500920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.500942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.500919 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:43 crc kubenswrapper[4813]: 
I0219 18:29:43.501028 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.501051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.501577 4813 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310" exitCode=0 Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.502066 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310"} Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.502229 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.504003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.504046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.504080 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.507531 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.507626 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3"} Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.507314 4813 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3" exitCode=0 Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.508933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.509024 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.509052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.511841 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1"} Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.511910 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9"} Feb 19 18:29:43 crc kubenswrapper[4813]: I0219 18:29:43.511932 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe"} Feb 19 18:29:44 crc kubenswrapper[4813]: W0219 18:29:44.177123 4813 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:44 crc kubenswrapper[4813]: E0219 18:29:44.177240 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Feb 19 18:29:44 crc kubenswrapper[4813]: W0219 18:29:44.245058 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:44 crc kubenswrapper[4813]: E0219 18:29:44.245129 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.393306 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.397708 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:31:51.052787437 +0000 UTC Feb 19 18:29:44 crc kubenswrapper[4813]: E0219 18:29:44.410530 4813 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="3.2s" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.516400 4813 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508" exitCode=0 Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.516483 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508"} Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.516647 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.517692 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.517730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.517743 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.524880 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7ca66b5227f874cb972da780e7130516ccefdef0f8a8c6b5258886aba6a54ae8"} Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.525006 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:44 
crc kubenswrapper[4813]: I0219 18:29:44.527652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.527741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.527758 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.536346 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f4760dc159a8f00a8d74f75e4c3b1f50b3548501b3a7623e96aaa16c1013611c"} Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.536415 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b6995816d5f4dd389140af1ce1d1d5e45df8cf15a28c51cd5e13ee52c094377f"} Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.536430 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5d2b8202d5d41248673716db8cad0618006cc52e967751eb392b99663c4aa90c"} Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.536463 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.539798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.539840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 
18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.539854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.543291 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6"} Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.543428 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.544641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.544675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.544692 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.552512 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f"} Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.552561 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a"} Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.552578 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320"} Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.552592 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b"} Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.641510 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.642902 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.642968 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.642987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:44 crc kubenswrapper[4813]: I0219 18:29:44.643019 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 18:29:44 crc kubenswrapper[4813]: E0219 18:29:44.643551 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Feb 19 18:29:44 crc kubenswrapper[4813]: W0219 18:29:44.924695 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:44 crc kubenswrapper[4813]: E0219 18:29:44.924828 4813 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Feb 19 18:29:44 crc kubenswrapper[4813]: W0219 18:29:44.944310 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Feb 19 18:29:44 crc kubenswrapper[4813]: E0219 18:29:44.944464 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.398592 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:20:37.445845978 +0000 UTC Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.561113 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b"} Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.561331 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.562572 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.562637 
4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.562658 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.564991 4813 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a" exitCode=0 Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.565105 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.565152 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.565179 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.565230 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.565566 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a"} Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.565806 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.567426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.567473 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:45 crc 
kubenswrapper[4813]: I0219 18:29:45.567473 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.567523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.567490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.567544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.567865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.568051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.568157 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.568292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.568313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.568601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.670598 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 18:29:45 crc kubenswrapper[4813]: I0219 18:29:45.963432 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.082053 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.399339 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 23:32:07.080555055 +0000 UTC Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.573397 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.573444 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.573426 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e"} Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.573615 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c"} Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.573637 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268"} Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.573639 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.574932 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.574995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.575008 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.575020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.575057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.575073 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.575373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.575443 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:46 crc kubenswrapper[4813]: I0219 18:29:46.575466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.399795 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 18:07:02.151864245 +0000 UTC Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.474437 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.583005 4813 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5"} Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.583081 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344"} Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.583127 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.583139 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.584663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.584730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.584663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.584805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.584830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.584758 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.807461 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.807728 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.809641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.809697 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.809717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.844072 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.845373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.845427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.845447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.845482 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 18:29:47 crc kubenswrapper[4813]: I0219 18:29:47.974716 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:48 crc kubenswrapper[4813]: I0219 18:29:48.400868 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:16:48.464463245 +0000 UTC 
Feb 19 18:29:48 crc kubenswrapper[4813]: I0219 18:29:48.586508 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:48 crc kubenswrapper[4813]: I0219 18:29:48.586536 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:48 crc kubenswrapper[4813]: I0219 18:29:48.588203 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:48 crc kubenswrapper[4813]: I0219 18:29:48.588265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:48 crc kubenswrapper[4813]: I0219 18:29:48.588295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:48 crc kubenswrapper[4813]: I0219 18:29:48.588355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:48 crc kubenswrapper[4813]: I0219 18:29:48.588410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:48 crc kubenswrapper[4813]: I0219 18:29:48.588428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:49 crc kubenswrapper[4813]: I0219 18:29:49.083089 4813 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 18:29:49 crc kubenswrapper[4813]: I0219 18:29:49.083247 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 18:29:49 crc kubenswrapper[4813]: I0219 18:29:49.402042 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:29:43.752735409 +0000 UTC Feb 19 18:29:49 crc kubenswrapper[4813]: I0219 18:29:49.447336 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:49 crc kubenswrapper[4813]: I0219 18:29:49.589313 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:49 crc kubenswrapper[4813]: I0219 18:29:49.591097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:49 crc kubenswrapper[4813]: I0219 18:29:49.591148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:49 crc kubenswrapper[4813]: I0219 18:29:49.591160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:49 crc kubenswrapper[4813]: I0219 18:29:49.861544 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:29:49 crc kubenswrapper[4813]: I0219 18:29:49.861800 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:49 crc kubenswrapper[4813]: I0219 18:29:49.863405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:49 crc kubenswrapper[4813]: I0219 18:29:49.863456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 18:29:49 crc kubenswrapper[4813]: I0219 18:29:49.863473 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:49 crc kubenswrapper[4813]: I0219 18:29:49.869726 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:29:50 crc kubenswrapper[4813]: I0219 18:29:50.402644 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 18:27:48.694278778 +0000 UTC Feb 19 18:29:50 crc kubenswrapper[4813]: I0219 18:29:50.441007 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 19 18:29:50 crc kubenswrapper[4813]: I0219 18:29:50.441282 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:50 crc kubenswrapper[4813]: I0219 18:29:50.443116 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:50 crc kubenswrapper[4813]: I0219 18:29:50.443168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:50 crc kubenswrapper[4813]: I0219 18:29:50.443186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:50 crc kubenswrapper[4813]: I0219 18:29:50.591641 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:50 crc kubenswrapper[4813]: I0219 18:29:50.592938 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:50 crc kubenswrapper[4813]: I0219 18:29:50.593030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 18:29:50 crc kubenswrapper[4813]: I0219 18:29:50.593053 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:51 crc kubenswrapper[4813]: I0219 18:29:51.403464 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:54:53.280120553 +0000 UTC Feb 19 18:29:51 crc kubenswrapper[4813]: E0219 18:29:51.537163 4813 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 18:29:52 crc kubenswrapper[4813]: I0219 18:29:52.101017 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:29:52 crc kubenswrapper[4813]: I0219 18:29:52.101239 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:52 crc kubenswrapper[4813]: I0219 18:29:52.102772 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:52 crc kubenswrapper[4813]: I0219 18:29:52.102823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:52 crc kubenswrapper[4813]: I0219 18:29:52.102855 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:52 crc kubenswrapper[4813]: I0219 18:29:52.109645 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:29:52 crc kubenswrapper[4813]: I0219 18:29:52.403920 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 22:49:05.254772914 +0000 UTC Feb 19 18:29:52 crc 
kubenswrapper[4813]: I0219 18:29:52.599154 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:52 crc kubenswrapper[4813]: I0219 18:29:52.600625 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:52 crc kubenswrapper[4813]: I0219 18:29:52.600667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:52 crc kubenswrapper[4813]: I0219 18:29:52.600686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:53 crc kubenswrapper[4813]: I0219 18:29:53.404261 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:09:21.56100834 +0000 UTC Feb 19 18:29:54 crc kubenswrapper[4813]: I0219 18:29:54.404593 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:48:53.602906921 +0000 UTC Feb 19 18:29:54 crc kubenswrapper[4813]: I0219 18:29:54.905822 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 19 18:29:54 crc kubenswrapper[4813]: I0219 18:29:54.906084 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:54 crc kubenswrapper[4813]: I0219 18:29:54.907906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:54 crc kubenswrapper[4813]: I0219 18:29:54.907937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:54 crc kubenswrapper[4813]: I0219 18:29:54.907949 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 19 18:29:55 crc kubenswrapper[4813]: I0219 18:29:55.393465 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 19 18:29:55 crc kubenswrapper[4813]: I0219 18:29:55.405025 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:35:30.544033697 +0000 UTC Feb 19 18:29:55 crc kubenswrapper[4813]: I0219 18:29:55.892080 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 18:29:55 crc kubenswrapper[4813]: I0219 18:29:55.892164 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 18:29:55 crc kubenswrapper[4813]: I0219 18:29:55.910147 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 18:29:55 crc kubenswrapper[4813]: I0219 18:29:55.910414 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 18:29:56 crc kubenswrapper[4813]: I0219 18:29:56.405580 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:56:20.92773475 +0000 UTC Feb 19 18:29:57 crc kubenswrapper[4813]: I0219 18:29:57.407020 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:10:49.554568668 +0000 UTC Feb 19 18:29:57 crc kubenswrapper[4813]: I0219 18:29:57.979744 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:57 crc kubenswrapper[4813]: I0219 18:29:57.980328 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:57 crc kubenswrapper[4813]: I0219 18:29:57.982066 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:57 crc kubenswrapper[4813]: I0219 18:29:57.982136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:57 crc kubenswrapper[4813]: I0219 18:29:57.982155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:57 crc kubenswrapper[4813]: I0219 18:29:57.987409 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:29:58 crc kubenswrapper[4813]: I0219 18:29:58.407453 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:27:06.200190356 +0000 UTC Feb 19 18:29:58 crc kubenswrapper[4813]: I0219 18:29:58.618799 4813 prober_manager.go:312] "Failed to trigger 
a manual run" probe="Readiness" Feb 19 18:29:58 crc kubenswrapper[4813]: I0219 18:29:58.618875 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:29:58 crc kubenswrapper[4813]: I0219 18:29:58.620470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:29:58 crc kubenswrapper[4813]: I0219 18:29:58.620535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:29:58 crc kubenswrapper[4813]: I0219 18:29:58.620554 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:29:59 crc kubenswrapper[4813]: I0219 18:29:59.082794 4813 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 18:29:59 crc kubenswrapper[4813]: I0219 18:29:59.082886 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 18:29:59 crc kubenswrapper[4813]: I0219 18:29:59.407750 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 16:10:16.361098203 +0000 UTC Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.408802 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 
+0000 UTC, rotation deadline is 2025-12-18 00:47:48.457762714 +0000 UTC Feb 19 18:30:00 crc kubenswrapper[4813]: E0219 18:30:00.907663 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.910484 4813 trace.go:236] Trace[1780787299]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 18:29:50.381) (total time: 10528ms): Feb 19 18:30:00 crc kubenswrapper[4813]: Trace[1780787299]: ---"Objects listed" error: 10528ms (18:30:00.910) Feb 19 18:30:00 crc kubenswrapper[4813]: Trace[1780787299]: [10.528485936s] [10.528485936s] END Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.910799 4813 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.911110 4813 trace.go:236] Trace[24978088]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 18:29:48.253) (total time: 12657ms): Feb 19 18:30:00 crc kubenswrapper[4813]: Trace[24978088]: ---"Objects listed" error: 12657ms (18:30:00.911) Feb 19 18:30:00 crc kubenswrapper[4813]: Trace[24978088]: [12.657544999s] [12.657544999s] END Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.911591 4813 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 18:30:00 crc kubenswrapper[4813]: E0219 18:30:00.912911 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.913704 4813 trace.go:236] Trace[825022992]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 
(19-Feb-2026 18:29:48.363) (total time: 12550ms): Feb 19 18:30:00 crc kubenswrapper[4813]: Trace[825022992]: ---"Objects listed" error: 12549ms (18:30:00.913) Feb 19 18:30:00 crc kubenswrapper[4813]: Trace[825022992]: [12.550056481s] [12.550056481s] END Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.913777 4813 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.914426 4813 trace.go:236] Trace[2079861579]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 18:29:48.299) (total time: 12614ms): Feb 19 18:30:00 crc kubenswrapper[4813]: Trace[2079861579]: ---"Objects listed" error: 12614ms (18:30:00.914) Feb 19 18:30:00 crc kubenswrapper[4813]: Trace[2079861579]: [12.614877367s] [12.614877367s] END Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.914487 4813 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.915190 4813 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.929981 4813 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.950574 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52088->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.950606 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get 
\"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49410->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.950656 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52088->192.168.126.11:17697: read: connection reset by peer" Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.950751 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49410->192.168.126.11:17697: read: connection reset by peer" Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.951272 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.951301 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.962463 4813 csr.go:261] certificate signing request csr-x9h42 is approved, waiting to be issued Feb 19 18:30:00 crc kubenswrapper[4813]: I0219 18:30:00.969296 4813 csr.go:257] certificate signing request 
csr-x9h42 is issued Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.236650 4813 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 18:30:01 crc kubenswrapper[4813]: W0219 18:30:01.236857 4813 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 19 18:30:01 crc kubenswrapper[4813]: W0219 18:30:01.236893 4813 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 19 18:30:01 crc kubenswrapper[4813]: W0219 18:30:01.236896 4813 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.236898 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.69:42194->38.102.83.69:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1895b950126e33cf openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 18:29:41.997892559 +0000 UTC m=+1.223333140,LastTimestamp:2026-02-19 18:29:41.997892559 +0000 UTC m=+1.223333140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 18:30:01 crc kubenswrapper[4813]: W0219 18:30:01.237015 4813 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.388125 4813 apiserver.go:52] "Watching apiserver" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.400317 4813 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.400736 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-vpk9w","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.401106 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.401248 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.401327 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.401707 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.401776 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.401872 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.401930 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.402114 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.402187 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.402236 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vpk9w" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.402664 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.403295 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.403315 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.404618 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.404677 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.404682 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.405232 4813 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.405352 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.405376 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.405474 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.405796 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.407102 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.409173 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 22:27:11.841921575 +0000 UTC Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.423268 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.435562 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.448180 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.455456 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.462146 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.476592 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.490177 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.497185 4813 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.503160 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.513978 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.519260 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.519409 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.519496 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.519599 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.519758 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.519835 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.519906 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.520033 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.520011 4813 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.520124 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:30:02.020049735 +0000 UTC m=+21.245490356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.520320 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.520248 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.520747 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.520867 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.521076 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.521144 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.521144 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.521210 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.521382 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.521576 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.521742 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.521821 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.521822 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.521890 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.521999 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522072 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522105 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522140 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522168 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522193 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522188 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522220 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522250 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522280 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522310 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522334 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522360 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522387 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522411 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522416 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522434 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522460 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522485 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522512 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522536 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522558 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522582 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522605 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522592 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522634 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522666 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522665 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522676 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522695 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522724 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522755 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522784 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522812 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522847 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522872 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522894 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522895 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522936 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522939 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.522990 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523019 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523044 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523070 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523076 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523101 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523138 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523173 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523198 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" 
(OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523202 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523294 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523322 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523351 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523377 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" 
(UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523402 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523426 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523451 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523474 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523496 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523505 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523518 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523536 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523540 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523553 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523572 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523852 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523878 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523891 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523895 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.524161 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.524207 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.524239 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.524242 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.524311 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.524386 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.524564 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.524649 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.524672 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.525516 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.523573 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.525889 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.525993 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.526079 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.526153 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.526223 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.526296 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.526383 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.526453 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.526525 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.526594 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 18:30:01 crc 
kubenswrapper[4813]: I0219 18:30:01.526670 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.527157 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.527239 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.527331 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.528584 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.528650 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.528696 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.528745 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.528792 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.528834 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.528873 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " 
Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.528916 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.528988 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.529033 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.529072 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.529107 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.529150 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.529186 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.529229 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.529269 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.529326 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.529367 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 19 
18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.529413 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.529452 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.529489 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.529574 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.529616 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530053 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530102 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530150 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530189 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530224 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530263 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 
18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530299 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530335 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530373 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530413 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.526026 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530472 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.526398 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.526680 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.526781 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.526806 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.527671 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.527805 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.527908 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.527929 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.527971 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.528014 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.528027 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.528380 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.529926 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530271 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530433 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530446 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530457 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530585 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530451 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530792 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530829 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530858 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530884 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530912 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.530939 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531016 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531046 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531076 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531104 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 18:30:01 
crc kubenswrapper[4813]: I0219 18:30:01.531130 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531158 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531189 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531214 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531241 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531266 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531295 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531305 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531320 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.528318 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531349 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531374 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531399 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531423 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531446 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531471 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531497 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531522 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531548 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531572 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531596 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531619 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531642 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531666 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531692 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531715 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531739 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531765 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531789 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531814 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 18:30:01 crc 
kubenswrapper[4813]: I0219 18:30:01.531841 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531869 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531884 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531894 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.532189 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.532459 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.532721 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.532903 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.533020 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.533315 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.533782 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.534109 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.534310 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.531897 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.534638 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.534690 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.534724 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.534729 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.534811 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.534862 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.534896 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.534925 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.534917 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.534976 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.536085 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.536426 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.536750 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.536908 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.536922 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.536932 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.537072 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.537206 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.537278 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538106 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538121 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538183 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538332 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538372 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538401 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538490 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538569 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538593 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538678 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538712 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538735 4813 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538749 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538819 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538851 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538879 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538904 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538931 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538978 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539005 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539068 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539103 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539132 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539157 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539206 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539232 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539258 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539284 
4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539310 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539335 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539361 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539387 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539412 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539441 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539465 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539490 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539517 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539586 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m6kg\" (UniqueName: \"kubernetes.io/projected/0640f474-4d6f-4a87-9750-dda71e69dd95-kube-api-access-7m6kg\") pod \"node-resolver-vpk9w\" (UID: \"0640f474-4d6f-4a87-9750-dda71e69dd95\") " pod="openshift-dns/node-resolver-vpk9w" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539629 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539664 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539690 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0640f474-4d6f-4a87-9750-dda71e69dd95-hosts-file\") pod \"node-resolver-vpk9w\" (UID: \"0640f474-4d6f-4a87-9750-dda71e69dd95\") " pod="openshift-dns/node-resolver-vpk9w" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539718 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539747 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 
18:30:01.539773 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539804 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539832 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539860 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539887 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539915 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539943 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539989 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540012 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540036 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540140 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540154 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540169 4813 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540184 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540197 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540210 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540223 4813 reconciler_common.go:293] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540237 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540250 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540263 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540276 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540288 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540302 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540315 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540329 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540343 4813 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540357 4813 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540370 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540383 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540400 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540415 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540429 4813 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540441 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540454 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540466 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540479 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540492 4813 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540562 4813 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" 
DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540576 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540589 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540602 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540616 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540632 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540647 4813 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540662 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540677 4813 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540692 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540708 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540724 4813 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540739 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540754 4813 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540767 4813 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540779 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540792 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540806 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540819 4813 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540833 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540846 4813 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540859 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540873 4813 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node 
\"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540887 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540899 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540912 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540924 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540937 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540969 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540981 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: 
I0219 18:30:01.540993 4813 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541007 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541020 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541032 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541046 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541061 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541074 4813 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541086 4813 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541098 4813 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541109 4813 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541121 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541133 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541145 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541159 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541171 4813 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541184 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541257 4813 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541317 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541329 4813 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541372 4813 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541386 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541449 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc 
kubenswrapper[4813]: I0219 18:30:01.541462 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541474 4813 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541487 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541500 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541513 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541526 4813 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539101 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539292 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.538635 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539421 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539469 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539518 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539571 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.539589 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540156 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540701 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.540793 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541037 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541291 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541542 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.541613 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.542311 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.542528 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.542817 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.543069 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.543162 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.543272 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.543379 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.543434 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.543467 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.543444 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.543568 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.543769 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.543782 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.544161 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.544192 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.544931 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.545294 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.545841 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.548665 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.545300 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.545461 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.545479 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.545748 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.545944 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.545882 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.546024 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.546267 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.546322 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.546407 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.546560 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.547030 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.547187 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.547223 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.547270 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.548977 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.547367 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.549102 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:02.049076394 +0000 UTC m=+21.274516975 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.547478 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.547551 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.547566 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.547845 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.547999 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.548165 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.548228 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.548243 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.548495 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.548644 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.548789 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.549299 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:02.04928536 +0000 UTC m=+21.274725931 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.549628 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.549700 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.549739 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.549769 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.550315 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.550415 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.550432 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.550735 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.551170 4813 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.551815 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.552053 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.552544 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.552580 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.552674 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.552713 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.552720 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.553485 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.560898 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.562795 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.563263 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.566337 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.566377 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.566397 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.566472 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:02.066448427 +0000 UTC m=+21.291889008 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.567760 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.568256 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.568318 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.568350 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:01 crc kubenswrapper[4813]: E0219 18:30:01.568494 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:02.068439466 +0000 UTC m=+21.293880207 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.568781 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.571083 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.571917 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.575644 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.576455 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: 
"7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.576656 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.577383 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.578860 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.579216 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.579569 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.579819 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.583143 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.583925 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.584086 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.584904 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.585133 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.585343 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.586439 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.586567 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.587122 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.588358 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.588494 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.588591 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.589286 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.589671 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.589686 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.589881 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.590662 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.596923 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.597061 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.597127 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.599045 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.599250 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.599507 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.600768 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.601079 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.610797 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.613469 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.617010 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.625800 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.629244 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.630417 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.633031 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.633114 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b" exitCode=255 Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.633158 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b"} Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642345 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642371 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0640f474-4d6f-4a87-9750-dda71e69dd95-hosts-file\") pod \"node-resolver-vpk9w\" (UID: \"0640f474-4d6f-4a87-9750-dda71e69dd95\") " pod="openshift-dns/node-resolver-vpk9w" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642413 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642421 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642471 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m6kg\" (UniqueName: \"kubernetes.io/projected/0640f474-4d6f-4a87-9750-dda71e69dd95-kube-api-access-7m6kg\") pod \"node-resolver-vpk9w\" (UID: \"0640f474-4d6f-4a87-9750-dda71e69dd95\") " pod="openshift-dns/node-resolver-vpk9w" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642519 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642533 4813 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642543 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642547 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642569 4813 reconciler_common.go:293] "Volume detached 
for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642579 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642588 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642596 4813 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642605 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642613 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642621 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642632 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642641 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642650 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642659 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642667 4813 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642675 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642684 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642692 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642701 4813 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642709 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642719 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642727 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642735 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642744 4813 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642753 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642763 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642773 4813 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642783 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642792 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642801 4813 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642811 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642819 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642829 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642829 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0640f474-4d6f-4a87-9750-dda71e69dd95-hosts-file\") pod \"node-resolver-vpk9w\" (UID: \"0640f474-4d6f-4a87-9750-dda71e69dd95\") " pod="openshift-dns/node-resolver-vpk9w" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642837 4813 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642861 4813 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642874 4813 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642887 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642900 4813 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642912 4813 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642924 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642936 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642974 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642987 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.642999 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643013 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" 
DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643025 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643037 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643049 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643061 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643074 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643085 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643097 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: 
I0219 18:30:01.643109 4813 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643122 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643133 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643144 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643156 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643167 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643179 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643190 4813 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643203 4813 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643214 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643227 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643238 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643249 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643262 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643273 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 
18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643285 4813 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643296 4813 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643308 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643319 4813 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643331 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643343 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643354 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643366 4813 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643377 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643390 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643402 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643414 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643426 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643438 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643448 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on 
node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643460 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643471 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643483 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643494 4813 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643506 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643517 4813 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643528 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: 
I0219 18:30:01.643538 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643549 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643561 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643572 4813 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643583 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643594 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643605 4813 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 
18:30:01.643618 4813 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643630 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643641 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643655 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643666 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643677 4813 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643689 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643700 4813 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643711 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643724 4813 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643736 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643940 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.643980 4813 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.653818 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.655994 4813 scope.go:117] "RemoveContainer" containerID="f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.657244 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.663138 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.663240 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m6kg\" (UniqueName: \"kubernetes.io/projected/0640f474-4d6f-4a87-9750-dda71e69dd95-kube-api-access-7m6kg\") pod \"node-resolver-vpk9w\" (UID: \"0640f474-4d6f-4a87-9750-dda71e69dd95\") " pod="openshift-dns/node-resolver-vpk9w" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 
18:30:01.673761 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.682338 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.693497 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.703332 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.713643 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.715897 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.725622 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vpk9w" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.732304 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.738821 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 18:30:01 crc kubenswrapper[4813]: W0219 18:30:01.739101 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0640f474_4d6f_4a87_9750_dda71e69dd95.slice/crio-9536ab4bc24fc6cf23ffbf01feff3532e38da1388af430b606c70d610f31b81e WatchSource:0}: Error finding container 9536ab4bc24fc6cf23ffbf01feff3532e38da1388af430b606c70d610f31b81e: Status 404 returned error can't find the container with id 9536ab4bc24fc6cf23ffbf01feff3532e38da1388af430b606c70d610f31b81e Feb 19 18:30:01 crc kubenswrapper[4813]: W0219 18:30:01.768042 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-2d5e4211ccc8beeb002e55268a4cdcd51bcfaefe25b6b8efb98a846f941e037f WatchSource:0}: Error finding container 2d5e4211ccc8beeb002e55268a4cdcd51bcfaefe25b6b8efb98a846f941e037f: Status 404 returned error can't find the container with id 2d5e4211ccc8beeb002e55268a4cdcd51bcfaefe25b6b8efb98a846f941e037f Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.971295 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-19 18:25:00 +0000 UTC, rotation deadline is 2026-12-12 11:56:21.000432407 +0000 UTC Feb 19 18:30:01 crc kubenswrapper[4813]: I0219 18:30:01.971713 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7097h26m19.028723001s for next certificate 
rotation Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.047373 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:30:02 crc kubenswrapper[4813]: E0219 18:30:02.047577 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:30:03.047547338 +0000 UTC m=+22.272987879 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.148823 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.148876 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.148899 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.148920 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:02 crc kubenswrapper[4813]: E0219 18:30:02.148937 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:30:02 crc kubenswrapper[4813]: E0219 18:30:02.149029 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:03.149011366 +0000 UTC m=+22.374451907 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:30:02 crc kubenswrapper[4813]: E0219 18:30:02.149037 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:30:02 crc kubenswrapper[4813]: E0219 18:30:02.149066 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:30:02 crc kubenswrapper[4813]: E0219 18:30:02.149077 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:02 crc kubenswrapper[4813]: E0219 18:30:02.149048 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:30:02 crc kubenswrapper[4813]: E0219 18:30:02.149131 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:30:02 crc kubenswrapper[4813]: E0219 18:30:02.149123 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-19 18:30:03.149106609 +0000 UTC m=+22.374547150 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:02 crc kubenswrapper[4813]: E0219 18:30:02.149171 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:30:02 crc kubenswrapper[4813]: E0219 18:30:02.149187 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:03.149172281 +0000 UTC m=+22.374612812 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:30:02 crc kubenswrapper[4813]: E0219 18:30:02.149192 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:02 crc kubenswrapper[4813]: E0219 18:30:02.149278 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:03.149250713 +0000 UTC m=+22.374691325 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.409859 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:43:35.974508185 +0000 UTC Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.558742 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-jlz6v"] Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.559948 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.561195 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-hksqw"] Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.561555 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.561579 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-gfswm"] Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.562094 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.562796 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.563106 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.563304 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.563564 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.563712 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.563774 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.564278 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.566087 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.566298 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.566470 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.566783 4813 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.566991 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.581827 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 
requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.596502 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.608494 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.620528 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.633857 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.637244 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vpk9w" event={"ID":"0640f474-4d6f-4a87-9750-dda71e69dd95","Type":"ContainerStarted","Data":"c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163"} Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.637291 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vpk9w" event={"ID":"0640f474-4d6f-4a87-9750-dda71e69dd95","Type":"ContainerStarted","Data":"9536ab4bc24fc6cf23ffbf01feff3532e38da1388af430b606c70d610f31b81e"} Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 
18:30:02.638979 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.640727 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b"} Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.640994 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.642662 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7"} Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.642696 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b"} Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.642710 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9db6133df4b247fd13cea297045595fc8ed1729534aef09dc5e7ec99d597a8de"} Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.644100 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6"} Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.644144 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"511f814be652de741e5ac9feebd341437b74dd7dfdf8bc171f38efcbcc3c2d2c"} Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.645306 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2d5e4211ccc8beeb002e55268a4cdcd51bcfaefe25b6b8efb98a846f941e037f"} Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.646306 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.653657 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-hostroot\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.653693 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f58d0592-08dd-49db-8c98-f262b9808e0e-system-cni-dir\") pod 
\"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.653719 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-system-cni-dir\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.653741 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f58d0592-08dd-49db-8c98-f262b9808e0e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.653763 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-var-lib-cni-multus\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.653783 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-multus-socket-dir-parent\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.653839 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-run-multus-certs\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.653926 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-var-lib-cni-bin\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.653976 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f58d0592-08dd-49db-8c98-f262b9808e0e-os-release\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.653995 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f58d0592-08dd-49db-8c98-f262b9808e0e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654013 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b099cefb-f2e5-4f3f-976c-7433dba77ef2-multus-daemon-config\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654031 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-6zdh8\" (UniqueName: \"kubernetes.io/projected/f58d0592-08dd-49db-8c98-f262b9808e0e-kube-api-access-6zdh8\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654055 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-run-k8s-cni-cncf-io\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654073 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-cnibin\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654104 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-multus-conf-dir\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654139 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7qdz\" (UniqueName: \"kubernetes.io/projected/b099cefb-f2e5-4f3f-976c-7433dba77ef2-kube-api-access-n7qdz\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654169 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f58d0592-08dd-49db-8c98-f262b9808e0e-cni-binary-copy\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654193 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b099cefb-f2e5-4f3f-976c-7433dba77ef2-cni-binary-copy\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654215 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-var-lib-kubelet\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654233 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/481977a2-7072-4176-abd4-863cb6104d70-rootfs\") pod \"machine-config-daemon-gfswm\" (UID: \"481977a2-7072-4176-abd4-863cb6104d70\") " pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654256 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-multus-cni-dir\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654283 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-os-release\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654302 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/481977a2-7072-4176-abd4-863cb6104d70-mcd-auth-proxy-config\") pod \"machine-config-daemon-gfswm\" (UID: \"481977a2-7072-4176-abd4-863cb6104d70\") " pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654361 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-run-netns\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654715 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-etc-kubernetes\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654760 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/481977a2-7072-4176-abd4-863cb6104d70-proxy-tls\") pod \"machine-config-daemon-gfswm\" (UID: \"481977a2-7072-4176-abd4-863cb6104d70\") " pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654784 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n76nc\" (UniqueName: \"kubernetes.io/projected/481977a2-7072-4176-abd4-863cb6104d70-kube-api-access-n76nc\") pod \"machine-config-daemon-gfswm\" (UID: \"481977a2-7072-4176-abd4-863cb6104d70\") " pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.654805 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f58d0592-08dd-49db-8c98-f262b9808e0e-cnibin\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.655472 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.662374 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.670287 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.680509 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.693206 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.705742 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.714625 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.725718 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.739825 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.754434 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.755719 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-var-lib-cni-multus\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.755807 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-multus-socket-dir-parent\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.755852 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-var-lib-cni-multus\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.755861 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-run-multus-certs\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.755921 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-run-multus-certs\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.755945 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-var-lib-cni-bin\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.755982 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f58d0592-08dd-49db-8c98-f262b9808e0e-os-release\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756004 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/f58d0592-08dd-49db-8c98-f262b9808e0e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756018 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-var-lib-cni-bin\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756026 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b099cefb-f2e5-4f3f-976c-7433dba77ef2-multus-daemon-config\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756060 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zdh8\" (UniqueName: \"kubernetes.io/projected/f58d0592-08dd-49db-8c98-f262b9808e0e-kube-api-access-6zdh8\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756080 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-run-k8s-cni-cncf-io\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756114 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-cnibin\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756130 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-multus-conf-dir\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756145 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7qdz\" (UniqueName: \"kubernetes.io/projected/b099cefb-f2e5-4f3f-976c-7433dba77ef2-kube-api-access-n7qdz\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756160 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f58d0592-08dd-49db-8c98-f262b9808e0e-cni-binary-copy\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756173 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b099cefb-f2e5-4f3f-976c-7433dba77ef2-cni-binary-copy\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756189 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-multus-socket-dir-parent\") pod \"multus-hksqw\" (UID: 
\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756214 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-var-lib-kubelet\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756198 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-var-lib-kubelet\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756260 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/481977a2-7072-4176-abd4-863cb6104d70-rootfs\") pod \"machine-config-daemon-gfswm\" (UID: \"481977a2-7072-4176-abd4-863cb6104d70\") " pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756298 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-multus-cni-dir\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756329 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-os-release\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 
18:30:02.756345 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-run-k8s-cni-cncf-io\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756359 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/481977a2-7072-4176-abd4-863cb6104d70-mcd-auth-proxy-config\") pod \"machine-config-daemon-gfswm\" (UID: \"481977a2-7072-4176-abd4-863cb6104d70\") " pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756391 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-run-netns\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756425 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n76nc\" (UniqueName: \"kubernetes.io/projected/481977a2-7072-4176-abd4-863cb6104d70-kube-api-access-n76nc\") pod \"machine-config-daemon-gfswm\" (UID: \"481977a2-7072-4176-abd4-863cb6104d70\") " pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756482 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-etc-kubernetes\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756512 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/481977a2-7072-4176-abd4-863cb6104d70-proxy-tls\") pod \"machine-config-daemon-gfswm\" (UID: \"481977a2-7072-4176-abd4-863cb6104d70\") " pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756543 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f58d0592-08dd-49db-8c98-f262b9808e0e-cnibin\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756592 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b099cefb-f2e5-4f3f-976c-7433dba77ef2-multus-daemon-config\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756626 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-host-run-netns\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756639 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-hostroot\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756669 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/481977a2-7072-4176-abd4-863cb6104d70-rootfs\") pod \"machine-config-daemon-gfswm\" (UID: \"481977a2-7072-4176-abd4-863cb6104d70\") " pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756673 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f58d0592-08dd-49db-8c98-f262b9808e0e-system-cni-dir\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756720 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-system-cni-dir\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756749 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f58d0592-08dd-49db-8c98-f262b9808e0e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756757 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-multus-cni-dir\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.756816 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/f58d0592-08dd-49db-8c98-f262b9808e0e-os-release\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.757197 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-etc-kubernetes\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.757224 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-hostroot\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.757372 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f58d0592-08dd-49db-8c98-f262b9808e0e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.757573 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f58d0592-08dd-49db-8c98-f262b9808e0e-system-cni-dir\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.757655 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-os-release\") pod 
\"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.757707 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-system-cni-dir\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.757730 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f58d0592-08dd-49db-8c98-f262b9808e0e-cni-binary-copy\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.757731 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f58d0592-08dd-49db-8c98-f262b9808e0e-cnibin\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.757749 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-cnibin\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.757767 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b099cefb-f2e5-4f3f-976c-7433dba77ef2-multus-conf-dir\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 
18:30:02.757859 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f58d0592-08dd-49db-8c98-f262b9808e0e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.757988 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/481977a2-7072-4176-abd4-863cb6104d70-mcd-auth-proxy-config\") pod \"machine-config-daemon-gfswm\" (UID: \"481977a2-7072-4176-abd4-863cb6104d70\") " pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.758056 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b099cefb-f2e5-4f3f-976c-7433dba77ef2-cni-binary-copy\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.761352 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/481977a2-7072-4176-abd4-863cb6104d70-proxy-tls\") pod \"machine-config-daemon-gfswm\" (UID: \"481977a2-7072-4176-abd4-863cb6104d70\") " pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.765326 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.772525 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n76nc\" (UniqueName: \"kubernetes.io/projected/481977a2-7072-4176-abd4-863cb6104d70-kube-api-access-n76nc\") pod \"machine-config-daemon-gfswm\" (UID: \"481977a2-7072-4176-abd4-863cb6104d70\") " pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.773807 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.774885 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7qdz\" (UniqueName: \"kubernetes.io/projected/b099cefb-f2e5-4f3f-976c-7433dba77ef2-kube-api-access-n7qdz\") pod \"multus-hksqw\" (UID: \"b099cefb-f2e5-4f3f-976c-7433dba77ef2\") " pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.783254 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.783511 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zdh8\" (UniqueName: \"kubernetes.io/projected/f58d0592-08dd-49db-8c98-f262b9808e0e-kube-api-access-6zdh8\") pod \"multus-additional-cni-plugins-jlz6v\" (UID: \"f58d0592-08dd-49db-8c98-f262b9808e0e\") " pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.829460 4813 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.910558 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" Feb 19 18:30:02 crc kubenswrapper[4813]: W0219 18:30:02.919801 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf58d0592_08dd_49db_8c98_f262b9808e0e.slice/crio-e9453a29bd50c61da780de2d9e293702ff62213bac017a96f8016501fb5c3857 WatchSource:0}: Error finding container e9453a29bd50c61da780de2d9e293702ff62213bac017a96f8016501fb5c3857: Status 404 returned error can't find the container with id e9453a29bd50c61da780de2d9e293702ff62213bac017a96f8016501fb5c3857 Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.926550 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hksqw" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.927553 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pc9t2"] Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.928500 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.930202 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.931547 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.932072 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.932376 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.932412 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.932599 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.934159 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.937730 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.939924 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: W0219 18:30:02.940771 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb099cefb_f2e5_4f3f_976c_7433dba77ef2.slice/crio-3459238040e4e8132a1fb337535ee0c41a62242ba244b4e51b9460c66480ceae WatchSource:0}: Error finding container 3459238040e4e8132a1fb337535ee0c41a62242ba244b4e51b9460c66480ceae: Status 404 returned error can't find the container with id 3459238040e4e8132a1fb337535ee0c41a62242ba244b4e51b9460c66480ceae Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.953517 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: W0219 18:30:02.954581 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod481977a2_7072_4176_abd4_863cb6104d70.slice/crio-f11500821b503db46283565da9abbe5a724eabaca0df4f4874d0f7943e5dcf9a WatchSource:0}: Error finding container f11500821b503db46283565da9abbe5a724eabaca0df4f4874d0f7943e5dcf9a: Status 404 returned error can't find the container with id f11500821b503db46283565da9abbe5a724eabaca0df4f4874d0f7943e5dcf9a Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.961677 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.971998 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.984319 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:02 crc kubenswrapper[4813]: I0219 18:30:02.993485 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.003943 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.026274 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.037496 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.048562 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.058058 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061261 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061388 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-etc-openvswitch\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.061464 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 18:30:05.061433077 +0000 UTC m=+24.286873658 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061543 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-systemd-units\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061578 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-node-log\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061601 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-run-netns\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061617 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-cni-netd\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061647 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-run-ovn-kubernetes\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061665 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovnkube-config\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061682 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovnkube-script-lib\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061727 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf9qf\" (UniqueName: \"kubernetes.io/projected/928c75f4-605c-4556-8c29-14ff4bdf6f5e-kube-api-access-cf9qf\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061759 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-openvswitch\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061862 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-var-lib-openvswitch\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061905 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-ovn\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061929 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovn-node-metrics-cert\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061975 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-cni-bin\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.061998 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-env-overrides\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.062020 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-log-socket\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.062048 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-systemd\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.062079 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.062111 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-kubelet\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc 
kubenswrapper[4813]: I0219 18:30:03.062134 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-slash\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.067234 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.162716 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-openvswitch\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.162752 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-var-lib-openvswitch\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.162771 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-ovn\") pod \"ovnkube-node-pc9t2\" (UID: 
\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.162790 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.162805 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovn-node-metrics-cert\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.162807 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-openvswitch\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.162845 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-cni-bin\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.162820 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-cni-bin\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.162897 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.162935 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:05.162922826 +0000 UTC m=+24.388363367 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.162892 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-env-overrides\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163025 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-var-lib-openvswitch\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163045 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-log-socket\") 
pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.162973 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-ovn\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163082 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-systemd\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163102 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-log-socket\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163107 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163137 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pc9t2\" 
(UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163166 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-kubelet\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163172 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-systemd\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163195 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-slash\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163224 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-etc-openvswitch\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163257 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-kubelet\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 
crc kubenswrapper[4813]: I0219 18:30:03.163278 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-slash\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163267 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.163383 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.163408 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.163424 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.163446 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.163459 4813 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.163466 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.163479 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:05.163460212 +0000 UTC m=+24.388900773 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.163501 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:05.163490653 +0000 UTC m=+24.388931274 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163500 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-etc-openvswitch\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163388 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163557 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-systemd-units\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163586 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-node-log\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: 
I0219 18:30:03.163616 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-cni-netd\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163644 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-node-log\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163651 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163678 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-cni-netd\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163682 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-run-netns\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163704 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-env-overrides\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.163737 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163712 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovnkube-script-lib\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.163774 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:05.163760861 +0000 UTC m=+24.389201502 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163793 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-run-ovn-kubernetes\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163811 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovnkube-config\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163813 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-run-netns\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163826 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf9qf\" (UniqueName: \"kubernetes.io/projected/928c75f4-605c-4556-8c29-14ff4bdf6f5e-kube-api-access-cf9qf\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163618 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-systemd-units\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.163876 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-run-ovn-kubernetes\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.164568 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovnkube-script-lib\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.164708 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovnkube-config\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.168467 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovn-node-metrics-cert\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.180082 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cf9qf\" (UniqueName: \"kubernetes.io/projected/928c75f4-605c-4556-8c29-14ff4bdf6f5e-kube-api-access-cf9qf\") pod \"ovnkube-node-pc9t2\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.251137 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:03 crc kubenswrapper[4813]: W0219 18:30:03.262736 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod928c75f4_605c_4556_8c29_14ff4bdf6f5e.slice/crio-a720de7497231498de63ba2981716e9712d34d7099ecb3e56cbb4e9370af7958 WatchSource:0}: Error finding container a720de7497231498de63ba2981716e9712d34d7099ecb3e56cbb4e9370af7958: Status 404 returned error can't find the container with id a720de7497231498de63ba2981716e9712d34d7099ecb3e56cbb4e9370af7958 Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.410559 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:44:47.552434335 +0000 UTC Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.470571 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.470637 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.470584 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.470701 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.470792 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:03 crc kubenswrapper[4813]: E0219 18:30:03.470872 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.475255 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.475866 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.477117 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.477743 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.478699 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.479218 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.479741 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.480654 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.481286 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.482205 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.482693 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.483680 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.484158 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.484633 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.485490 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.486021 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.487066 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.487467 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.488021 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.488948 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.489484 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.490449 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.490853 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.491829 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.492282 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.492849 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.493864 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.494328 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.495238 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.495927 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.496923 4813 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.497058 4813 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.498606 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.499631 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.500087 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.501457 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.502114 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.503007 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.503577 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.504551 4813 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.505150 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.506084 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.506699 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.507618 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.508107 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.508941 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.509425 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.510461 4813 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.510949 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.511739 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.512214 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.513219 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.513755 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.514222 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.649736 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" 
event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f"} Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.649782 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b"} Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.649797 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"f11500821b503db46283565da9abbe5a724eabaca0df4f4874d0f7943e5dcf9a"} Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.652100 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hksqw" event={"ID":"b099cefb-f2e5-4f3f-976c-7433dba77ef2","Type":"ContainerStarted","Data":"3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094"} Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.652304 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hksqw" event={"ID":"b099cefb-f2e5-4f3f-976c-7433dba77ef2","Type":"ContainerStarted","Data":"3459238040e4e8132a1fb337535ee0c41a62242ba244b4e51b9460c66480ceae"} Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.653577 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" event={"ID":"f58d0592-08dd-49db-8c98-f262b9808e0e","Type":"ContainerStarted","Data":"b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a"} Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.653608 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" 
event={"ID":"f58d0592-08dd-49db-8c98-f262b9808e0e","Type":"ContainerStarted","Data":"e9453a29bd50c61da780de2d9e293702ff62213bac017a96f8016501fb5c3857"} Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.654997 4813 generic.go:334] "Generic (PLEG): container finished" podID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerID="0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65" exitCode=0 Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.655097 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerDied","Data":"0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65"} Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.655155 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerStarted","Data":"a720de7497231498de63ba2981716e9712d34d7099ecb3e56cbb4e9370af7958"} Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.670064 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.689486 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.708236 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.723860 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.742663 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.766202 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.781240 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.794908 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.807614 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.823579 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173
a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.842632 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.857694 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.879025 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.899734 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.912398 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173
a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.927612 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.939421 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.951870 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.963851 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.975884 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:03 crc kubenswrapper[4813]: I0219 18:30:03.991217 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.005915 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.020168 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.030414 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.411101 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:00:45.358217179 +0000 UTC Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.658679 4813 generic.go:334] "Generic (PLEG): container finished" podID="f58d0592-08dd-49db-8c98-f262b9808e0e" containerID="b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a" exitCode=0 Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.658757 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" event={"ID":"f58d0592-08dd-49db-8c98-f262b9808e0e","Type":"ContainerDied","Data":"b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a"} Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.661970 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerStarted","Data":"6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504"} Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.662000 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerStarted","Data":"8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81"} Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.662009 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerStarted","Data":"e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a"} Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.662017 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerStarted","Data":"ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062"} Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.662025 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerStarted","Data":"a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099"} Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.662033 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerStarted","Data":"c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949"} Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.663722 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0"} Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.671830 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.692422 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.703465 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.714659 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.727027 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.745407 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173
a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.758790 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.771339 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.781756 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.792373 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.805367 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.817705 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.826304 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f00
0b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.838113 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.848063 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.858815 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.870437 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.883084 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.893909 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.905754 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.919712 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.931421 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.941532 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.948849 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.953775 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.956437 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.967260 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:04 crc kubenswrapper[4813]: I0219 18:30:04.983786 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.001549 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.011802 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.024560 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.034809 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.046303 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.060327 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.074267 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.080842 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.081010 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:30:09.080987922 +0000 UTC m=+28.306428483 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.091096 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.101454 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.109530 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.119094 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.128188 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf
95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.157534 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-55mxf"] Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.157897 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-55mxf" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.162652 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.162712 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.162939 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.163206 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.178272 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.182150 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.182209 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.182255 
4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.182302 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.182309 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.182321 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.182333 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.182377 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-19 18:30:09.182362448 +0000 UTC m=+28.407802989 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.182424 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.182469 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.182505 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.182521 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.182520 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.182549 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:09.182527213 +0000 UTC m=+28.407967784 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.182584 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:09.182565664 +0000 UTC m=+28.408006215 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.182624 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:09.182594275 +0000 UTC m=+28.408034846 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.189599 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.200757 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.210742 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.226387 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.244797 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.276995 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.283397 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af5ac962-f7d8-4759-b913-b3784e37a704-host\") pod \"node-ca-55mxf\" (UID: \"af5ac962-f7d8-4759-b913-b3784e37a704\") " pod="openshift-image-registry/node-ca-55mxf" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.283448 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvvw7\" (UniqueName: \"kubernetes.io/projected/af5ac962-f7d8-4759-b913-b3784e37a704-kube-api-access-gvvw7\") pod \"node-ca-55mxf\" (UID: \"af5ac962-f7d8-4759-b913-b3784e37a704\") " pod="openshift-image-registry/node-ca-55mxf" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.283504 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/af5ac962-f7d8-4759-b913-b3784e37a704-serviceca\") pod \"node-ca-55mxf\" (UID: \"af5ac962-f7d8-4759-b913-b3784e37a704\") " pod="openshift-image-registry/node-ca-55mxf" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.317184 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.358250 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.383862 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af5ac962-f7d8-4759-b913-b3784e37a704-host\") pod \"node-ca-55mxf\" (UID: \"af5ac962-f7d8-4759-b913-b3784e37a704\") " pod="openshift-image-registry/node-ca-55mxf" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.383905 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvvw7\" (UniqueName: \"kubernetes.io/projected/af5ac962-f7d8-4759-b913-b3784e37a704-kube-api-access-gvvw7\") pod \"node-ca-55mxf\" (UID: \"af5ac962-f7d8-4759-b913-b3784e37a704\") " pod="openshift-image-registry/node-ca-55mxf" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.383983 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/af5ac962-f7d8-4759-b913-b3784e37a704-serviceca\") pod \"node-ca-55mxf\" (UID: \"af5ac962-f7d8-4759-b913-b3784e37a704\") " 
pod="openshift-image-registry/node-ca-55mxf" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.384070 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af5ac962-f7d8-4759-b913-b3784e37a704-host\") pod \"node-ca-55mxf\" (UID: \"af5ac962-f7d8-4759-b913-b3784e37a704\") " pod="openshift-image-registry/node-ca-55mxf" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.385220 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/af5ac962-f7d8-4759-b913-b3784e37a704-serviceca\") pod \"node-ca-55mxf\" (UID: \"af5ac962-f7d8-4759-b913-b3784e37a704\") " pod="openshift-image-registry/node-ca-55mxf" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.401161 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb391
63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.411654 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:33:07.380033074 +0000 UTC Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.436915 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvvw7\" (UniqueName: \"kubernetes.io/projected/af5ac962-f7d8-4759-b913-b3784e37a704-kube-api-access-gvvw7\") pod \"node-ca-55mxf\" (UID: \"af5ac962-f7d8-4759-b913-b3784e37a704\") " 
pod="openshift-image-registry/node-ca-55mxf" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.464481 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.471062 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.471106 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.471186 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.471360 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.471495 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:05 crc kubenswrapper[4813]: E0219 18:30:05.471603 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.475087 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-55mxf" Feb 19 18:30:05 crc kubenswrapper[4813]: W0219 18:30:05.497572 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf5ac962_f7d8_4759_b913_b3784e37a704.slice/crio-76549dff5d798faf252ae74027b1cf2d050861449c567e40278725641985797e WatchSource:0}: Error finding container 76549dff5d798faf252ae74027b1cf2d050861449c567e40278725641985797e: Status 404 returned error can't find the container with id 76549dff5d798faf252ae74027b1cf2d050861449c567e40278725641985797e Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.498374 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.544596 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.584975 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.670008 4813 generic.go:334] "Generic (PLEG): container finished" podID="f58d0592-08dd-49db-8c98-f262b9808e0e" containerID="17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47" exitCode=0 Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.670050 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" event={"ID":"f58d0592-08dd-49db-8c98-f262b9808e0e","Type":"ContainerDied","Data":"17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47"} Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.671388 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-55mxf" event={"ID":"af5ac962-f7d8-4759-b913-b3784e37a704","Type":"ContainerStarted","Data":"76549dff5d798faf252ae74027b1cf2d050861449c567e40278725641985797e"} Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.690232 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.704658 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.720525 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.741022 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.783388 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.826354 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.863383 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.899919 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.941139 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:05 crc kubenswrapper[4813]: I0219 18:30:05.980005 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf
95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:05Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.021196 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.077991 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.086798 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.090592 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.104446 4813 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.117164 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.162698 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.206912 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.252863 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.283058 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.322033 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.364687 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.404017 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf
95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.411877 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:03:08.187349648 +0000 UTC Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.441915 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.495927 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.524023 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.558880 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.603760 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.643937 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.679146 4813 generic.go:334] "Generic (PLEG): container finished" podID="f58d0592-08dd-49db-8c98-f262b9808e0e" containerID="98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210" exitCode=0 Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.679371 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" 
event={"ID":"f58d0592-08dd-49db-8c98-f262b9808e0e","Type":"ContainerDied","Data":"98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210"} Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.681877 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-55mxf" event={"ID":"af5ac962-f7d8-4759-b913-b3784e37a704","Type":"ContainerStarted","Data":"5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2"} Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.692213 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.723694 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.764308 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.803408 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.843278 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.893467 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.935251 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:06 crc kubenswrapper[4813]: I0219 18:30:06.958110 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.000561 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:06Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.046426 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.076942 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.122523 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173
a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.160417 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.199559 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.242087 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.279011 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33
dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.313236 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.319471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.319521 4813 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.319534 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.319638 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.338661 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.351357 4813 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.351807 4813 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.354039 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.354089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.354105 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.354125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.354138 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:07Z","lastTransitionTime":"2026-02-19T18:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:07 crc kubenswrapper[4813]: E0219 18:30:07.381069 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.384844 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.384884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.384901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.384920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.384931 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:07Z","lastTransitionTime":"2026-02-19T18:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:07 crc kubenswrapper[4813]: E0219 18:30:07.403821 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.407794 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.407845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.407860 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.407879 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.407892 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:07Z","lastTransitionTime":"2026-02-19T18:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.412780 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:04:55.804415212 +0000 UTC Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.414224 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\
\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf5928
8fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: E0219 18:30:07.425633 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.429379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.429407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.429416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.429436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.429446 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:07Z","lastTransitionTime":"2026-02-19T18:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:07 crc kubenswrapper[4813]: E0219 18:30:07.442435 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.446194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.446244 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.446257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.446276 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.446290 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:07Z","lastTransitionTime":"2026-02-19T18:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:07 crc kubenswrapper[4813]: E0219 18:30:07.459071 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: E0219 18:30:07.459304 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.461145 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.461188 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.461201 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.461224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.461238 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:07Z","lastTransitionTime":"2026-02-19T18:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.471324 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.471421 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.471406 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:07 crc kubenswrapper[4813]: E0219 18:30:07.471589 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:07 crc kubenswrapper[4813]: E0219 18:30:07.471701 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:07 crc kubenswrapper[4813]: E0219 18:30:07.471763 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.564043 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.564089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.564100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.564119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.564130 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:07Z","lastTransitionTime":"2026-02-19T18:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.666661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.666715 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.666729 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.666746 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.666756 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:07Z","lastTransitionTime":"2026-02-19T18:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.686358 4813 generic.go:334] "Generic (PLEG): container finished" podID="f58d0592-08dd-49db-8c98-f262b9808e0e" containerID="6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662" exitCode=0 Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.686428 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" event={"ID":"f58d0592-08dd-49db-8c98-f262b9808e0e","Type":"ContainerDied","Data":"6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662"} Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.691917 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerStarted","Data":"1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838"} Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.709252 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.723613 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.740572 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.760229 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.769374 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.769416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.769428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.769449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.769464 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:07Z","lastTransitionTime":"2026-02-19T18:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.784103 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:
30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.809527 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.831041 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.843522 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.855993 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.869518 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.872018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.872069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.872088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.872124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.872143 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:07Z","lastTransitionTime":"2026-02-19T18:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.882531 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.897784 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.940998 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.965488 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.974767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.974822 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.974840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.974867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:07 crc kubenswrapper[4813]: I0219 18:30:07.974889 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:07Z","lastTransitionTime":"2026-02-19T18:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.001021 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:07Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.078713 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.078752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.078828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.078847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.078860 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:08Z","lastTransitionTime":"2026-02-19T18:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.182142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.182212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.182231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.182259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.182278 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:08Z","lastTransitionTime":"2026-02-19T18:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.285358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.285417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.285434 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.285457 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.285491 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:08Z","lastTransitionTime":"2026-02-19T18:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.389071 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.389132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.389152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.389176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.389203 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:08Z","lastTransitionTime":"2026-02-19T18:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.413066 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:59:49.947956316 +0000 UTC Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.491636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.491692 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.491708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.491732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.491751 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:08Z","lastTransitionTime":"2026-02-19T18:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.594409 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.594471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.594490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.594514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.594533 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:08Z","lastTransitionTime":"2026-02-19T18:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.696438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.696470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.696479 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.696492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.696503 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:08Z","lastTransitionTime":"2026-02-19T18:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.700099 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" event={"ID":"f58d0592-08dd-49db-8c98-f262b9808e0e","Type":"ContainerStarted","Data":"11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a"} Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.717700 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:08Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.735875 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T18:30:08Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.754127 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:08Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.773503 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:08Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.799770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.799823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.799841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.799865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.799882 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:08Z","lastTransitionTime":"2026-02-19T18:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.802567 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:08Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.823817 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:08Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.838917 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:08Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.861188 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:08Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.885478 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:08Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.904795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.904845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.904862 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.904886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.904902 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:08Z","lastTransitionTime":"2026-02-19T18:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.906881 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-19T18:30:08Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.930815 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\
":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-19T18:30:08Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.949621 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:08Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.970014 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:08Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:08 crc kubenswrapper[4813]: I0219 18:30:08.989573 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zd
h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:08Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.008837 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.008899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.008916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.008942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.008987 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:09Z","lastTransitionTime":"2026-02-19T18:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.015188 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.111263 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.111335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.111357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.111383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.111401 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:09Z","lastTransitionTime":"2026-02-19T18:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.120999 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.121228 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:30:17.121193335 +0000 UTC m=+36.346633906 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.215119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.215165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.215182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.215204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:09 crc kubenswrapper[4813]: 
I0219 18:30:09.215222 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:09Z","lastTransitionTime":"2026-02-19T18:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.221709 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.221795 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.221835 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.221881 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.221897 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.222013 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:17.221983924 +0000 UTC m=+36.447424495 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.222035 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.222068 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.222078 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.222107 4813 
projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.222138 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.222175 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.222195 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.222154 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:17.222128838 +0000 UTC m=+36.447569419 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.222275 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:17.222256052 +0000 UTC m=+36.447696633 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.222297 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:17.222285463 +0000 UTC m=+36.447726034 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.318943 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.319316 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.319579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.319729 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.320018 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:09Z","lastTransitionTime":"2026-02-19T18:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.414142 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 15:27:24.633621547 +0000 UTC Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.423761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.423828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.423853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.423876 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.423896 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:09Z","lastTransitionTime":"2026-02-19T18:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.472255 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.472424 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.472872 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.473000 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.473094 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:09 crc kubenswrapper[4813]: E0219 18:30:09.473228 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.526567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.526615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.526635 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.526659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.526683 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:09Z","lastTransitionTime":"2026-02-19T18:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.629818 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.629894 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.629916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.629979 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.630006 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:09Z","lastTransitionTime":"2026-02-19T18:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.709731 4813 generic.go:334] "Generic (PLEG): container finished" podID="f58d0592-08dd-49db-8c98-f262b9808e0e" containerID="11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a" exitCode=0 Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.709857 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" event={"ID":"f58d0592-08dd-49db-8c98-f262b9808e0e","Type":"ContainerDied","Data":"11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a"} Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.721994 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerStarted","Data":"08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd"} Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.722640 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.723119 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.735122 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.735188 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.735210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.735239 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:09 
crc kubenswrapper[4813]: I0219 18:30:09.735265 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:09Z","lastTransitionTime":"2026-02-19T18:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.746514 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.766000 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.767627 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.800011 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.819891 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.841333 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.841446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.841492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.841511 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.841537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.841557 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:09Z","lastTransitionTime":"2026-02-19T18:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.855394 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.867296 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3551
2335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.877062 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.892599 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.911904 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.926745 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.939728 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.943423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.943457 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.943468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.943483 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.943494 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:09Z","lastTransitionTime":"2026-02-19T18:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.955268 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9
ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.968440 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.983231 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:09 crc kubenswrapper[4813]: I0219 18:30:09.997099 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.012213 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.023361 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.039650 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.045872 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.045925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.045940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.045987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.046002 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:10Z","lastTransitionTime":"2026-02-19T18:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.058614 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.080255 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.093715 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.109660 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.124707 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.138585 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.148232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.148265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.148276 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.148294 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.148307 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:10Z","lastTransitionTime":"2026-02-19T18:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.154579 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.179174 4813 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.183569 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.198542 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512
335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.209091 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173
a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.223553 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.236795 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.251128 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.251180 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.251198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.251224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.251241 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:10Z","lastTransitionTime":"2026-02-19T18:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.314228 4813 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.356403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.356490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.356517 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.356553 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.356589 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:10Z","lastTransitionTime":"2026-02-19T18:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.414864 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:23:24.937794767 +0000 UTC Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.460806 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.460869 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.460887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.461358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.461384 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:10Z","lastTransitionTime":"2026-02-19T18:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.564140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.564194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.564211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.564236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.564253 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:10Z","lastTransitionTime":"2026-02-19T18:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.667803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.667865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.667883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.667913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.667933 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:10Z","lastTransitionTime":"2026-02-19T18:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.728641 4813 generic.go:334] "Generic (PLEG): container finished" podID="f58d0592-08dd-49db-8c98-f262b9808e0e" containerID="53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160" exitCode=0 Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.728710 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" event={"ID":"f58d0592-08dd-49db-8c98-f262b9808e0e","Type":"ContainerDied","Data":"53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160"} Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.728789 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.748614 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.767390 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.771143 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.771174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.771185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.771200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.771209 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:10Z","lastTransitionTime":"2026-02-19T18:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.785590 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.806872 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.823443 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.839991 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.858787 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.873008 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.875105 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.875172 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.875196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.875224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.875250 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:10Z","lastTransitionTime":"2026-02-19T18:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.885277 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.898073 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3551
2335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.909735 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173
a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.928257 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.969592 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.977640 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.977680 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.977691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.977709 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.977721 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:10Z","lastTransitionTime":"2026-02-19T18:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:10 crc kubenswrapper[4813]: I0219 18:30:10.993386 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:10Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.011382 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.080803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.080842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.080853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.080867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.080878 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:11Z","lastTransitionTime":"2026-02-19T18:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.184571 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.184616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.184626 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.184645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.184657 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:11Z","lastTransitionTime":"2026-02-19T18:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.287742 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.287812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.287831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.287857 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.287875 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:11Z","lastTransitionTime":"2026-02-19T18:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.391298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.391343 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.391360 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.391383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.391398 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:11Z","lastTransitionTime":"2026-02-19T18:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.415800 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:46:16.561859064 +0000 UTC Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.470442 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.470558 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:11 crc kubenswrapper[4813]: E0219 18:30:11.470610 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.470669 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:11 crc kubenswrapper[4813]: E0219 18:30:11.470813 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:11 crc kubenswrapper[4813]: E0219 18:30:11.470984 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.492982 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.495865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.495904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.495917 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.495937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.495970 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:11Z","lastTransitionTime":"2026-02-19T18:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.518235 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.534934 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.546555 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.560163 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.572874 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.587372 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.600285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.600577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.600708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.600832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.600941 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:11Z","lastTransitionTime":"2026-02-19T18:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.602704 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.623833 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.641363 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.662460 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.678304 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.694288 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.703929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.704236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.704335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.704439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.704525 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:11Z","lastTransitionTime":"2026-02-19T18:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.709326 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.723427 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.727327 4813 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.736250 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" event={"ID":"f58d0592-08dd-49db-8c98-f262b9808e0e","Type":"ContainerStarted","Data":"934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94"} Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.736551 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.754541 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.765379 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173
a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.778247 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.787450 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.798582 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.806129 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.806318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.806440 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.806570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.806688 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:11Z","lastTransitionTime":"2026-02-19T18:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.807753 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.819404 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.842355 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.857776 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.872049 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.884388 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.899275 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.909515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.909577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.909590 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:11 crc 
kubenswrapper[4813]: I0219 18:30:11.909608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.909621 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:11Z","lastTransitionTime":"2026-02-19T18:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.912485 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.929934 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6d
d9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:11 crc kubenswrapper[4813]: I0219 18:30:11.947708 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.011682 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.011905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.011987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.012048 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.012130 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:12Z","lastTransitionTime":"2026-02-19T18:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.114668 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.114710 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.114718 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.114732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.114742 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:12Z","lastTransitionTime":"2026-02-19T18:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.218423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.218461 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.218473 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.218495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.218507 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:12Z","lastTransitionTime":"2026-02-19T18:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.322569 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.322633 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.322656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.322682 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.322700 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:12Z","lastTransitionTime":"2026-02-19T18:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.416030 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:15:20.172587688 +0000 UTC Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.425451 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.425489 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.425507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.425528 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.425542 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:12Z","lastTransitionTime":"2026-02-19T18:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.528870 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.528924 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.528936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.528992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.529006 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:12Z","lastTransitionTime":"2026-02-19T18:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.632089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.632151 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.632170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.632198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.632216 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:12Z","lastTransitionTime":"2026-02-19T18:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.735004 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.735073 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.735087 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.735104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.735116 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:12Z","lastTransitionTime":"2026-02-19T18:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.742353 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovnkube-controller/0.log" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.746362 4813 generic.go:334] "Generic (PLEG): container finished" podID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerID="08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd" exitCode=1 Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.746418 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerDied","Data":"08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd"} Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.747563 4813 scope.go:117] "RemoveContainer" containerID="08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.774043 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6d
d9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:12Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.801428 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:12Z\\\",\\\"message\\\":\\\" 6\\\\nI0219 18:30:12.328752 6098 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:30:12.329298 6098 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:12.329407 6098 reflector.go:311] 
Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329583 6098 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 18:30:12.329668 6098 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:12.329743 6098 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329789 6098 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329872 6098 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:12.330181 6098 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.330677 6098 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:12Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.826031 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T18:30:12Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.838563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.838622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.838638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.838661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.838677 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:12Z","lastTransitionTime":"2026-02-19T18:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.847585 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:12Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.869721 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:12Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.882673 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:12Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.899552 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:12Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.912353 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33
dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:12Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.928779 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:12Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.941823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.941944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.942014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.942044 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.942063 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:12Z","lastTransitionTime":"2026-02-19T18:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.967055 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:12Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:12 crc kubenswrapper[4813]: I0219 18:30:12.985884 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:12Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.004370 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.025089 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.041729 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.045269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.045305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.045314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:13 crc 
kubenswrapper[4813]: I0219 18:30:13.045330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.045341 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:13Z","lastTransitionTime":"2026-02-19T18:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.053695 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.148710 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.148787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.148810 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.148839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.148861 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:13Z","lastTransitionTime":"2026-02-19T18:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.251467 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.251498 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.251510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.251525 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.251536 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:13Z","lastTransitionTime":"2026-02-19T18:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.354835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.354905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.354922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.354989 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.355008 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:13Z","lastTransitionTime":"2026-02-19T18:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.417127 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:52:45.189923562 +0000 UTC Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.457611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.457679 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.457693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.457717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.457732 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:13Z","lastTransitionTime":"2026-02-19T18:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.471221 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.471282 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.471373 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:13 crc kubenswrapper[4813]: E0219 18:30:13.471484 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:13 crc kubenswrapper[4813]: E0219 18:30:13.471642 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:13 crc kubenswrapper[4813]: E0219 18:30:13.471835 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.561141 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.561221 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.561242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.561275 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.561297 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:13Z","lastTransitionTime":"2026-02-19T18:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.665779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.665846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.665866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.665891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.665910 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:13Z","lastTransitionTime":"2026-02-19T18:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.701626 4813 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.752518 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovnkube-controller/1.log" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.753680 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovnkube-controller/0.log" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.757283 4813 generic.go:334] "Generic (PLEG): container finished" podID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerID="fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388" exitCode=1 Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.757331 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerDied","Data":"fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388"} Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.757371 4813 scope.go:117] "RemoveContainer" containerID="08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.758781 4813 scope.go:117] "RemoveContainer" containerID="fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388" Feb 19 18:30:13 crc kubenswrapper[4813]: E0219 18:30:13.759310 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\"" 
pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.768841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.768882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.768897 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.768916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.768928 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:13Z","lastTransitionTime":"2026-02-19T18:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.780826 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.793234 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.804381 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.822897 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.838640 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.857238 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.871086 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.871127 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.871136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.871154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.871165 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:13Z","lastTransitionTime":"2026-02-19T18:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.882502 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:
30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.896941 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.912475 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.940392 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6d
d9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.969278 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:12Z\\\",\\\"message\\\":\\\" 6\\\\nI0219 18:30:12.328752 6098 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:30:12.329298 6098 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:12.329407 6098 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329583 6098 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 18:30:12.329668 6098 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:12.329743 6098 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329789 6098 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329872 6098 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:12.330181 6098 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.330677 6098 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:13Z\\\",\\\"message\\\":\\\"13.650485 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 18:30:13.650490 6267 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 18:30:13.650501 6267 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650508 6267 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node 
crc\\\\nI0219 18:30:13.650514 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 18:30:13.650519 6267 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650449 6267 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{
\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.974158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.974368 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.974515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.974719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.974901 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:13Z","lastTransitionTime":"2026-02-19T18:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:13 crc kubenswrapper[4813]: I0219 18:30:13.984784 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:13Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.003303 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:14Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.020100 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173
a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:14Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.039841 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:14Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.077540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.077735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.077836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.077920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.078042 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:14Z","lastTransitionTime":"2026-02-19T18:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.181394 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.181457 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.181476 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.181502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.181523 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:14Z","lastTransitionTime":"2026-02-19T18:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.283583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.283640 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.283660 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.283684 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.283702 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:14Z","lastTransitionTime":"2026-02-19T18:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.387380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.387435 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.387451 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.387480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.387503 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:14Z","lastTransitionTime":"2026-02-19T18:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.419225 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:28:52.517171622 +0000 UTC Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.490328 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.490389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.490405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.490430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.490448 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:14Z","lastTransitionTime":"2026-02-19T18:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.593159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.593223 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.593240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.593264 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.593282 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:14Z","lastTransitionTime":"2026-02-19T18:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.696204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.696286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.696309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.696341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.696368 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:14Z","lastTransitionTime":"2026-02-19T18:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.715056 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz"] Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.715772 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.718312 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.718365 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.742271 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:14Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.763830 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovnkube-controller/1.log" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.783117 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:14Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.800016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.800087 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.800109 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.800140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.800164 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:14Z","lastTransitionTime":"2026-02-19T18:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.805712 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:14Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.821910 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:14Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.842470 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:14Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.861568 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:14Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.878027 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:14Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.884571 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/614b0374-4288-40dc-9d95-e6f6566bd1ff-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pqbrz\" (UID: \"614b0374-4288-40dc-9d95-e6f6566bd1ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.884662 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/614b0374-4288-40dc-9d95-e6f6566bd1ff-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pqbrz\" (UID: \"614b0374-4288-40dc-9d95-e6f6566bd1ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" Feb 19 18:30:14 
crc kubenswrapper[4813]: I0219 18:30:14.884777 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/614b0374-4288-40dc-9d95-e6f6566bd1ff-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pqbrz\" (UID: \"614b0374-4288-40dc-9d95-e6f6566bd1ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.884867 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpkbr\" (UniqueName: \"kubernetes.io/projected/614b0374-4288-40dc-9d95-e6f6566bd1ff-kube-api-access-gpkbr\") pod \"ovnkube-control-plane-749d76644c-pqbrz\" (UID: \"614b0374-4288-40dc-9d95-e6f6566bd1ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.898708 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:14Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.903081 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.903142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.903165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.903199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.903222 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:14Z","lastTransitionTime":"2026-02-19T18:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.921536 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:
30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:14Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.939580 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:14Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.962750 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\"
:\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:14Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.985772 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/614b0374-4288-40dc-9d95-e6f6566bd1ff-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pqbrz\" (UID: \"614b0374-4288-40dc-9d95-e6f6566bd1ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.985839 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/614b0374-4288-40dc-9d95-e6f6566bd1ff-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pqbrz\" (UID: \"614b0374-4288-40dc-9d95-e6f6566bd1ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.985887 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/614b0374-4288-40dc-9d95-e6f6566bd1ff-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pqbrz\" (UID: \"614b0374-4288-40dc-9d95-e6f6566bd1ff\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.985933 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpkbr\" (UniqueName: \"kubernetes.io/projected/614b0374-4288-40dc-9d95-e6f6566bd1ff-kube-api-access-gpkbr\") pod \"ovnkube-control-plane-749d76644c-pqbrz\" (UID: \"614b0374-4288-40dc-9d95-e6f6566bd1ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.987117 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/614b0374-4288-40dc-9d95-e6f6566bd1ff-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pqbrz\" (UID: \"614b0374-4288-40dc-9d95-e6f6566bd1ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.987122 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/614b0374-4288-40dc-9d95-e6f6566bd1ff-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pqbrz\" (UID: \"614b0374-4288-40dc-9d95-e6f6566bd1ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.995039 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/614b0374-4288-40dc-9d95-e6f6566bd1ff-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pqbrz\" (UID: \"614b0374-4288-40dc-9d95-e6f6566bd1ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" Feb 19 18:30:14 crc kubenswrapper[4813]: I0219 18:30:14.994774 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:12Z\\\",\\\"message\\\":\\\" 6\\\\nI0219 18:30:12.328752 6098 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:30:12.329298 6098 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:12.329407 6098 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329583 6098 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 18:30:12.329668 6098 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:12.329743 6098 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329789 6098 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329872 6098 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:12.330181 6098 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.330677 6098 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:13Z\\\",\\\"message\\\":\\\"13.650485 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 18:30:13.650490 6267 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 18:30:13.650501 6267 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650508 6267 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node 
crc\\\\nI0219 18:30:13.650514 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 18:30:13.650519 6267 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650449 6267 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{
\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:14Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.006295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.006364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.006388 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.006412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.006431 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:15Z","lastTransitionTime":"2026-02-19T18:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.019690 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpkbr\" (UniqueName: \"kubernetes.io/projected/614b0374-4288-40dc-9d95-e6f6566bd1ff-kube-api-access-gpkbr\") pod \"ovnkube-control-plane-749d76644c-pqbrz\" (UID: \"614b0374-4288-40dc-9d95-e6f6566bd1ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.021394 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:15Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.037093 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.037223 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:15Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.060479 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:15Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:15 crc kubenswrapper[4813]: W0219 18:30:15.062494 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod614b0374_4288_40dc_9d95_e6f6566bd1ff.slice/crio-40d7ebe59c1fd3ec2b6a41cad3bbaa538e0d5b2ea4e1f1e4be83e0214194a730 WatchSource:0}: Error finding container 40d7ebe59c1fd3ec2b6a41cad3bbaa538e0d5b2ea4e1f1e4be83e0214194a730: Status 404 returned error can't find the container with id 40d7ebe59c1fd3ec2b6a41cad3bbaa538e0d5b2ea4e1f1e4be83e0214194a730 Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.079218 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173
a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:15Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.110132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.110203 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.110230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:15 crc 
kubenswrapper[4813]: I0219 18:30:15.110260 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.110282 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:15Z","lastTransitionTime":"2026-02-19T18:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.213185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.213247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.213264 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.213286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.213306 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:15Z","lastTransitionTime":"2026-02-19T18:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.315831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.315893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.315911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.315938 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.315986 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:15Z","lastTransitionTime":"2026-02-19T18:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.419210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.419278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.419303 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.419336 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.419361 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:15Z","lastTransitionTime":"2026-02-19T18:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.419366 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 22:44:40.069532554 +0000 UTC Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.471023 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.471074 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.471529 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:15 crc kubenswrapper[4813]: E0219 18:30:15.471695 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:15 crc kubenswrapper[4813]: E0219 18:30:15.472119 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:15 crc kubenswrapper[4813]: E0219 18:30:15.472278 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.522647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.522711 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.522735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.522768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.522792 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:15Z","lastTransitionTime":"2026-02-19T18:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.625991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.626060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.626083 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.626112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.626133 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:15Z","lastTransitionTime":"2026-02-19T18:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.728926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.729013 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.729031 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.729054 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.729072 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:15Z","lastTransitionTime":"2026-02-19T18:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.774778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" event={"ID":"614b0374-4288-40dc-9d95-e6f6566bd1ff","Type":"ContainerStarted","Data":"ef107096c0521504fd7160ad4e3c0feff6ed1500af21c7a0328a981c340825f8"} Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.774877 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" event={"ID":"614b0374-4288-40dc-9d95-e6f6566bd1ff","Type":"ContainerStarted","Data":"99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844"} Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.774899 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" event={"ID":"614b0374-4288-40dc-9d95-e6f6566bd1ff","Type":"ContainerStarted","Data":"40d7ebe59c1fd3ec2b6a41cad3bbaa538e0d5b2ea4e1f1e4be83e0214194a730"} Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.799683 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6d
d9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:15Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.832052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.832144 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.832163 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.832194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.832211 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:15Z","lastTransitionTime":"2026-02-19T18:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.839499 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:12Z\\\",\\\"message\\\":\\\" 6\\\\nI0219 18:30:12.328752 6098 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:30:12.329298 6098 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:12.329407 6098 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329583 6098 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 18:30:12.329668 6098 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:12.329743 6098 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329789 6098 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329872 6098 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:12.330181 6098 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.330677 6098 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:13Z\\\",\\\"message\\\":\\\"13.650485 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 18:30:13.650490 6267 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 18:30:13.650501 6267 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650508 6267 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node 
crc\\\\nI0219 18:30:13.650514 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 18:30:13.650519 6267 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650449 6267 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{
\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:15Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.858036 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:15Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.877687 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:15Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.891403 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:15Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.908375 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:15Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.926726 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c60909
1bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:15Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.935726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.935786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.935802 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.935825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.935844 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:15Z","lastTransitionTime":"2026-02-19T18:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.949357 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:15Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.972249 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:30:15 crc kubenswrapper[4813]: I0219 18:30:15.981798 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:15Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.001648 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:15Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.018417 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.031176 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c
0521504fd7160ad4e3c0feff6ed1500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.039498 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.039552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.039570 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.039595 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.039612 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:16Z","lastTransitionTime":"2026-02-19T18:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.048679 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.067741 4813 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.085388 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.102579 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.117998 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c0521504fd7160ad4e3c0feff6ed1
500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.135692 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.143009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.143096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.143116 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.143140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.143161 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:16Z","lastTransitionTime":"2026-02-19T18:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.153355 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.176433 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.196629 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.217492 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.229189 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-l5vng"] Feb 19 18:30:16 crc 
kubenswrapper[4813]: I0219 18:30:16.229910 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:16 crc kubenswrapper[4813]: E0219 18:30:16.230041 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.240456 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819e
edb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded
51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.245329 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.245378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.245395 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.245417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.245435 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:16Z","lastTransitionTime":"2026-02-19T18:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.270769 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:12Z\\\",\\\"message\\\":\\\" 6\\\\nI0219 18:30:12.328752 6098 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:30:12.329298 6098 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:12.329407 6098 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329583 6098 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 18:30:12.329668 6098 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:12.329743 6098 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329789 6098 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329872 6098 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:12.330181 6098 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.330677 6098 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:13Z\\\",\\\"message\\\":\\\"13.650485 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 18:30:13.650490 6267 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 18:30:13.650501 6267 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650508 6267 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node 
crc\\\\nI0219 18:30:13.650514 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 18:30:13.650519 6267 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650449 6267 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{
\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.291507 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.308018 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.328062 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.346174 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf
95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.348552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.348602 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.348620 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.348644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.348663 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:16Z","lastTransitionTime":"2026-02-19T18:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.369928 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.402156 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs\") pod \"network-metrics-daemon-l5vng\" (UID: \"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\") " pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.402237 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmn2m\" (UniqueName: \"kubernetes.io/projected/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-kube-api-access-kmn2m\") pod \"network-metrics-daemon-l5vng\" (UID: \"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\") " pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 
18:30:16.403654 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.419632 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:53:55.714843385 +0000 UTC Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.424882 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.441092 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.452234 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.452291 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.452309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.452335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.452352 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:16Z","lastTransitionTime":"2026-02-19T18:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.472538 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.494276 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.503545 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmn2m\" (UniqueName: \"kubernetes.io/projected/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-kube-api-access-kmn2m\") pod \"network-metrics-daemon-l5vng\" (UID: \"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\") " pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.503649 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs\") pod \"network-metrics-daemon-l5vng\" (UID: \"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\") " pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:16 crc kubenswrapper[4813]: E0219 18:30:16.503811 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:30:16 crc kubenswrapper[4813]: E0219 18:30:16.503922 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs podName:6fc21e0b-f723-4c9c-9ced-1683cc02fa00 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:17.003895817 +0000 UTC m=+36.229336398 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs") pod "network-metrics-daemon-l5vng" (UID: "6fc21e0b-f723-4c9c-9ced-1683cc02fa00") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.509776 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.539578 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\"
,\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.549391 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmn2m\" (UniqueName: \"kubernetes.io/projected/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-kube-api-access-kmn2m\") pod \"network-metrics-daemon-l5vng\" (UID: \"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\") " pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.555725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.555890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.555909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.555933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:16 crc 
kubenswrapper[4813]: I0219 18:30:16.555973 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:16Z","lastTransitionTime":"2026-02-19T18:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.596267 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.611562 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.626229 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.639970 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.652135 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c
0521504fd7160ad4e3c0feff6ed1500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.658033 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.658082 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.658096 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.658115 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.658129 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:16Z","lastTransitionTime":"2026-02-19T18:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.664869 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.679772 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.705369 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08cf5873d49e8121f0c8a7f63f0469c77affb5ba521602da5d12c7a45847a0fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:12Z\\\",\\\"message\\\":\\\" 6\\\\nI0219 18:30:12.328752 6098 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:30:12.329298 6098 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:12.329407 6098 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329583 6098 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 18:30:12.329668 6098 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:12.329743 6098 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329789 6098 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.329872 6098 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:12.330181 6098 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:12.330677 6098 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:13Z\\\",\\\"message\\\":\\\"13.650485 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 18:30:13.650490 6267 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 18:30:13.650501 6267 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650508 6267 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node 
crc\\\\nI0219 18:30:13.650514 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 18:30:13.650519 6267 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650449 6267 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{
\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.728393 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.746616 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.760666 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.760732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.760752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.760773 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.760787 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:16Z","lastTransitionTime":"2026-02-19T18:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.762197 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.783269 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3551
2335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.800306 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173
a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.809655 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.810742 4813 scope.go:117] "RemoveContainer" containerID="fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388" Feb 19 18:30:16 crc kubenswrapper[4813]: E0219 18:30:16.810923 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.831053 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.855094 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.863688 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.863733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.863750 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.863772 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.863787 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:16Z","lastTransitionTime":"2026-02-19T18:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.876680 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.892299 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.909090 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c0521504fd7160ad4e3c0feff6ed1
500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.922027 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc 
kubenswrapper[4813]: I0219 18:30:16.936915 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.951083 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.966344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.966410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.966429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.966454 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.966474 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:16Z","lastTransitionTime":"2026-02-19T18:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.968138 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:16 crc kubenswrapper[4813]: I0219 18:30:16.985637 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.001157 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:16Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.007522 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs\") pod \"network-metrics-daemon-l5vng\" (UID: \"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\") " pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.007700 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.007778 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs podName:6fc21e0b-f723-4c9c-9ced-1683cc02fa00 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:18.00775631 +0000 UTC m=+37.233196881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs") pod "network-metrics-daemon-l5vng" (UID: "6fc21e0b-f723-4c9c-9ced-1683cc02fa00") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.022948 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6d
d9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:17Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.051281 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:13Z\\\",\\\"message\\\":\\\"13.650485 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 18:30:13.650490 6267 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 18:30:13.650501 6267 
obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650508 6267 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0219 18:30:13.650514 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 18:30:13.650519 6267 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650449 6267 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44
f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:17Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.064873 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:17Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.074600 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:17Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.085349 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:17Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.095565 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf
95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:17Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.098901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.098943 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.098980 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.098999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.099013 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:17Z","lastTransitionTime":"2026-02-19T18:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.202831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.202889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.202908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.202929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.203085 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:17Z","lastTransitionTime":"2026-02-19T18:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.209518 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.209726 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 18:30:33.209696689 +0000 UTC m=+52.435137260 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.305574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.305649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.305671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.305701 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.305720 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:17Z","lastTransitionTime":"2026-02-19T18:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.311196 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.311311 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.311383 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.311460 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.311512 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:33.311498168 +0000 UTC m=+52.536938709 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.311591 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.311598 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.311627 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.311651 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.311654 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.311667 4813 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.311680 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.311711 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:33.311703354 +0000 UTC m=+52.537143895 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.311425 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.311723 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:33.311718554 +0000 UTC m=+52.537159095 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.311753 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:33.311745455 +0000 UTC m=+52.537185996 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.408321 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.408386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.408404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.408429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.408447 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:17Z","lastTransitionTime":"2026-02-19T18:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.420034 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:28:10.331394027 +0000 UTC Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.470781 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.470827 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.471001 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.471020 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.471312 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.471189 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.511838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.511897 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.511917 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.511942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.511987 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:17Z","lastTransitionTime":"2026-02-19T18:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.615453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.615520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.615537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.615563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.615581 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:17Z","lastTransitionTime":"2026-02-19T18:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.717988 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.718059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.718077 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.718104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.718129 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:17Z","lastTransitionTime":"2026-02-19T18:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.733404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.733496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.733523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.733556 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.733581 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:17Z","lastTransitionTime":"2026-02-19T18:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.754812 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:17Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.759858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.759923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.759940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.759991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.760011 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:17Z","lastTransitionTime":"2026-02-19T18:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.779851 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:17Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.784249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.784299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.784317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.784340 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.784357 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:17Z","lastTransitionTime":"2026-02-19T18:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.806894 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:17Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.812791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.812844 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.812864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.812883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.812899 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:17Z","lastTransitionTime":"2026-02-19T18:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.835615 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status for node
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:17Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.841053 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.841285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.841421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.841577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.841717 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:17Z","lastTransitionTime":"2026-02-19T18:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.861408 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:17Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:17 crc kubenswrapper[4813]: E0219 18:30:17.861644 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.864179 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.864227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.864247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.864268 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.864283 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:17Z","lastTransitionTime":"2026-02-19T18:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.967106 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.967170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.967187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.967212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:17 crc kubenswrapper[4813]: I0219 18:30:17.967231 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:17Z","lastTransitionTime":"2026-02-19T18:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.019656 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs\") pod \"network-metrics-daemon-l5vng\" (UID: \"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\") " pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:18 crc kubenswrapper[4813]: E0219 18:30:18.019856 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:30:18 crc kubenswrapper[4813]: E0219 18:30:18.020005 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs podName:6fc21e0b-f723-4c9c-9ced-1683cc02fa00 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:20.019939378 +0000 UTC m=+39.245379959 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs") pod "network-metrics-daemon-l5vng" (UID: "6fc21e0b-f723-4c9c-9ced-1683cc02fa00") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.069888 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.069993 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.070023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.070052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.070075 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:18Z","lastTransitionTime":"2026-02-19T18:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.173420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.173481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.173499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.173524 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.173541 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:18Z","lastTransitionTime":"2026-02-19T18:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.277054 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.277124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.277141 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.277169 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.277189 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:18Z","lastTransitionTime":"2026-02-19T18:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.380538 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.380598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.380614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.380691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.380710 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:18Z","lastTransitionTime":"2026-02-19T18:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.420465 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:31:26.191240319 +0000 UTC Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.470757 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:18 crc kubenswrapper[4813]: E0219 18:30:18.471032 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.484371 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.484417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.484433 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.484455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.484475 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:18Z","lastTransitionTime":"2026-02-19T18:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.587679 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.587742 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.587759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.587786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.587803 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:18Z","lastTransitionTime":"2026-02-19T18:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.690634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.690712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.690733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.690784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.690803 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:18Z","lastTransitionTime":"2026-02-19T18:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.793348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.793407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.793417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.793431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.793444 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:18Z","lastTransitionTime":"2026-02-19T18:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.896610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.896698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.896726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.896756 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.896779 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:18Z","lastTransitionTime":"2026-02-19T18:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.999784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.999848 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.999865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:18 crc kubenswrapper[4813]: I0219 18:30:18.999891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:18.999909 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:18Z","lastTransitionTime":"2026-02-19T18:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.102640 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.102728 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.102752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.102785 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.102807 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:19Z","lastTransitionTime":"2026-02-19T18:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.206025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.206137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.206157 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.206183 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.206202 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:19Z","lastTransitionTime":"2026-02-19T18:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.309698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.309760 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.309778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.309801 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.309818 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:19Z","lastTransitionTime":"2026-02-19T18:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.412984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.413056 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.413077 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.413104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.413122 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:19Z","lastTransitionTime":"2026-02-19T18:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.421397 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:42:22.513555265 +0000 UTC Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.471268 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.471345 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.471345 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:19 crc kubenswrapper[4813]: E0219 18:30:19.471502 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:19 crc kubenswrapper[4813]: E0219 18:30:19.471684 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:19 crc kubenswrapper[4813]: E0219 18:30:19.471850 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.516730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.516909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.516935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.516999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.517027 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:19Z","lastTransitionTime":"2026-02-19T18:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.619823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.619876 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.619892 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.619915 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.619932 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:19Z","lastTransitionTime":"2026-02-19T18:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.723277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.723346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.723367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.723395 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.723415 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:19Z","lastTransitionTime":"2026-02-19T18:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.825756 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.825829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.825847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.825872 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.825890 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:19Z","lastTransitionTime":"2026-02-19T18:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.928673 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.928732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.928751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.928785 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:19 crc kubenswrapper[4813]: I0219 18:30:19.928810 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:19Z","lastTransitionTime":"2026-02-19T18:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.031985 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.032035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.032048 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.032065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.032076 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:20Z","lastTransitionTime":"2026-02-19T18:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.042356 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs\") pod \"network-metrics-daemon-l5vng\" (UID: \"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\") " pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:20 crc kubenswrapper[4813]: E0219 18:30:20.042550 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:30:20 crc kubenswrapper[4813]: E0219 18:30:20.042604 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs podName:6fc21e0b-f723-4c9c-9ced-1683cc02fa00 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:24.042588875 +0000 UTC m=+43.268029426 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs") pod "network-metrics-daemon-l5vng" (UID: "6fc21e0b-f723-4c9c-9ced-1683cc02fa00") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.134934 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.135019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.135037 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.135062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.135080 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:20Z","lastTransitionTime":"2026-02-19T18:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.238206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.238299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.238389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.238504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.238536 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:20Z","lastTransitionTime":"2026-02-19T18:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.341572 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.341638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.341649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.341668 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.341679 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:20Z","lastTransitionTime":"2026-02-19T18:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.422258 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 18:38:17.661592039 +0000 UTC Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.443937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.444007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.444024 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.444043 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.444060 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:20Z","lastTransitionTime":"2026-02-19T18:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.470385 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:20 crc kubenswrapper[4813]: E0219 18:30:20.470486 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.546687 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.547072 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.547259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.547427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.547610 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:20Z","lastTransitionTime":"2026-02-19T18:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.650377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.650441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.650460 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.650488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.650505 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:20Z","lastTransitionTime":"2026-02-19T18:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.753331 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.753410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.753435 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.753465 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.753486 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:20Z","lastTransitionTime":"2026-02-19T18:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.856989 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.857352 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.857570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.857758 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.857928 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:20Z","lastTransitionTime":"2026-02-19T18:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.961831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.961916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.961941 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.962011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:20 crc kubenswrapper[4813]: I0219 18:30:20.962039 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:20Z","lastTransitionTime":"2026-02-19T18:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.065133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.065198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.065246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.065279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.065302 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:21Z","lastTransitionTime":"2026-02-19T18:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.168405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.168477 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.168494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.168522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.168542 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:21Z","lastTransitionTime":"2026-02-19T18:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.271671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.271728 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.271745 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.271770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.271792 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:21Z","lastTransitionTime":"2026-02-19T18:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.374669 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.374713 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.374724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.374740 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.374751 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:21Z","lastTransitionTime":"2026-02-19T18:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.423020 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:13:35.611215508 +0000 UTC Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.471042 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.471201 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.471042 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:21 crc kubenswrapper[4813]: E0219 18:30:21.471301 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:21 crc kubenswrapper[4813]: E0219 18:30:21.471492 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:21 crc kubenswrapper[4813]: E0219 18:30:21.471644 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.477798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.477859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.477880 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.477910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.477933 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:21Z","lastTransitionTime":"2026-02-19T18:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.495118 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:13Z\\\",\\\"message\\\":\\\"13.650485 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 18:30:13.650490 6267 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 18:30:13.650501 6267 
obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650508 6267 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0219 18:30:13.650514 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 18:30:13.650519 6267 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650449 6267 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44
f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.513834 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6d
d9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.527895 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.538089 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.555898 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.572027 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf
95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.580291 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.580359 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.580383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.580414 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.580439 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:21Z","lastTransitionTime":"2026-02-19T18:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.603774 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.624563 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.639879 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.662869 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\"
,\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.678804 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.683830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.683899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.683922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:21 crc 
kubenswrapper[4813]: I0219 18:30:21.683990 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.684017 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:21Z","lastTransitionTime":"2026-02-19T18:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.698892 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.714916 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.728254 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.740342 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c
0521504fd7160ad4e3c0feff6ed1500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.751425 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc 
kubenswrapper[4813]: I0219 18:30:21.768332 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.786139 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.786200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.786215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.786237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.786254 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:21Z","lastTransitionTime":"2026-02-19T18:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.890061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.890145 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.890163 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.890187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.890204 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:21Z","lastTransitionTime":"2026-02-19T18:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.993251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.993304 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.993316 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.993332 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:21 crc kubenswrapper[4813]: I0219 18:30:21.993344 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:21Z","lastTransitionTime":"2026-02-19T18:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.096003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.096109 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.096134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.096160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.096179 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:22Z","lastTransitionTime":"2026-02-19T18:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.198595 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.198650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.198670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.198692 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.198709 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:22Z","lastTransitionTime":"2026-02-19T18:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.301710 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.301761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.301780 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.301803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.301821 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:22Z","lastTransitionTime":"2026-02-19T18:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.405016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.405078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.405095 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.405119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.405139 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:22Z","lastTransitionTime":"2026-02-19T18:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.423505 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 08:28:48.524247312 +0000 UTC Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.471229 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:22 crc kubenswrapper[4813]: E0219 18:30:22.471418 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.507315 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.507386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.507411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.507440 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.507461 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:22Z","lastTransitionTime":"2026-02-19T18:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.610808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.610911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.610927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.610948 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.610988 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:22Z","lastTransitionTime":"2026-02-19T18:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.713617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.713721 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.713738 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.713778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.713795 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:22Z","lastTransitionTime":"2026-02-19T18:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.816833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.816886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.816897 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.816914 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.816925 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:22Z","lastTransitionTime":"2026-02-19T18:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.919657 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.919726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.919744 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.919765 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:22 crc kubenswrapper[4813]: I0219 18:30:22.919782 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:22Z","lastTransitionTime":"2026-02-19T18:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.022361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.022413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.022431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.022451 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.022468 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:23Z","lastTransitionTime":"2026-02-19T18:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.125472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.125804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.126022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.126226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.126444 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:23Z","lastTransitionTime":"2026-02-19T18:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.229205 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.229277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.229299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.229328 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.229349 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:23Z","lastTransitionTime":"2026-02-19T18:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.331679 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.332012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.332133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.332249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.332335 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:23Z","lastTransitionTime":"2026-02-19T18:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.424730 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:52:01.409913458 +0000 UTC Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.435763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.435847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.435871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.435899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.435921 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:23Z","lastTransitionTime":"2026-02-19T18:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.471183 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.471227 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.471284 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:23 crc kubenswrapper[4813]: E0219 18:30:23.471369 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:23 crc kubenswrapper[4813]: E0219 18:30:23.471525 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:23 crc kubenswrapper[4813]: E0219 18:30:23.471593 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.538424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.538467 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.538476 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.538491 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.538502 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:23Z","lastTransitionTime":"2026-02-19T18:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.641504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.641558 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.641574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.641600 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.641617 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:23Z","lastTransitionTime":"2026-02-19T18:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.744878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.744940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.744989 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.745015 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.745032 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:23Z","lastTransitionTime":"2026-02-19T18:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.848495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.848582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.848601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.848626 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.848646 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:23Z","lastTransitionTime":"2026-02-19T18:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.951754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.951813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.951831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.951854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:23 crc kubenswrapper[4813]: I0219 18:30:23.951871 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:23Z","lastTransitionTime":"2026-02-19T18:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.054618 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.054675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.054691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.054715 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.054732 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:24Z","lastTransitionTime":"2026-02-19T18:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.085609 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs\") pod \"network-metrics-daemon-l5vng\" (UID: \"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\") " pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:24 crc kubenswrapper[4813]: E0219 18:30:24.085765 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:30:24 crc kubenswrapper[4813]: E0219 18:30:24.085839 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs podName:6fc21e0b-f723-4c9c-9ced-1683cc02fa00 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:32.085818057 +0000 UTC m=+51.311258628 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs") pod "network-metrics-daemon-l5vng" (UID: "6fc21e0b-f723-4c9c-9ced-1683cc02fa00") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.157001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.157062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.157080 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.157105 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.157122 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:24Z","lastTransitionTime":"2026-02-19T18:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.259642 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.259686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.259696 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.259715 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.259730 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:24Z","lastTransitionTime":"2026-02-19T18:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.362097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.362164 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.362181 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.362204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.362223 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:24Z","lastTransitionTime":"2026-02-19T18:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.425138 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:36:21.521709037 +0000 UTC Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.464018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.464059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.464070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.464087 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.464099 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:24Z","lastTransitionTime":"2026-02-19T18:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.471273 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:24 crc kubenswrapper[4813]: E0219 18:30:24.471408 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.567087 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.567147 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.567164 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.567185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.567204 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:24Z","lastTransitionTime":"2026-02-19T18:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.670170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.670216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.670227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.670244 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.670254 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:24Z","lastTransitionTime":"2026-02-19T18:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.773670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.773724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.773738 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.773759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.773770 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:24Z","lastTransitionTime":"2026-02-19T18:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.877197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.877270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.877288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.877311 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.877329 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:24Z","lastTransitionTime":"2026-02-19T18:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.980550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.980639 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.980667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.980700 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:24 crc kubenswrapper[4813]: I0219 18:30:24.980727 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:24Z","lastTransitionTime":"2026-02-19T18:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.083436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.083525 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.083554 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.083586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.083612 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:25Z","lastTransitionTime":"2026-02-19T18:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.186077 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.186151 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.186168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.186196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.186213 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:25Z","lastTransitionTime":"2026-02-19T18:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.289706 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.289810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.289834 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.289866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.289893 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:25Z","lastTransitionTime":"2026-02-19T18:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.393774 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.393847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.393864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.393898 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.393916 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:25Z","lastTransitionTime":"2026-02-19T18:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.425811 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:36:58.021224346 +0000 UTC Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.471683 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.471749 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.471692 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:25 crc kubenswrapper[4813]: E0219 18:30:25.471920 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:25 crc kubenswrapper[4813]: E0219 18:30:25.472072 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:25 crc kubenswrapper[4813]: E0219 18:30:25.472160 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.497269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.497357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.497385 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.497418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.497445 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:25Z","lastTransitionTime":"2026-02-19T18:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.600323 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.600389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.600404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.600426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.600440 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:25Z","lastTransitionTime":"2026-02-19T18:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.703624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.703703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.703732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.703764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.703787 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:25Z","lastTransitionTime":"2026-02-19T18:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.807346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.807416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.807434 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.807462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.807479 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:25Z","lastTransitionTime":"2026-02-19T18:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.910572 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.910642 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.910665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.910695 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:25 crc kubenswrapper[4813]: I0219 18:30:25.910717 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:25Z","lastTransitionTime":"2026-02-19T18:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.013092 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.013160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.013183 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.013211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.013233 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:26Z","lastTransitionTime":"2026-02-19T18:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.115845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.115909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.115929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.115954 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.116002 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:26Z","lastTransitionTime":"2026-02-19T18:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.218627 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.218689 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.218701 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.218722 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.218736 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:26Z","lastTransitionTime":"2026-02-19T18:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.322708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.322777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.322800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.322830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.322852 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:26Z","lastTransitionTime":"2026-02-19T18:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.425451 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.425621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.425649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.425678 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.425695 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:26Z","lastTransitionTime":"2026-02-19T18:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.425907 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 10:11:17.694938519 +0000 UTC Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.470723 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:26 crc kubenswrapper[4813]: E0219 18:30:26.471074 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.529355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.529412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.529432 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.529455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.529472 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:26Z","lastTransitionTime":"2026-02-19T18:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.632342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.632412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.632434 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.632462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.632489 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:26Z","lastTransitionTime":"2026-02-19T18:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.735459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.735547 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.735567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.735601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.735621 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:26Z","lastTransitionTime":"2026-02-19T18:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.842159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.842236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.842257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.842283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.842303 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:26Z","lastTransitionTime":"2026-02-19T18:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.946807 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.946882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.946898 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.947284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:26 crc kubenswrapper[4813]: I0219 18:30:26.947564 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:26Z","lastTransitionTime":"2026-02-19T18:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.050319 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.050369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.050387 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.050411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.050429 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:27Z","lastTransitionTime":"2026-02-19T18:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.181519 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.181571 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.181587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.181611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.181638 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:27Z","lastTransitionTime":"2026-02-19T18:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.284535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.284601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.284670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.284701 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.284723 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:27Z","lastTransitionTime":"2026-02-19T18:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.387843 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.387904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.387920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.387947 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.387997 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:27Z","lastTransitionTime":"2026-02-19T18:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.426546 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 06:25:31.303861679 +0000 UTC Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.471062 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.471175 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:27 crc kubenswrapper[4813]: E0219 18:30:27.471260 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.471187 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:27 crc kubenswrapper[4813]: E0219 18:30:27.471471 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:27 crc kubenswrapper[4813]: E0219 18:30:27.471564 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.491201 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.491270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.491296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.491336 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.491366 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:27Z","lastTransitionTime":"2026-02-19T18:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.594270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.594339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.594358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.594382 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.594399 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:27Z","lastTransitionTime":"2026-02-19T18:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.697346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.697400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.697418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.697442 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.697460 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:27Z","lastTransitionTime":"2026-02-19T18:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.800757 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.800848 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.800866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.800891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.800908 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:27Z","lastTransitionTime":"2026-02-19T18:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.904043 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.904169 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.904189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.904215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:27 crc kubenswrapper[4813]: I0219 18:30:27.904233 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:27Z","lastTransitionTime":"2026-02-19T18:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.007311 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.007357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.007371 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.007390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.007403 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:28Z","lastTransitionTime":"2026-02-19T18:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.110419 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.110478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.110495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.110520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.110537 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:28Z","lastTransitionTime":"2026-02-19T18:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.213386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.213447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.213472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.213496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.213513 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:28Z","lastTransitionTime":"2026-02-19T18:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.223028 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.223078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.223096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.223117 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.223133 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:28Z","lastTransitionTime":"2026-02-19T18:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:28 crc kubenswrapper[4813]: E0219 18:30:28.242639 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:28Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.248118 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.248179 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.248196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.248219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.248238 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:28Z","lastTransitionTime":"2026-02-19T18:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.273154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.273199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.273210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.273227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.273238 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:28Z","lastTransitionTime":"2026-02-19T18:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:28 crc kubenswrapper[4813]: E0219 18:30:28.292416 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:28Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.303171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.303257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.303283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.303313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.303347 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:28Z","lastTransitionTime":"2026-02-19T18:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:28 crc kubenswrapper[4813]: E0219 18:30:28.324475 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:28Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.332522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.333002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.333208 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.333350 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.333482 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:28Z","lastTransitionTime":"2026-02-19T18:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:28 crc kubenswrapper[4813]: E0219 18:30:28.356485 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:28Z is after 2025-08-24T17:21:41Z"
Feb 19 18:30:28 crc kubenswrapper[4813]: E0219 18:30:28.357362 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.360601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.360837 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.361040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.361216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.361353 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:28Z","lastTransitionTime":"2026-02-19T18:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.426798 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 05:54:57.353282534 +0000 UTC
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.466229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.466791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.467038 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.467220 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.467394 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:28Z","lastTransitionTime":"2026-02-19T18:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.471547 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng"
Feb 19 18:30:28 crc kubenswrapper[4813]: E0219 18:30:28.471770 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.571486 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.571895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.572082 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.572235 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.572366 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:28Z","lastTransitionTime":"2026-02-19T18:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.675777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.675835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.675847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.675866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.675880 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:28Z","lastTransitionTime":"2026-02-19T18:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.779650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.779701 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.779718 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.779741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.779759 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:28Z","lastTransitionTime":"2026-02-19T18:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.883136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.883179 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.883191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.883207 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.883218 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:28Z","lastTransitionTime":"2026-02-19T18:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.986082 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.986161 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.986186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.986215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:28 crc kubenswrapper[4813]: I0219 18:30:28.986239 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:28Z","lastTransitionTime":"2026-02-19T18:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.089397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.089476 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.089500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.089535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.089557 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:29Z","lastTransitionTime":"2026-02-19T18:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.193078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.193152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.193172 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.193198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.193217 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:29Z","lastTransitionTime":"2026-02-19T18:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.296659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.296729 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.296749 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.296778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.296798 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:29Z","lastTransitionTime":"2026-02-19T18:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.400678 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.400750 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.400768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.400797 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.400816 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:29Z","lastTransitionTime":"2026-02-19T18:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.427408 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:57:18.051865989 +0000 UTC
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.470844 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.470922 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.470871 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 18:30:29 crc kubenswrapper[4813]: E0219 18:30:29.471161 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 18:30:29 crc kubenswrapper[4813]: E0219 18:30:29.471260 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 18:30:29 crc kubenswrapper[4813]: E0219 18:30:29.471452 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.504698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.505120 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.505276 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.505477 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.505648 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:29Z","lastTransitionTime":"2026-02-19T18:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.609104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.609158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.609170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.609195 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.609208 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:29Z","lastTransitionTime":"2026-02-19T18:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.712459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.712511 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.712524 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.712539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.712551 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:29Z","lastTransitionTime":"2026-02-19T18:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.814992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.815055 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.815071 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.815094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.815111 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:29Z","lastTransitionTime":"2026-02-19T18:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.918465 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.918526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.918542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.918567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:29 crc kubenswrapper[4813]: I0219 18:30:29.918585 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:29Z","lastTransitionTime":"2026-02-19T18:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.020951 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.021354 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.021537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.021693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.021860 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:30Z","lastTransitionTime":"2026-02-19T18:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.124409 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.124685 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.125043 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.125207 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.125322 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:30Z","lastTransitionTime":"2026-02-19T18:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.228176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.228237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.228296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.228320 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.228337 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:30Z","lastTransitionTime":"2026-02-19T18:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.331561 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.332159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.332240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.332265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.332283 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:30Z","lastTransitionTime":"2026-02-19T18:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.428529 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:49:23.741690702 +0000 UTC
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.435109 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.435173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.435191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.435215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.435233 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:30Z","lastTransitionTime":"2026-02-19T18:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.470803 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng"
Feb 19 18:30:30 crc kubenswrapper[4813]: E0219 18:30:30.471371 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.538725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.539115 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.539154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.539184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.539203 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:30Z","lastTransitionTime":"2026-02-19T18:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.642745 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.642811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.642833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.642864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.642885 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:30Z","lastTransitionTime":"2026-02-19T18:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.746384 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.746440 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.746456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.746481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.746498 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:30Z","lastTransitionTime":"2026-02-19T18:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.849999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.850059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.850076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.850100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.850118 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:30Z","lastTransitionTime":"2026-02-19T18:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.953213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.953281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.953300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.953323 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:30 crc kubenswrapper[4813]: I0219 18:30:30.953341 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:30Z","lastTransitionTime":"2026-02-19T18:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.055723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.055754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.055763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.055777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.055787 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:31Z","lastTransitionTime":"2026-02-19T18:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.159210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.159271 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.159288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.159312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.159331 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:31Z","lastTransitionTime":"2026-02-19T18:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.262641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.262689 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.262706 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.262728 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.262745 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:31Z","lastTransitionTime":"2026-02-19T18:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.365631 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.366093 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.366262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.366408 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.366590 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:31Z","lastTransitionTime":"2026-02-19T18:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.428726 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:24:29.303269888 +0000 UTC Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.469434 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.469501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.469519 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.469542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.469561 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:31Z","lastTransitionTime":"2026-02-19T18:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.470681 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.470723 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:31 crc kubenswrapper[4813]: E0219 18:30:31.470829 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.470894 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:31 crc kubenswrapper[4813]: E0219 18:30:31.471354 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:31 crc kubenswrapper[4813]: E0219 18:30:31.471443 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.471596 4813 scope.go:117] "RemoveContainer" containerID="fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.494687 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\
\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.515640 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.533124 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.551293 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.572768 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.573265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:31 crc 
kubenswrapper[4813]: I0219 18:30:31.573348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.573411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.573474 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.573526 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:31Z","lastTransitionTime":"2026-02-19T18:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.588348 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c0521504fd7160ad4e3c0feff6ed1
500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.602982 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc 
kubenswrapper[4813]: I0219 18:30:31.622035 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b424
6d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.642850 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:13Z\\\",\\\"message\\\":\\\"13.650485 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 18:30:13.650490 6267 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 18:30:13.650501 6267 
obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650508 6267 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0219 18:30:13.650514 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 18:30:13.650519 6267 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650449 6267 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44
f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.663238 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.675661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.675679 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.675686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 
18:30:31.675699 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.675707 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:31Z","lastTransitionTime":"2026-02-19T18:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.676573 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.690931 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.707454 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.724171 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\"
,\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.749565 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.762667 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.772839 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.777725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.777767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.777777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.777793 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.777805 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:31Z","lastTransitionTime":"2026-02-19T18:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.836966 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovnkube-controller/1.log" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.839872 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerStarted","Data":"90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13"} Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.840487 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.855825 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.872468 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.880516 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.880556 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.880567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.880586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.880597 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:31Z","lastTransitionTime":"2026-02-19T18:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.895106 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.914790 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.964389 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\
\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.983200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.983369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.983431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.983517 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.983574 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:31Z","lastTransitionTime":"2026-02-19T18:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:31 crc kubenswrapper[4813]: I0219 18:30:31.996739 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:31Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.030014 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.044030 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.056456 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.068064 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.079802 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.086311 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.086348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.086357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.086372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.086381 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:32Z","lastTransitionTime":"2026-02-19T18:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.090738 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.103516 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.118939 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c
0521504fd7160ad4e3c0feff6ed1500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.132901 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc 
kubenswrapper[4813]: I0219 18:30:32.151037 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b424
6d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.176856 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs\") pod \"network-metrics-daemon-l5vng\" (UID: \"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\") " pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:32 crc kubenswrapper[4813]: E0219 18:30:32.177032 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:30:32 crc kubenswrapper[4813]: E0219 18:30:32.177085 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs podName:6fc21e0b-f723-4c9c-9ced-1683cc02fa00 nodeName:}" failed. No retries permitted until 2026-02-19 18:30:48.177070631 +0000 UTC m=+67.402511172 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs") pod "network-metrics-daemon-l5vng" (UID: "6fc21e0b-f723-4c9c-9ced-1683cc02fa00") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.178985 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:13Z\\\",\\\"message\\\":\\\"13.650485 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 18:30:13.650490 6267 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 18:30:13.650501 6267 
obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650508 6267 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0219 18:30:13.650514 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 18:30:13.650519 6267 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650449 6267 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"na
me\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"
ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.189877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.189949 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.189991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.190014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.190031 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:32Z","lastTransitionTime":"2026-02-19T18:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.293089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.293149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.293166 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.293191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.293208 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:32Z","lastTransitionTime":"2026-02-19T18:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.396224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.396272 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.396288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.396310 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.396327 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:32Z","lastTransitionTime":"2026-02-19T18:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.429434 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 14:24:32.122504271 +0000 UTC Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.471017 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:32 crc kubenswrapper[4813]: E0219 18:30:32.471184 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.499176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.499240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.499258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.499282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.499301 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:32Z","lastTransitionTime":"2026-02-19T18:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.601606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.601660 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.601704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.601728 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.601745 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:32Z","lastTransitionTime":"2026-02-19T18:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.704085 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.704131 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.704148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.704171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.704190 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:32Z","lastTransitionTime":"2026-02-19T18:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.806620 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.806675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.806693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.806718 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.806735 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:32Z","lastTransitionTime":"2026-02-19T18:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.845597 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovnkube-controller/2.log" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.846478 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovnkube-controller/1.log" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.850333 4813 generic.go:334] "Generic (PLEG): container finished" podID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerID="90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13" exitCode=1 Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.850392 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerDied","Data":"90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13"} Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.850437 4813 scope.go:117] "RemoveContainer" containerID="fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.852817 4813 scope.go:117] "RemoveContainer" containerID="90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13" Feb 19 18:30:32 crc kubenswrapper[4813]: E0219 18:30:32.853142 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.885400 4813 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe42f57029c227e4eeab376bc77a8709fce4c6c19e78d7dd1c77e65966b14388\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:13Z\\\",\\\"message\\\":\\\"13.650485 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 18:30:13.650490 6267 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 18:30:13.650501 6267 
obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650508 6267 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0219 18:30:13.650514 6267 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0219 18:30:13.650519 6267 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0219 18:30:13.650449 6267 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.217.4.1,Type:ClusterIP,ExternalI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:32Z\\\",\\\"message\\\":\\\"s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.490543 6488 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:32.490653 6488 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.491223 6488 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0219 18:30:32.491297 6488 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 18:30:32.491309 6488 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 18:30:32.491356 6488 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 18:30:32.491372 6488 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 18:30:32.491393 6488 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:32.491409 6488 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 18:30:32.491423 6488 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 18:30:32.491463 6488 factory.go:656] Stopping watch factory\\\\nI0219 18:30:32.491489 6488 ovnkube.go:599] Stopped ovnkube\\\\nI0219 18:30:32.491500 6488 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:30:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"ho
st-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.907668 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.911713 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.911779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.911803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.911828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.911845 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:32Z","lastTransitionTime":"2026-02-19T18:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.928428 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.946293 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.965155 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:32 crc kubenswrapper[4813]: I0219 18:30:32.983340 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf
95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:32Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.016225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.016281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.016298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.016320 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.016338 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:33Z","lastTransitionTime":"2026-02-19T18:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.016859 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.037054 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.052121 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.074549 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\"
,\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.093578 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.109666 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.123533 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.123602 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.123629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.123659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.123678 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:33Z","lastTransitionTime":"2026-02-19T18:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.132021 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.152167 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.170615 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c
0521504fd7160ad4e3c0feff6ed1500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.186587 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc 
kubenswrapper[4813]: I0219 18:30:33.210350 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.227062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.227133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.227153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.227177 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.227196 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:33Z","lastTransitionTime":"2026-02-19T18:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.288735 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.289749 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:31:05.289706559 +0000 UTC m=+84.515147140 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.331753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.331824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.331847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.331874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.331895 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:33Z","lastTransitionTime":"2026-02-19T18:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.390072 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.390174 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.390225 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.390263 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.390271 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.390377 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:31:05.390347984 +0000 UTC m=+84.615788565 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.390392 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.390415 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.390458 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:31:05.390437187 +0000 UTC m=+84.615877758 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.390464 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.390494 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.390566 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 18:31:05.39054132 +0000 UTC m=+84.615981951 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.390455 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.390605 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.390633 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.390677 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 18:31:05.390663204 +0000 UTC m=+84.616103785 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.429862 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 09:12:40.252092389 +0000 UTC Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.435463 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.435523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.435546 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.435576 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.435600 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:33Z","lastTransitionTime":"2026-02-19T18:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.470924 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.470937 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.470984 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.471111 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.471290 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.471450 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.538035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.538094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.538111 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.538136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.538156 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:33Z","lastTransitionTime":"2026-02-19T18:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.640752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.640798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.640815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.640839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.640856 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:33Z","lastTransitionTime":"2026-02-19T18:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.743583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.743621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.743629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.743643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.743654 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:33Z","lastTransitionTime":"2026-02-19T18:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.847127 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.847213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.847231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.847254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.847270 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:33Z","lastTransitionTime":"2026-02-19T18:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.855666 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovnkube-controller/2.log" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.860848 4813 scope.go:117] "RemoveContainer" containerID="90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13" Feb 19 18:30:33 crc kubenswrapper[4813]: E0219 18:30:33.861173 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.877346 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.896852 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.913267 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.928471 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c
0521504fd7160ad4e3c0feff6ed1500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.944194 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc 
kubenswrapper[4813]: I0219 18:30:33.949777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.949832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.949851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.949878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.949895 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:33Z","lastTransitionTime":"2026-02-19T18:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.963356 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9
ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:33 crc kubenswrapper[4813]: I0219 18:30:33.979450 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:33Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.003325 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6d
d9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:34Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.033875 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:32Z\\\",\\\"message\\\":\\\"s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.490543 6488 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:32.490653 6488 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.491223 6488 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 18:30:32.491297 6488 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 18:30:32.491309 6488 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 18:30:32.491356 6488 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 18:30:32.491372 6488 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 18:30:32.491393 6488 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:32.491409 6488 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 18:30:32.491423 6488 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 18:30:32.491463 6488 factory.go:656] Stopping watch factory\\\\nI0219 18:30:32.491489 6488 ovnkube.go:599] Stopped ovnkube\\\\nI0219 18:30:32.491500 6488 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:30:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44
f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:34Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.048353 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:34Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.052240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.052334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.052347 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.052365 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.052377 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:34Z","lastTransitionTime":"2026-02-19T18:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.067395 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:34Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.085605 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-19T18:30:34Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.108639 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:34Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.142238 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:34Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.154662 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.154718 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.154737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.154760 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.154777 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:34Z","lastTransitionTime":"2026-02-19T18:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.164314 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:34Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.182777 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:34Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.213116 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\"
,\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:34Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.258335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.258406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.258424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.258833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.258889 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:34Z","lastTransitionTime":"2026-02-19T18:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.362884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.363002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.363027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.363057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.363115 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:34Z","lastTransitionTime":"2026-02-19T18:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.430478 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 23:12:20.888947824 +0000 UTC Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.465655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.465712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.465729 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.465753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.465774 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:34Z","lastTransitionTime":"2026-02-19T18:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.471189 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:34 crc kubenswrapper[4813]: E0219 18:30:34.471428 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.568911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.569035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.569061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.569163 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.569188 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:34Z","lastTransitionTime":"2026-02-19T18:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.672775 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.672822 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.672834 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.672851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.672864 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:34Z","lastTransitionTime":"2026-02-19T18:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.775893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.775944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.775978 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.776023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.776038 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:34Z","lastTransitionTime":"2026-02-19T18:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.879129 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.879186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.879204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.879227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.879243 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:34Z","lastTransitionTime":"2026-02-19T18:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.983230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.983296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.983314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.983341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:34 crc kubenswrapper[4813]: I0219 18:30:34.983361 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:34Z","lastTransitionTime":"2026-02-19T18:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.086920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.087021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.087040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.087068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.087086 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:35Z","lastTransitionTime":"2026-02-19T18:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.190315 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.190385 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.190403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.190428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.190450 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:35Z","lastTransitionTime":"2026-02-19T18:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.293206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.293273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.293292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.293318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.293338 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:35Z","lastTransitionTime":"2026-02-19T18:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.396612 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.396689 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.396707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.396731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.396750 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:35Z","lastTransitionTime":"2026-02-19T18:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.430675 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:07:55.91798432 +0000 UTC Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.471499 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.471575 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:35 crc kubenswrapper[4813]: E0219 18:30:35.471693 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.471727 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:35 crc kubenswrapper[4813]: E0219 18:30:35.471918 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:35 crc kubenswrapper[4813]: E0219 18:30:35.472037 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.499856 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.499904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.499934 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.499992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.500012 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:35Z","lastTransitionTime":"2026-02-19T18:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.602753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.602824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.602841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.602889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.602908 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:35Z","lastTransitionTime":"2026-02-19T18:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.678053 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.690082 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.701000 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:35Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.705507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.705558 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.705581 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.705610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.705631 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:35Z","lastTransitionTime":"2026-02-19T18:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.718319 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:35Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.735395 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3551
2335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:35Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.754181 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173
a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:35Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.775481 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\"
,\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:35Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.806694 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:35Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.808618 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.808673 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.808690 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.808720 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.808737 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:35Z","lastTransitionTime":"2026-02-19T18:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.833130 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:35Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.849717 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:35Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.866196 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:35Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:35 crc 
kubenswrapper[4813]: I0219 18:30:35.884876 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:35Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.903593 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:35Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.911945 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.912037 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.912060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.912089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.912111 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:35Z","lastTransitionTime":"2026-02-19T18:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.923351 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:35Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.942442 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:35Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.963670 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:35Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:35 crc kubenswrapper[4813]: I0219 18:30:35.982753 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c
0521504fd7160ad4e3c0feff6ed1500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:35Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.005231 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6d
d9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:36Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.014825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.014895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.014913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.014937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.014988 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:36Z","lastTransitionTime":"2026-02-19T18:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.035711 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:32Z\\\",\\\"message\\\":\\\"s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.490543 6488 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:32.490653 6488 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.491223 6488 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 18:30:32.491297 6488 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 18:30:32.491309 6488 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 18:30:32.491356 6488 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 18:30:32.491372 6488 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 18:30:32.491393 6488 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:32.491409 6488 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 18:30:32.491423 6488 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 18:30:32.491463 6488 factory.go:656] Stopping watch factory\\\\nI0219 18:30:32.491489 6488 ovnkube.go:599] Stopped ovnkube\\\\nI0219 18:30:32.491500 6488 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:30:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44
f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:36Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.118125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.118170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.118178 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.118194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.118230 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:36Z","lastTransitionTime":"2026-02-19T18:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.220717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.220769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.220778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.220794 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.220803 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:36Z","lastTransitionTime":"2026-02-19T18:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.323445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.323528 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.323542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.323563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.323580 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:36Z","lastTransitionTime":"2026-02-19T18:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.425840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.425880 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.425891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.425906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.425917 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:36Z","lastTransitionTime":"2026-02-19T18:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.431226 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:57:44.236467114 +0000 UTC Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.471010 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:36 crc kubenswrapper[4813]: E0219 18:30:36.471256 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.528287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.528348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.528367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.528390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.528407 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:36Z","lastTransitionTime":"2026-02-19T18:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.631284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.631336 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.631382 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.631407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.631423 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:36Z","lastTransitionTime":"2026-02-19T18:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.734296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.734361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.734379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.734401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.734419 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:36Z","lastTransitionTime":"2026-02-19T18:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.837747 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.837808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.837827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.837849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.837867 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:36Z","lastTransitionTime":"2026-02-19T18:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.940482 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.940570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.940588 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.940610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:36 crc kubenswrapper[4813]: I0219 18:30:36.940626 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:36Z","lastTransitionTime":"2026-02-19T18:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.043647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.043708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.043725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.043782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.043800 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:37Z","lastTransitionTime":"2026-02-19T18:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.146730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.146792 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.146811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.146836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.146853 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:37Z","lastTransitionTime":"2026-02-19T18:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.249773 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.249840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.249858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.249882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.249899 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:37Z","lastTransitionTime":"2026-02-19T18:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.353200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.353290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.353309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.353363 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.353383 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:37Z","lastTransitionTime":"2026-02-19T18:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.431798 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 13:14:59.731259827 +0000 UTC Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.456288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.456333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.456349 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.456372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.456390 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:37Z","lastTransitionTime":"2026-02-19T18:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.471082 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.471239 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:37 crc kubenswrapper[4813]: E0219 18:30:37.471401 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.471550 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:37 crc kubenswrapper[4813]: E0219 18:30:37.471632 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:37 crc kubenswrapper[4813]: E0219 18:30:37.471852 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.559443 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.559493 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.559509 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.559530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.559546 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:37Z","lastTransitionTime":"2026-02-19T18:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.662561 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.662608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.662624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.662654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.662670 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:37Z","lastTransitionTime":"2026-02-19T18:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.767232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.767281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.767292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.767309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.767324 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:37Z","lastTransitionTime":"2026-02-19T18:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.870831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.870887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.870904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.870926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.870942 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:37Z","lastTransitionTime":"2026-02-19T18:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.973652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.973691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.973703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.973718 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:37 crc kubenswrapper[4813]: I0219 18:30:37.973730 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:37Z","lastTransitionTime":"2026-02-19T18:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.075422 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.075498 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.075520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.075552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.075574 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:38Z","lastTransitionTime":"2026-02-19T18:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.178892 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.178933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.178942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.178974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.178985 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:38Z","lastTransitionTime":"2026-02-19T18:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.281886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.281944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.281993 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.282016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.282035 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:38Z","lastTransitionTime":"2026-02-19T18:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.385338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.385405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.385418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.385439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.385452 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:38Z","lastTransitionTime":"2026-02-19T18:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.432190 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 10:39:16.291252989 +0000 UTC Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.470903 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:38 crc kubenswrapper[4813]: E0219 18:30:38.471107 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.488439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.488482 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.488497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.488518 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.488535 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:38Z","lastTransitionTime":"2026-02-19T18:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.591151 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.591202 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.591219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.591243 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.591260 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:38Z","lastTransitionTime":"2026-02-19T18:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.694168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.694218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.694234 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.694257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.694273 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:38Z","lastTransitionTime":"2026-02-19T18:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.739574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.739621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.739636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.739656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.739668 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:38Z","lastTransitionTime":"2026-02-19T18:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:38 crc kubenswrapper[4813]: E0219 18:30:38.757752 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:38Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.762617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.762672 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.762691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.762715 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.762733 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:38Z","lastTransitionTime":"2026-02-19T18:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:38 crc kubenswrapper[4813]: E0219 18:30:38.781846 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:38Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.786618 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.786675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.786692 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.786715 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.786734 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:38Z","lastTransitionTime":"2026-02-19T18:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:38 crc kubenswrapper[4813]: E0219 18:30:38.803273 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:38Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.807749 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.807800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.807816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.807837 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.807857 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:38Z","lastTransitionTime":"2026-02-19T18:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:38 crc kubenswrapper[4813]: E0219 18:30:38.824660 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:38Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.828923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.828996 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.829011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.829032 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.829043 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:38Z","lastTransitionTime":"2026-02-19T18:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:38 crc kubenswrapper[4813]: E0219 18:30:38.842865 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:38Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:38 crc kubenswrapper[4813]: E0219 18:30:38.843112 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.844644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.844689 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.844702 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.844724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.844736 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:38Z","lastTransitionTime":"2026-02-19T18:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.947294 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.947339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.947349 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.947368 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:38 crc kubenswrapper[4813]: I0219 18:30:38.947379 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:38Z","lastTransitionTime":"2026-02-19T18:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.049901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.050010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.050030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.050054 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.050071 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:39Z","lastTransitionTime":"2026-02-19T18:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.153642 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.153705 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.153722 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.153746 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.153764 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:39Z","lastTransitionTime":"2026-02-19T18:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.258810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.258881 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.258903 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.258945 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.258991 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:39Z","lastTransitionTime":"2026-02-19T18:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.361663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.361717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.361732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.361754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.361772 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:39Z","lastTransitionTime":"2026-02-19T18:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.433401 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:45:44.347157595 +0000 UTC Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.464581 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.464636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.464652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.464670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.464683 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:39Z","lastTransitionTime":"2026-02-19T18:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.471212 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.471290 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:39 crc kubenswrapper[4813]: E0219 18:30:39.471351 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:39 crc kubenswrapper[4813]: E0219 18:30:39.471448 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.471526 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:39 crc kubenswrapper[4813]: E0219 18:30:39.471684 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.567647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.567739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.567752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.567768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.567779 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:39Z","lastTransitionTime":"2026-02-19T18:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.670108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.670447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.670622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.670815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.671020 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:39Z","lastTransitionTime":"2026-02-19T18:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.772996 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.773075 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.773099 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.773125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.773147 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:39Z","lastTransitionTime":"2026-02-19T18:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.876345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.876676 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.876854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.877072 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.877259 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:39Z","lastTransitionTime":"2026-02-19T18:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.979769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.979830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.979849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.979871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:39 crc kubenswrapper[4813]: I0219 18:30:39.979891 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:39Z","lastTransitionTime":"2026-02-19T18:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.081828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.082176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.082282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.082396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.082494 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:40Z","lastTransitionTime":"2026-02-19T18:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.184790 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.184829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.184837 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.184853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.184861 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:40Z","lastTransitionTime":"2026-02-19T18:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.287337 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.287394 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.287412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.287435 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.287451 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:40Z","lastTransitionTime":"2026-02-19T18:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.391107 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.391162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.391185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.391211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.391235 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:40Z","lastTransitionTime":"2026-02-19T18:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.434213 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 13:40:19.811340745 +0000 UTC Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.471525 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:40 crc kubenswrapper[4813]: E0219 18:30:40.471741 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.494388 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.494436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.494449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.494466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.494479 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:40Z","lastTransitionTime":"2026-02-19T18:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.597855 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.597901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.597913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.597929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.597940 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:40Z","lastTransitionTime":"2026-02-19T18:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.700057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.700114 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.700134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.700158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.700176 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:40Z","lastTransitionTime":"2026-02-19T18:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.802639 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.803123 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.803318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.803480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.803658 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:40Z","lastTransitionTime":"2026-02-19T18:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.906814 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.906870 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.906887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.906911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:40 crc kubenswrapper[4813]: I0219 18:30:40.906928 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:40Z","lastTransitionTime":"2026-02-19T18:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.009890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.009988 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.010007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.010033 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.010050 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:41Z","lastTransitionTime":"2026-02-19T18:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.112338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.112399 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.112415 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.112440 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.112457 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:41Z","lastTransitionTime":"2026-02-19T18:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.215120 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.215168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.215191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.215216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.215233 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:41Z","lastTransitionTime":"2026-02-19T18:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.318116 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.318172 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.318189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.318213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.318230 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:41Z","lastTransitionTime":"2026-02-19T18:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.420308 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.420711 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.420725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.420743 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.420757 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:41Z","lastTransitionTime":"2026-02-19T18:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.434845 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:21:58.218789767 +0000 UTC Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.470465 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.470545 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:41 crc kubenswrapper[4813]: E0219 18:30:41.470632 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.470699 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:41 crc kubenswrapper[4813]: E0219 18:30:41.470839 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:41 crc kubenswrapper[4813]: E0219 18:30:41.470927 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.489759 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-
plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c0521504fd7160ad4e3c0feff6ed1500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.505715 4813 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc 
kubenswrapper[4813]: I0219 18:30:41.524052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.524112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.524130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.524154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.524171 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:41Z","lastTransitionTime":"2026-02-19T18:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.524492 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9
ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.544275 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.566064 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.586269 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.605875 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.625892 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e1
47908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:
30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.627829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.627894 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.627918 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.627946 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.628002 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:41Z","lastTransitionTime":"2026-02-19T18:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.655946 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:32Z\\\",\\\"message\\\":\\\"s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.490543 6488 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:32.490653 6488 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.491223 6488 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 18:30:32.491297 6488 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 18:30:32.491309 6488 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 18:30:32.491356 6488 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 18:30:32.491372 6488 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 18:30:32.491393 6488 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:32.491409 6488 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 18:30:32.491423 6488 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 18:30:32.491463 6488 factory.go:656] Stopping watch factory\\\\nI0219 18:30:32.491489 6488 ovnkube.go:599] Stopped ovnkube\\\\nI0219 18:30:32.491500 6488 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:30:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44
f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.675489 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c31506-f64a-4742-a5f9-fb4f7cb97f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2b8202d5d41248673716db8cad0618006cc52e967751eb392b99663c4aa90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6995816d5f4dd389140af1ce1d1d5e45df8cf15a28c51cd5e13ee52c094377f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4760dc159a8f00a8d74f75e4c3b1f50b3548501b3a7623e96aaa16c1013611c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.694729 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.708405 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.726324 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.731595 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.731670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.731688 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.731714 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.731738 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:41Z","lastTransitionTime":"2026-02-19T18:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.743608 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.766155 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\"
,\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.799994 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.823139 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.835074 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.835359 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.835504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.835646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.835773 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:41Z","lastTransitionTime":"2026-02-19T18:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.840723 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:41Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.938376 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.938697 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.938841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.939018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:41 crc kubenswrapper[4813]: I0219 18:30:41.939261 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:41Z","lastTransitionTime":"2026-02-19T18:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.042424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.042764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.043050 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.043249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.043413 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:42Z","lastTransitionTime":"2026-02-19T18:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.146557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.147306 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.147468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.147606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.147729 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:42Z","lastTransitionTime":"2026-02-19T18:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.250400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.250449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.250466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.250495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.250516 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:42Z","lastTransitionTime":"2026-02-19T18:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.354103 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.354160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.354182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.354211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.354234 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:42Z","lastTransitionTime":"2026-02-19T18:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.436088 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:51:42.194829188 +0000 UTC Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.456929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.457032 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.457062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.457087 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.457109 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:42Z","lastTransitionTime":"2026-02-19T18:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.471543 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:42 crc kubenswrapper[4813]: E0219 18:30:42.471772 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.559862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.559935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.559986 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.560019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.560043 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:42Z","lastTransitionTime":"2026-02-19T18:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.663066 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.663125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.663141 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.663167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.663185 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:42Z","lastTransitionTime":"2026-02-19T18:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.766257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.766317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.766334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.766361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.766381 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:42Z","lastTransitionTime":"2026-02-19T18:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.870035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.870455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.870655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.870853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.871152 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:42Z","lastTransitionTime":"2026-02-19T18:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.974160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.974238 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.974260 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.974293 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:42 crc kubenswrapper[4813]: I0219 18:30:42.974314 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:42Z","lastTransitionTime":"2026-02-19T18:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.077451 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.077524 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.077545 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.077574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.077595 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:43Z","lastTransitionTime":"2026-02-19T18:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.181254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.181333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.181353 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.181378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.181395 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:43Z","lastTransitionTime":"2026-02-19T18:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.284657 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.284731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.284753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.284772 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.284785 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:43Z","lastTransitionTime":"2026-02-19T18:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.387760 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.387811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.387826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.387843 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.387871 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:43Z","lastTransitionTime":"2026-02-19T18:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.436534 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 18:43:07.826948592 +0000 UTC Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.470765 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.470801 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.470801 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:43 crc kubenswrapper[4813]: E0219 18:30:43.470987 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:43 crc kubenswrapper[4813]: E0219 18:30:43.471057 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:43 crc kubenswrapper[4813]: E0219 18:30:43.471147 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.491698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.491752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.491770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.491835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.491854 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:43Z","lastTransitionTime":"2026-02-19T18:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.595176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.595252 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.595274 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.595306 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.595327 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:43Z","lastTransitionTime":"2026-02-19T18:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.697816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.697926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.697943 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.698346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.698623 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:43Z","lastTransitionTime":"2026-02-19T18:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.801947 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.802051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.802068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.802093 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.802112 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:43Z","lastTransitionTime":"2026-02-19T18:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.904849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.904931 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.904945 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.904994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:43 crc kubenswrapper[4813]: I0219 18:30:43.905006 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:43Z","lastTransitionTime":"2026-02-19T18:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.008241 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.008295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.008311 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.008332 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.008350 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:44Z","lastTransitionTime":"2026-02-19T18:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.111404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.111487 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.111505 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.111530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.111548 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:44Z","lastTransitionTime":"2026-02-19T18:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.214488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.214545 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.214563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.214585 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.214603 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:44Z","lastTransitionTime":"2026-02-19T18:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.316909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.316974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.316987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.317005 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.317021 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:44Z","lastTransitionTime":"2026-02-19T18:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.419486 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.419543 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.419557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.419573 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.419587 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:44Z","lastTransitionTime":"2026-02-19T18:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.437043 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:36:11.171948012 +0000 UTC Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.471440 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:44 crc kubenswrapper[4813]: E0219 18:30:44.472046 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.522229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.522275 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.522290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.522311 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.522325 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:44Z","lastTransitionTime":"2026-02-19T18:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.625225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.625297 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.625311 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.625333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.625368 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:44Z","lastTransitionTime":"2026-02-19T18:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.728241 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.728300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.728318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.728340 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.728354 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:44Z","lastTransitionTime":"2026-02-19T18:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.831044 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.831103 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.831125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.831153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.831174 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:44Z","lastTransitionTime":"2026-02-19T18:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.933573 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.933608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.933618 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.933631 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:44 crc kubenswrapper[4813]: I0219 18:30:44.933640 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:44Z","lastTransitionTime":"2026-02-19T18:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.037509 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.037567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.037582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.037606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.037623 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:45Z","lastTransitionTime":"2026-02-19T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.140268 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.140347 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.140369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.140398 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.140421 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:45Z","lastTransitionTime":"2026-02-19T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.243544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.243591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.243608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.243630 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.243646 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:45Z","lastTransitionTime":"2026-02-19T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.346805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.346861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.346878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.346901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.346918 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:45Z","lastTransitionTime":"2026-02-19T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.437967 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 09:53:12.016922989 +0000 UTC Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.449425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.449646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.449781 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.449933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.450143 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:45Z","lastTransitionTime":"2026-02-19T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.471253 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.471296 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.471269 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:45 crc kubenswrapper[4813]: E0219 18:30:45.471492 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:45 crc kubenswrapper[4813]: E0219 18:30:45.471592 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:45 crc kubenswrapper[4813]: E0219 18:30:45.471717 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.553259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.553306 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.553318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.553336 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.553349 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:45Z","lastTransitionTime":"2026-02-19T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.656564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.656636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.656657 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.656682 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.656698 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:45Z","lastTransitionTime":"2026-02-19T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.759193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.759250 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.759270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.759295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.759311 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:45Z","lastTransitionTime":"2026-02-19T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.861749 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.861812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.861830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.861853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.861871 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:45Z","lastTransitionTime":"2026-02-19T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.965390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.965446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.965464 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.965490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:45 crc kubenswrapper[4813]: I0219 18:30:45.965507 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:45Z","lastTransitionTime":"2026-02-19T18:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.068286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.068341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.068358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.068384 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.068401 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:46Z","lastTransitionTime":"2026-02-19T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.171389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.171449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.171467 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.171489 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.171508 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:46Z","lastTransitionTime":"2026-02-19T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.274149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.274198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.274212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.274228 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.274239 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:46Z","lastTransitionTime":"2026-02-19T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.377164 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.377217 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.377235 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.377259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.377299 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:46Z","lastTransitionTime":"2026-02-19T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.438683 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 12:13:22.330601151 +0000 UTC Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.470458 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:46 crc kubenswrapper[4813]: E0219 18:30:46.470549 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.479098 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.479211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.479275 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.479339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.479414 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:46Z","lastTransitionTime":"2026-02-19T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.581293 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.581452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.581511 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.581575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.581634 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:46Z","lastTransitionTime":"2026-02-19T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.683685 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.683724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.683733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.683747 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.683755 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:46Z","lastTransitionTime":"2026-02-19T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.785544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.785583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.785591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.785606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.785617 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:46Z","lastTransitionTime":"2026-02-19T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.888284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.888346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.888363 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.888427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.888448 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:46Z","lastTransitionTime":"2026-02-19T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.990676 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.990933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.991018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.991097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:46 crc kubenswrapper[4813]: I0219 18:30:46.991165 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:46Z","lastTransitionTime":"2026-02-19T18:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.093749 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.094046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.094126 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.094193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.094255 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:47Z","lastTransitionTime":"2026-02-19T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.196651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.196921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.197023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.197096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.197160 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:47Z","lastTransitionTime":"2026-02-19T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.300282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.300742 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.300761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.300787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.300810 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:47Z","lastTransitionTime":"2026-02-19T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.404312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.404363 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.404373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.404390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.404404 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:47Z","lastTransitionTime":"2026-02-19T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.439748 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:22:50.483184663 +0000 UTC Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.471454 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.471514 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.471456 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:47 crc kubenswrapper[4813]: E0219 18:30:47.471608 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:47 crc kubenswrapper[4813]: E0219 18:30:47.471719 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:47 crc kubenswrapper[4813]: E0219 18:30:47.471791 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.506524 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.506571 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.506584 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.506603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.506617 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:47Z","lastTransitionTime":"2026-02-19T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.610724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.611199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.611413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.611577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.611733 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:47Z","lastTransitionTime":"2026-02-19T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.714713 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.715113 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.715282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.715421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.715555 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:47Z","lastTransitionTime":"2026-02-19T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.818446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.818495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.818513 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.818537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.818553 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:47Z","lastTransitionTime":"2026-02-19T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.921885 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.921922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.921931 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.921945 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:47 crc kubenswrapper[4813]: I0219 18:30:47.921971 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:47Z","lastTransitionTime":"2026-02-19T18:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.024744 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.024795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.024811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.024833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.024849 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:48Z","lastTransitionTime":"2026-02-19T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.127413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.127446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.127459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.127475 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.127486 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:48Z","lastTransitionTime":"2026-02-19T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.229792 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.229817 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.229825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.229836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.229844 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:48Z","lastTransitionTime":"2026-02-19T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.265891 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs\") pod \"network-metrics-daemon-l5vng\" (UID: \"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\") " pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:48 crc kubenswrapper[4813]: E0219 18:30:48.266033 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:30:48 crc kubenswrapper[4813]: E0219 18:30:48.266097 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs podName:6fc21e0b-f723-4c9c-9ced-1683cc02fa00 nodeName:}" failed. No retries permitted until 2026-02-19 18:31:20.266083692 +0000 UTC m=+99.491524233 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs") pod "network-metrics-daemon-l5vng" (UID: "6fc21e0b-f723-4c9c-9ced-1683cc02fa00") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.331976 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.332010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.332022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.332037 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.332049 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:48Z","lastTransitionTime":"2026-02-19T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.434927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.435034 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.435055 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.435078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.435095 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:48Z","lastTransitionTime":"2026-02-19T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.441172 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:27:34.479725057 +0000 UTC Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.470719 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:48 crc kubenswrapper[4813]: E0219 18:30:48.470863 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.471910 4813 scope.go:117] "RemoveContainer" containerID="90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13" Feb 19 18:30:48 crc kubenswrapper[4813]: E0219 18:30:48.472290 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.538358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.538411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.538427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.538451 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.538471 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:48Z","lastTransitionTime":"2026-02-19T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.641379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.641431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.641456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.641485 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.641508 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:48Z","lastTransitionTime":"2026-02-19T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.744863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.744919 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.744936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.744984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.745003 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:48Z","lastTransitionTime":"2026-02-19T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.847977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.848050 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.848074 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.848102 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.848123 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:48Z","lastTransitionTime":"2026-02-19T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.950756 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.950795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.950805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.950821 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:48 crc kubenswrapper[4813]: I0219 18:30:48.950835 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:48Z","lastTransitionTime":"2026-02-19T18:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.031529 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.031611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.031638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.031667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.031693 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:49Z","lastTransitionTime":"2026-02-19T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:49 crc kubenswrapper[4813]: E0219 18:30:49.050624 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:49Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.055612 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.055651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.055660 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.055678 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.055689 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:49Z","lastTransitionTime":"2026-02-19T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:49 crc kubenswrapper[4813]: E0219 18:30:49.069126 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:49Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.072549 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.072590 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.072604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.072621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.072633 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:49Z","lastTransitionTime":"2026-02-19T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:49 crc kubenswrapper[4813]: E0219 18:30:49.089423 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:49Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.093054 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.093104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.093121 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.093141 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.093160 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:49Z","lastTransitionTime":"2026-02-19T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:49 crc kubenswrapper[4813]: E0219 18:30:49.111158 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:49Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.114967 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.114998 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.115010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.115026 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.115037 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:49Z","lastTransitionTime":"2026-02-19T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:49 crc kubenswrapper[4813]: E0219 18:30:49.129926 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:49Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:49 crc kubenswrapper[4813]: E0219 18:30:49.130081 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.131477 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.131503 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.131512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.131524 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.131534 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:49Z","lastTransitionTime":"2026-02-19T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.233518 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.233544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.233552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.233564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.233573 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:49Z","lastTransitionTime":"2026-02-19T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.335701 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.335733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.335743 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.335755 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.335765 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:49Z","lastTransitionTime":"2026-02-19T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.438586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.438635 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.438654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.438675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.438691 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:49Z","lastTransitionTime":"2026-02-19T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.441992 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:18:54.671672703 +0000 UTC Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.471444 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.471492 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.471515 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:49 crc kubenswrapper[4813]: E0219 18:30:49.471626 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:49 crc kubenswrapper[4813]: E0219 18:30:49.471804 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:49 crc kubenswrapper[4813]: E0219 18:30:49.471882 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.540854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.540885 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.540893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.540906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.540914 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:49Z","lastTransitionTime":"2026-02-19T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.643838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.643895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.643913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.643935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.643980 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:49Z","lastTransitionTime":"2026-02-19T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.746208 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.746248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.746257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.746271 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.746282 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:49Z","lastTransitionTime":"2026-02-19T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.853486 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.853751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.853832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.853920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.854048 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:49Z","lastTransitionTime":"2026-02-19T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.956703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.956982 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.957083 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.957333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:49 crc kubenswrapper[4813]: I0219 18:30:49.957501 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:49Z","lastTransitionTime":"2026-02-19T18:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.059882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.060058 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.060115 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.060191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.060256 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:50Z","lastTransitionTime":"2026-02-19T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.162074 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.162278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.162341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.162420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.162490 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:50Z","lastTransitionTime":"2026-02-19T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.264567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.264746 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.264810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.264900 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.264980 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:50Z","lastTransitionTime":"2026-02-19T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.368238 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.368483 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.368560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.368776 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.368939 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:50Z","lastTransitionTime":"2026-02-19T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.442608 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 23:01:02.781445817 +0000 UTC Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.470550 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.470881 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.470912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.470923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.470937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.470947 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:50Z","lastTransitionTime":"2026-02-19T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:50 crc kubenswrapper[4813]: E0219 18:30:50.471493 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.573260 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.573314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.573331 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.573356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.573373 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:50Z","lastTransitionTime":"2026-02-19T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.675388 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.675421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.675432 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.675446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.675457 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:50Z","lastTransitionTime":"2026-02-19T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.777448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.777473 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.777481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.777492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.777500 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:50Z","lastTransitionTime":"2026-02-19T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.879247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.879283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.879293 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.879307 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.879317 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:50Z","lastTransitionTime":"2026-02-19T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.981190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.981218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.981226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.981237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:50 crc kubenswrapper[4813]: I0219 18:30:50.981246 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:50Z","lastTransitionTime":"2026-02-19T18:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.084246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.084286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.084297 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.084310 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.084321 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:51Z","lastTransitionTime":"2026-02-19T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.187276 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.187305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.187317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.187330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.187339 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:51Z","lastTransitionTime":"2026-02-19T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.289363 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.289393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.289402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.289418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.289430 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:51Z","lastTransitionTime":"2026-02-19T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.392626 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.392701 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.392723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.392751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.392771 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:51Z","lastTransitionTime":"2026-02-19T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.443069 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 01:05:05.863930778 +0000 UTC Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.471493 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.471528 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.471535 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:51 crc kubenswrapper[4813]: E0219 18:30:51.471659 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:51 crc kubenswrapper[4813]: E0219 18:30:51.471786 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:51 crc kubenswrapper[4813]: E0219 18:30:51.471891 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.483453 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-
plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c0521504fd7160ad4e3c0feff6ed1500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.497133 4813 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc 
kubenswrapper[4813]: I0219 18:30:51.498130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.498236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.498323 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.498409 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.498451 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:51Z","lastTransitionTime":"2026-02-19T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.514156 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9
ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.529560 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.542023 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.558985 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.576213 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.596358 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e1
47908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:
30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.600988 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.601021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.601032 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.601050 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.601063 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:51Z","lastTransitionTime":"2026-02-19T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.623211 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:32Z\\\",\\\"message\\\":\\\"s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.490543 6488 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:32.490653 6488 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.491223 6488 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 18:30:32.491297 6488 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 18:30:32.491309 6488 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 18:30:32.491356 6488 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 18:30:32.491372 6488 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 18:30:32.491393 6488 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:32.491409 6488 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 18:30:32.491423 6488 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 18:30:32.491463 6488 factory.go:656] Stopping watch factory\\\\nI0219 18:30:32.491489 6488 ovnkube.go:599] Stopped ovnkube\\\\nI0219 18:30:32.491500 6488 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:30:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44
f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.649552 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c31506-f64a-4742-a5f9-fb4f7cb97f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2b8202d5d41248673716db8cad0618006cc52e967751eb392b99663c4aa90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6995816d5f4dd389140af1ce1d1d5e45df8cf15a28c51cd5e13ee52c094377f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4760dc159a8f00a8d74f75e4c3b1f50b3548501b3a7623e96aaa16c1013611c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.668675 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.682611 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.698789 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.702680 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.702726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.702740 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.702757 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.702768 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:51Z","lastTransitionTime":"2026-02-19T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.714291 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.733012 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\"
,\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.760429 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.778246 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.791352 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.804944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.804998 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.805007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.805023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.805034 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:51Z","lastTransitionTime":"2026-02-19T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.907312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.907604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.907751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.907878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.907995 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:51Z","lastTransitionTime":"2026-02-19T18:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.919039 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hksqw_b099cefb-f2e5-4f3f-976c-7433dba77ef2/kube-multus/0.log" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.919138 4813 generic.go:334] "Generic (PLEG): container finished" podID="b099cefb-f2e5-4f3f-976c-7433dba77ef2" containerID="3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094" exitCode=1 Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.919240 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hksqw" event={"ID":"b099cefb-f2e5-4f3f-976c-7433dba77ef2","Type":"ContainerDied","Data":"3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094"} Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.919726 4813 scope.go:117] "RemoveContainer" containerID="3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.948389 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.960520 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.970558 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:51 crc kubenswrapper[4813]: I0219 18:30:51.984919 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.000562 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:51Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.010027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.010059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.010068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.010082 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.010091 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:52Z","lastTransitionTime":"2026-02-19T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.017016 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.030854 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:50Z\\\",\\\"message\\\":\\\"2026-02-19T18:30:05+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196\\\\n2026-02-19T18:30:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196 to /host/opt/cni/bin/\\\\n2026-02-19T18:30:05Z [verbose] multus-daemon started\\\\n2026-02-19T18:30:05Z [verbose] Readiness Indicator file check\\\\n2026-02-19T18:30:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.045417 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c0521504fd7160ad4e3c0feff6ed1
500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.059359 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc 
kubenswrapper[4813]: I0219 18:30:52.075472 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.088329 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.107580 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6d
d9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.111939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.112076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.112152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.112229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.112295 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:52Z","lastTransitionTime":"2026-02-19T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.131924 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:32Z\\\",\\\"message\\\":\\\"s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.490543 6488 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:32.490653 6488 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.491223 6488 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 18:30:32.491297 6488 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 18:30:32.491309 6488 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 18:30:32.491356 6488 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 18:30:32.491372 6488 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 18:30:32.491393 6488 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:32.491409 6488 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 18:30:32.491423 6488 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 18:30:32.491463 6488 factory.go:656] Stopping watch factory\\\\nI0219 18:30:32.491489 6488 ovnkube.go:599] Stopped ovnkube\\\\nI0219 18:30:32.491500 6488 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:30:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44
f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.143980 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.155678 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.168650 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf
95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.182744 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c31506-f64a-4742-a5f9-fb4f7cb97f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2b8202d5d41248673716db8cad0618006cc52e967751eb392b99663c4aa90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6995816d5f4dd389140af1ce1d1d5e45df8cf15a28c51cd5e13ee52c094377f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4760dc159a8f00a8d74f75e4c3b1f50b3548501b3a7623e96aaa16c1013611c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\
\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.199870 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.214377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.214402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.214410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.214423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.214432 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:52Z","lastTransitionTime":"2026-02-19T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.316687 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.316717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.316725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.316737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.316746 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:52Z","lastTransitionTime":"2026-02-19T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.419153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.419304 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.419372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.419439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.419502 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:52Z","lastTransitionTime":"2026-02-19T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.443869 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 11:23:22.418000808 +0000 UTC Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.471245 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:52 crc kubenswrapper[4813]: E0219 18:30:52.471447 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.484442 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.522373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.522414 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.522424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.522440 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.522449 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:52Z","lastTransitionTime":"2026-02-19T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.624895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.625113 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.625198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.625261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.625317 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:52Z","lastTransitionTime":"2026-02-19T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.728457 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.728489 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.728497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.728512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.728522 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:52Z","lastTransitionTime":"2026-02-19T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.831220 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.831260 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.831270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.831285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.831294 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:52Z","lastTransitionTime":"2026-02-19T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.924123 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hksqw_b099cefb-f2e5-4f3f-976c-7433dba77ef2/kube-multus/0.log" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.924362 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hksqw" event={"ID":"b099cefb-f2e5-4f3f-976c-7433dba77ef2","Type":"ContainerStarted","Data":"38e8bb938cc19db067c4260fd25416cf155dd95f94c41f03a9ef2b4d29589ad2"} Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.933153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.933226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.933244 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.933268 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.933285 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:52Z","lastTransitionTime":"2026-02-19T18:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.940753 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c0521504fd7160ad4e3c0feff6ed1500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.955011 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc 
kubenswrapper[4813]: I0219 18:30:52.968416 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.982201 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:52 crc kubenswrapper[4813]: I0219 18:30:52.999885 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:52Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.014287 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:53Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.027173 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e8bb938cc19db067c4260fd25416cf155dd95f94c41f03a9ef2b4d29589ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:50Z\\\",\\\"message\\\":\\\"2026-02-19T18:30:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196\\\\n2026-02-19T18:30:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196 to /host/opt/cni/bin/\\\\n2026-02-19T18:30:05Z [verbose] multus-daemon started\\\\n2026-02-19T18:30:05Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T18:30:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:53Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.035187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.035232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.035242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.035256 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.035266 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:53Z","lastTransitionTime":"2026-02-19T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.047719 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:53Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.065438 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:32Z\\\",\\\"message\\\":\\\"s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.490543 6488 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:32.490653 6488 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.491223 6488 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 18:30:32.491297 6488 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 18:30:32.491309 6488 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 18:30:32.491356 6488 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 18:30:32.491372 6488 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 18:30:32.491393 6488 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:32.491409 6488 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 18:30:32.491423 6488 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 18:30:32.491463 6488 factory.go:656] Stopping watch factory\\\\nI0219 18:30:32.491489 6488 ovnkube.go:599] Stopped ovnkube\\\\nI0219 18:30:32.491500 6488 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:30:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44
f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:53Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.081081 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c31506-f64a-4742-a5f9-fb4f7cb97f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2b8202d5d41248673716db8cad0618006cc52e967751eb392b99663c4aa90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6995816d5f4dd389140af1ce1d1d5e45df8cf15a28c51cd5e13ee52c094377f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4760dc159a8f00a8d74f75e4c3b1f50b3548501b3a7623e96aaa16c1013611c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:53Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.096418 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:53Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.106694 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:53Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.119978 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:53Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.133986 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf
95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:53Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.137735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.137761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.137770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.137784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.137794 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:53Z","lastTransitionTime":"2026-02-19T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.151480 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:53Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.161329 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9fdac97-44c9-402a-9097-cc8bb3cd60c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca66b5227f874cb972da780e7130516ccefdef0f8a8c6b5258886aba6a54ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:53Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.177571 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:53Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.193397 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:53Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.204211 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:53Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.240327 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.240380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.240393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.240413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.240426 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:53Z","lastTransitionTime":"2026-02-19T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.342769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.342830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.342851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.342874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.342893 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:53Z","lastTransitionTime":"2026-02-19T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.444061 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:58:03.743720145 +0000 UTC Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.445088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.445178 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.445261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.445328 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.445391 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:53Z","lastTransitionTime":"2026-02-19T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.471449 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.471583 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:53 crc kubenswrapper[4813]: E0219 18:30:53.471598 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.471458 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:53 crc kubenswrapper[4813]: E0219 18:30:53.471739 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:53 crc kubenswrapper[4813]: E0219 18:30:53.471877 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.547581 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.547643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.547661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.547687 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.547705 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:53Z","lastTransitionTime":"2026-02-19T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.650108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.650290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.650379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.650448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.650515 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:53Z","lastTransitionTime":"2026-02-19T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.753736 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.753905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.753980 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.754050 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.754109 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:53Z","lastTransitionTime":"2026-02-19T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.857439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.857486 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.857497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.857513 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.857525 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:53Z","lastTransitionTime":"2026-02-19T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.960316 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.960345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.960353 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.960366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:53 crc kubenswrapper[4813]: I0219 18:30:53.960375 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:53Z","lastTransitionTime":"2026-02-19T18:30:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.062866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.063155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.063314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.063454 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.063788 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:54Z","lastTransitionTime":"2026-02-19T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.167685 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.169571 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.170046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.170231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.170381 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:54Z","lastTransitionTime":"2026-02-19T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.273473 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.273533 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.273545 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.273560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.273572 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:54Z","lastTransitionTime":"2026-02-19T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.375656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.375696 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.375704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.375719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.375728 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:54Z","lastTransitionTime":"2026-02-19T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.444552 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:44:28.162781999 +0000 UTC Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.470948 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:54 crc kubenswrapper[4813]: E0219 18:30:54.471139 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.478468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.478501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.478514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.478531 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.478544 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:54Z","lastTransitionTime":"2026-02-19T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.580600 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.580637 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.580645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.580659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.580669 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:54Z","lastTransitionTime":"2026-02-19T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.682499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.682566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.682590 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.682617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.682638 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:54Z","lastTransitionTime":"2026-02-19T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.784920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.784982 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.785000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.785019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.785035 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:54Z","lastTransitionTime":"2026-02-19T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.887087 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.887137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.887149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.887168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.887184 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:54Z","lastTransitionTime":"2026-02-19T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.989184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.989265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.989285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.989305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:54 crc kubenswrapper[4813]: I0219 18:30:54.989323 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:54Z","lastTransitionTime":"2026-02-19T18:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.091582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.091621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.091632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.091646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.091655 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:55Z","lastTransitionTime":"2026-02-19T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.194466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.194534 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.194546 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.194565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.194578 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:55Z","lastTransitionTime":"2026-02-19T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.298101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.298165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.298176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.298191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.298203 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:55Z","lastTransitionTime":"2026-02-19T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.401249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.401300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.401314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.401334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.401349 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:55Z","lastTransitionTime":"2026-02-19T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.444670 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:34:18.837942112 +0000 UTC Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.470921 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.471102 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:55 crc kubenswrapper[4813]: E0219 18:30:55.471160 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.470921 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:55 crc kubenswrapper[4813]: E0219 18:30:55.471307 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:55 crc kubenswrapper[4813]: E0219 18:30:55.471400 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.504536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.504592 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.504611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.504634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.504679 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:55Z","lastTransitionTime":"2026-02-19T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.607994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.608069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.608090 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.608119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.608138 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:55Z","lastTransitionTime":"2026-02-19T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.710550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.710606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.710623 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.710651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.710672 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:55Z","lastTransitionTime":"2026-02-19T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.812927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.813027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.813040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.813055 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.813067 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:55Z","lastTransitionTime":"2026-02-19T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.915498 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.915540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.915552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.915569 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:55 crc kubenswrapper[4813]: I0219 18:30:55.915581 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:55Z","lastTransitionTime":"2026-02-19T18:30:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.018215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.018281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.018298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.018327 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.018344 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:56Z","lastTransitionTime":"2026-02-19T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.121477 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.121820 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.122077 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.122246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.122377 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:56Z","lastTransitionTime":"2026-02-19T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.225816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.225865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.225877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.225896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.225914 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:56Z","lastTransitionTime":"2026-02-19T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.328815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.328864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.328874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.328890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.328901 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:56Z","lastTransitionTime":"2026-02-19T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.431186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.431227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.431236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.431251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.431261 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:56Z","lastTransitionTime":"2026-02-19T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.445655 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:57:16.610072598 +0000 UTC Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.470604 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:56 crc kubenswrapper[4813]: E0219 18:30:56.471060 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.534076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.534133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.534143 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.534156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.534165 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:56Z","lastTransitionTime":"2026-02-19T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.637474 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.637526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.637539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.637558 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.637571 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:56Z","lastTransitionTime":"2026-02-19T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.741218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.741287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.741305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.741332 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.741350 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:56Z","lastTransitionTime":"2026-02-19T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.844187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.844251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.844263 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.844284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.844295 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:56Z","lastTransitionTime":"2026-02-19T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.947289 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.947346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.947365 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.947389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:56 crc kubenswrapper[4813]: I0219 18:30:56.947406 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:56Z","lastTransitionTime":"2026-02-19T18:30:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.064299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.064348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.064360 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.064379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.064392 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:57Z","lastTransitionTime":"2026-02-19T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.167380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.167426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.167437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.167454 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.167467 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:57Z","lastTransitionTime":"2026-02-19T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.269774 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.269813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.269825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.269841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.269851 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:57Z","lastTransitionTime":"2026-02-19T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.373471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.373519 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.373529 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.373548 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.373560 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:57Z","lastTransitionTime":"2026-02-19T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.446362 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:55:59.476086821 +0000 UTC Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.471144 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.471633 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.472051 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:30:57 crc kubenswrapper[4813]: E0219 18:30:57.472444 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:30:57 crc kubenswrapper[4813]: E0219 18:30:57.472300 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:30:57 crc kubenswrapper[4813]: E0219 18:30:57.473023 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.475897 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.475940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.475974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.475991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.476003 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:57Z","lastTransitionTime":"2026-02-19T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.578838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.579193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.579300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.579407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.579517 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:57Z","lastTransitionTime":"2026-02-19T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.682397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.682447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.682459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.682475 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.682488 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:57Z","lastTransitionTime":"2026-02-19T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.784809 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.784855 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.784871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.784894 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.784913 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:57Z","lastTransitionTime":"2026-02-19T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.887675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.887714 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.887729 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.887751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.887768 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:57Z","lastTransitionTime":"2026-02-19T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.990713 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.990762 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.990773 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.990792 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:57 crc kubenswrapper[4813]: I0219 18:30:57.990806 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:57Z","lastTransitionTime":"2026-02-19T18:30:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.093836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.093890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.093909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.093934 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.093978 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:58Z","lastTransitionTime":"2026-02-19T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.196407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.196460 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.196478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.196501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.196518 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:58Z","lastTransitionTime":"2026-02-19T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.299577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.299634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.299701 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.299730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.299748 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:58Z","lastTransitionTime":"2026-02-19T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.402383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.402438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.402451 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.402466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.402478 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:58Z","lastTransitionTime":"2026-02-19T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.447086 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:24:43.049055858 +0000 UTC Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.471494 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:30:58 crc kubenswrapper[4813]: E0219 18:30:58.471690 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.504944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.505350 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.505496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.505689 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.505835 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:58Z","lastTransitionTime":"2026-02-19T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.608891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.609278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.609454 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.609625 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.609753 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:58Z","lastTransitionTime":"2026-02-19T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.713578 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.714726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.715136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.715302 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.715425 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:58Z","lastTransitionTime":"2026-02-19T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.818488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.818545 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.818563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.818586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.818602 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:58Z","lastTransitionTime":"2026-02-19T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.922530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.928116 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.928338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.928505 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:58 crc kubenswrapper[4813]: I0219 18:30:58.928649 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:58Z","lastTransitionTime":"2026-02-19T18:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.032296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.032356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.032373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.032397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.032430 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:59Z","lastTransitionTime":"2026-02-19T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.135309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.135601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.135727 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.135865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.135993 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:59Z","lastTransitionTime":"2026-02-19T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.238777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.238847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.238865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.238889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.238905 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:59Z","lastTransitionTime":"2026-02-19T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.342256 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.342574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.342799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.342903 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.343021 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:59Z","lastTransitionTime":"2026-02-19T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.445287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.445536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.445628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.445712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.445784 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:59Z","lastTransitionTime":"2026-02-19T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.447621 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 10:58:46.699440328 +0000 UTC
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.470552 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 18:30:59 crc kubenswrapper[4813]: E0219 18:30:59.470702 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.470552 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.470770 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 18:30:59 crc kubenswrapper[4813]: E0219 18:30:59.470944 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 18:30:59 crc kubenswrapper[4813]: E0219 18:30:59.471065 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.519537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.519808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.519904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.520027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.520114 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:59Z","lastTransitionTime":"2026-02-19T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:59 crc kubenswrapper[4813]: E0219 18:30:59.539670 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:59Z is after 2025-08-24T17:21:41Z"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.544114 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.544173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.544198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.544227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.544246 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:59Z","lastTransitionTime":"2026-02-19T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:59 crc kubenswrapper[4813]: E0219 18:30:59.559081 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:59Z is after 2025-08-24T17:21:41Z"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.563833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.564018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.564137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.564246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.564379 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:59Z","lastTransitionTime":"2026-02-19T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:59 crc kubenswrapper[4813]: E0219 18:30:59.608654 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:59Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.615718 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.615791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.615813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.615845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.615870 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:59Z","lastTransitionTime":"2026-02-19T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:59 crc kubenswrapper[4813]: E0219 18:30:59.638987 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:59Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.643714 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.643921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.644056 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.644185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.644313 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:59Z","lastTransitionTime":"2026-02-19T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:59 crc kubenswrapper[4813]: E0219 18:30:59.664017 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:30:59Z is after 2025-08-24T17:21:41Z" Feb 19 18:30:59 crc kubenswrapper[4813]: E0219 18:30:59.664137 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.665918 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.666032 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.666120 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.666201 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.666284 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:59Z","lastTransitionTime":"2026-02-19T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.769866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.770803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.771057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.771262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.771480 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:59Z","lastTransitionTime":"2026-02-19T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.875359 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.875424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.875444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.875470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.875487 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:59Z","lastTransitionTime":"2026-02-19T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.978513 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.978611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.978628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.978652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:30:59 crc kubenswrapper[4813]: I0219 18:30:59.978669 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:30:59Z","lastTransitionTime":"2026-02-19T18:30:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.081069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.081650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.081815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.081983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.082139 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:00Z","lastTransitionTime":"2026-02-19T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.185002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.185076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.185100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.185132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.185156 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:00Z","lastTransitionTime":"2026-02-19T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.288063 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.288432 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.288560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.288800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.288990 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:00Z","lastTransitionTime":"2026-02-19T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.392429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.392940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.393471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.393704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.393980 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:00Z","lastTransitionTime":"2026-02-19T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.448681 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 17:30:39.127078476 +0000 UTC Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.470977 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:00 crc kubenswrapper[4813]: E0219 18:31:00.471731 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.497464 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.497526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.497545 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.497570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.497588 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:00Z","lastTransitionTime":"2026-02-19T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.600442 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.600790 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.601011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.601203 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.601373 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:00Z","lastTransitionTime":"2026-02-19T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.704591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.705128 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.705165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.710198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.710565 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:00Z","lastTransitionTime":"2026-02-19T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.813236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.813297 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.813314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.813338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.813355 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:00Z","lastTransitionTime":"2026-02-19T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.915864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.915920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.915996 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.916020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:00 crc kubenswrapper[4813]: I0219 18:31:00.916037 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:00Z","lastTransitionTime":"2026-02-19T18:31:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.018861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.018928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.018974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.018997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.019012 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:01Z","lastTransitionTime":"2026-02-19T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.122154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.122190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.122201 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.122217 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.122230 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:01Z","lastTransitionTime":"2026-02-19T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.224845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.224905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.224921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.224943 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.224998 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:01Z","lastTransitionTime":"2026-02-19T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.327892 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.327979 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.327998 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.328023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.328041 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:01Z","lastTransitionTime":"2026-02-19T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.431068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.431138 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.431156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.431181 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.431201 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:01Z","lastTransitionTime":"2026-02-19T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.449447 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:38:01.96036954 +0000 UTC Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.470977 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.471015 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.471135 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:01 crc kubenswrapper[4813]: E0219 18:31:01.471140 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:01 crc kubenswrapper[4813]: E0219 18:31:01.471491 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:01 crc kubenswrapper[4813]: E0219 18:31:01.471927 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.473750 4813 scope.go:117] "RemoveContainer" containerID="90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.491097 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\
\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.511884 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.531825 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.535130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.535190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.535206 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.535230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.535246 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:01Z","lastTransitionTime":"2026-02-19T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.551472 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.574514 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e8bb938cc19db067c4260fd25416cf155dd95f94c41f03a9ef2b4d29589ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:50Z\\\",\\\"message\\\":\\\"2026-02-19T18:30:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196\\\\n2026-02-19T18:30:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196 to /host/opt/cni/bin/\\\\n2026-02-19T18:30:05Z [verbose] multus-daemon started\\\\n2026-02-19T18:30:05Z [verbose] Readiness Indicator file check\\\\n2026-02-19T18:30:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.590843 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c0521504fd7160ad4e3c0feff6ed1
500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.608255 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc 
kubenswrapper[4813]: I0219 18:31:01.631870 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b424
6d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.637721 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.637768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.637785 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.637810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.637828 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:01Z","lastTransitionTime":"2026-02-19T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.663908 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:32Z\\\",\\\"message\\\":\\\"s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.490543 6488 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:32.490653 6488 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.491223 6488 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 18:30:32.491297 6488 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 18:30:32.491309 6488 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 18:30:32.491356 6488 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 18:30:32.491372 6488 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 18:30:32.491393 6488 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:32.491409 6488 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 18:30:32.491423 6488 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 18:30:32.491463 6488 factory.go:656] Stopping watch factory\\\\nI0219 18:30:32.491489 6488 ovnkube.go:599] Stopped ovnkube\\\\nI0219 18:30:32.491500 6488 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:30:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44
f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.679773 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c31506-f64a-4742-a5f9-fb4f7cb97f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2b8202d5d41248673716db8cad0618006cc52e967751eb392b99663c4aa90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6995816d5f4dd389140af1ce1d1d5e45df8cf15a28c51cd5e13ee52c094377f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4760dc159a8f00a8d74f75e4c3b1f50b3548501b3a7623e96aaa16c1013611c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.696858 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.710674 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.728451 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.743722 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf
95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.744288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.744324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.744375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.744404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.744416 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:01Z","lastTransitionTime":"2026-02-19T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.767066 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.783337 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9fdac97-44c9-402a-9097-cc8bb3cd60c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca66b5227f874cb972da780e7130516ccefdef0f8a8c6b5258886aba6a54ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.814432 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.830842 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.845372 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.847291 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.847360 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.847385 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.847412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.847433 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:01Z","lastTransitionTime":"2026-02-19T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.950712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.950757 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.950769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.950788 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.950800 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:01Z","lastTransitionTime":"2026-02-19T18:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.958221 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovnkube-controller/2.log" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.961287 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerStarted","Data":"0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c"} Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.962185 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:31:01 crc kubenswrapper[4813]: I0219 18:31:01.988016 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f3
1d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:01Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.024846 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:32Z\\\",\\\"message\\\":\\\"s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.490543 6488 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:32.490653 6488 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.491223 6488 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 18:30:32.491297 6488 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 18:30:32.491309 6488 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 18:30:32.491356 6488 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 18:30:32.491372 6488 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 18:30:32.491393 6488 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:32.491409 6488 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 18:30:32.491423 6488 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 18:30:32.491463 6488 factory.go:656] Stopping watch factory\\\\nI0219 18:30:32.491489 6488 ovnkube.go:599] Stopped ovnkube\\\\nI0219 18:30:32.491500 6488 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 
18:30:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:31:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.050839 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173
a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.052539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.052705 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.052833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:02 crc 
kubenswrapper[4813]: I0219 18:31:02.053014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.053138 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:02Z","lastTransitionTime":"2026-02-19T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.069913 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c31506-f64a-4742-a5f9-fb4f7cb97f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2b8202d5d41248673716db8cad0618006cc52e967751eb392b99663c4aa90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6995816d5f4dd389140af1ce1d1d5e45df8cf15a28c51cd5e13ee52c094377f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4760dc159a8f00a8d74f75e4c3b1f50b3548501b3a7623e96aaa16c1013611c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.084430 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.095016 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.110984 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.122658 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c60909
1bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.136779 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\"
,\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.154153 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9fdac97-44c9-402a-9097-cc8bb3cd60c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca66b5227f874cb972da780e7130516ccefdef0f8a8c6b5258886aba6a54ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.155821 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.155868 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.155880 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.155895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.155904 4813 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:02Z","lastTransitionTime":"2026-02-19T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.178290 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed
834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.196762 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.212578 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e8bb938cc19db067c4260fd25416cf155dd95f94c41f03a9ef2b4d29589ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:50Z\\\",\\\"message\\\":\\\"2026-02-19T18:30:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196\\\\n2026-02-19T18:30:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196 to /host/opt/cni/bin/\\\\n2026-02-19T18:30:05Z [verbose] multus-daemon started\\\\n2026-02-19T18:30:05Z [verbose] Readiness Indicator file check\\\\n2026-02-19T18:30:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.230916 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c0521504fd7160ad4e3c0feff6ed1
500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.248571 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc 
kubenswrapper[4813]: I0219 18:31:02.258544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.258583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.258597 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.258620 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.258635 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:02Z","lastTransitionTime":"2026-02-19T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.271521 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9
ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.287312 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.302196 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.328805 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.362019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.362081 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.362097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.362118 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.362132 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:02Z","lastTransitionTime":"2026-02-19T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.450037 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:02:50.172628104 +0000 UTC Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.465644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.465699 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.465716 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.465741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.465757 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:02Z","lastTransitionTime":"2026-02-19T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.470930 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:02 crc kubenswrapper[4813]: E0219 18:31:02.471311 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.568139 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.568228 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.568242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.568259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.568536 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:02Z","lastTransitionTime":"2026-02-19T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.671735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.671789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.671805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.671825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.671841 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:02Z","lastTransitionTime":"2026-02-19T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.774420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.774506 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.774532 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.774566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.774592 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:02Z","lastTransitionTime":"2026-02-19T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.877332 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.877406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.877424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.878002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.878052 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:02Z","lastTransitionTime":"2026-02-19T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.967260 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovnkube-controller/3.log" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.968271 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovnkube-controller/2.log" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.971591 4813 generic.go:334] "Generic (PLEG): container finished" podID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerID="0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c" exitCode=1 Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.971646 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerDied","Data":"0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c"} Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.971693 4813 scope.go:117] "RemoveContainer" containerID="90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.972758 4813 scope.go:117] "RemoveContainer" containerID="0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c" Feb 19 18:31:02 crc kubenswrapper[4813]: E0219 18:31:02.973023 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.983276 4813 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.983308 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.983320 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.983336 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.983349 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:02Z","lastTransitionTime":"2026-02-19T18:31:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:02 crc kubenswrapper[4813]: I0219 18:31:02.996354 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:02Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.013588 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.031809 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c31506-f64a-4742-a5f9-fb4f7cb97f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2b8202d5d41248673716db8cad0618006cc52e967751eb392b99663c4aa90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6995816d5f4dd3
89140af1ce1d1d5e45df8cf15a28c51cd5e13ee52c094377f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4760dc159a8f00a8d74f75e4c3b1f50b3548501b3a7623e96aaa16c1013611c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.052189 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.067232 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.086792 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.086849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.086865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.086889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.086906 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:03Z","lastTransitionTime":"2026-02-19T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.088151 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.106081 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.128604 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\"
,\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.146866 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9fdac97-44c9-402a-9097-cc8bb3cd60c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca66b5227f874cb972da780e7130516ccefdef0f8a8c6b5258886aba6a54ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.181125 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.189582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.189636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.189654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.189678 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.189697 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:03Z","lastTransitionTime":"2026-02-19T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.203521 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.231138 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e8bb938cc19db067c4260fd25416cf155dd95f94c41f03a9ef2b4d29589ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:50Z\\\",\\\"message\\\":\\\"2026-02-19T18:30:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196\\\\n2026-02-19T18:30:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196 to /host/opt/cni/bin/\\\\n2026-02-19T18:30:05Z [verbose] multus-daemon started\\\\n2026-02-19T18:30:05Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T18:30:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.251771 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c0521504fd7160ad4e3c0feff6ed1
500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.268499 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc 
kubenswrapper[4813]: I0219 18:31:03.333865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.333915 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.333931 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.333979 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.334002 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:03Z","lastTransitionTime":"2026-02-19T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.337233 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9
ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.349202 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.366896 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.386327 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6d
d9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.411880 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90c570c2b717e5736262333676c0fd0ffbdbd3957f8c27964a5126d2df92cc13\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:32Z\\\",\\\"message\\\":\\\"s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.490543 6488 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:30:32.490653 6488 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 18:30:32.491223 6488 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 18:30:32.491297 6488 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 18:30:32.491309 6488 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 18:30:32.491356 6488 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 18:30:32.491372 6488 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 18:30:32.491393 6488 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:30:32.491409 6488 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 18:30:32.491423 6488 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 18:30:32.491463 6488 factory.go:656] Stopping watch factory\\\\nI0219 18:30:32.491489 6488 ovnkube.go:599] Stopped ovnkube\\\\nI0219 18:30:32.491500 6488 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:30:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:31:02Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:31:02.493402 6884 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:31:02.493481 6884 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 18:31:02.493608 6884 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 18:31:02.493649 6884 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 18:31:02.493658 6884 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 
18:31:02.493689 6884 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 18:31:02.493700 6884 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 18:31:02.493731 6884 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:31:02.493737 6884 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 18:31:02.493752 6884 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 18:31:02.493767 6884 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 18:31:02.493777 6884 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 18:31:02.493785 6884 factory.go:656] Stopping watch factory\\\\nI0219 18:31:02.493788 6884 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 18:31:02.493799 6884 ovnkube.go:599] Stopped ovnkube\\\\nI0219 18:31:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:31:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:03Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.437936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.438019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.438041 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.438068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.438086 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:03Z","lastTransitionTime":"2026-02-19T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.450428 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:33:34.597829322 +0000 UTC Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.470992 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.471092 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:03 crc kubenswrapper[4813]: E0219 18:31:03.471176 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:03 crc kubenswrapper[4813]: E0219 18:31:03.471266 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.471345 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:03 crc kubenswrapper[4813]: E0219 18:31:03.471522 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.540632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.540677 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.540693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.540720 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.540737 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:03Z","lastTransitionTime":"2026-02-19T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.644061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.644125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.644146 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.644173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.644192 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:03Z","lastTransitionTime":"2026-02-19T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.746667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.746734 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.746750 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.746777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.746793 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:03Z","lastTransitionTime":"2026-02-19T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.850402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.850464 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.850482 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.850509 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.850536 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:03Z","lastTransitionTime":"2026-02-19T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.953370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.953429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.953445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.953469 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.953507 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:03Z","lastTransitionTime":"2026-02-19T18:31:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.977773 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovnkube-controller/3.log" Feb 19 18:31:03 crc kubenswrapper[4813]: I0219 18:31:03.983212 4813 scope.go:117] "RemoveContainer" containerID="0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c" Feb 19 18:31:03 crc kubenswrapper[4813]: E0219 18:31:03.983558 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.006456 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\"
,\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.024334 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9fdac97-44c9-402a-9097-cc8bb3cd60c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca66b5227f874cb972da780e7130516ccefdef0f8a8c6b5258886aba6a54ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.050657 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.056531 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.056580 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.056590 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.056606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.056618 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:04Z","lastTransitionTime":"2026-02-19T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.068338 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.079437 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.097984 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.116720 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.134448 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.153247 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.159020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.159100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.159115 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.159136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.159152 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:04Z","lastTransitionTime":"2026-02-19T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.172201 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e8bb938cc19db067c4260fd25416cf155dd95f94c41f03a9ef2b4d29589ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3313
ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:50Z\\\",\\\"message\\\":\\\"2026-02-19T18:30:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196\\\\n2026-02-19T18:30:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196 to /host/opt/cni/bin/\\\\n2026-02-19T18:30:05Z [verbose] multus-daemon started\\\\n2026-02-19T18:30:05Z [verbose] Readiness Indicator file check\\\\n2026-02-19T18:30:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.188316 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c0521504fd7160ad4e3c0feff6ed1
500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.201325 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc 
kubenswrapper[4813]: I0219 18:31:04.222383 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b424
6d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.251610 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:31:02Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:31:02.493402 6884 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:31:02.493481 6884 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 18:31:02.493608 6884 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0219 18:31:02.493649 6884 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 18:31:02.493658 6884 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 18:31:02.493689 6884 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 18:31:02.493700 6884 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 18:31:02.493731 6884 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:31:02.493737 6884 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 18:31:02.493752 6884 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 18:31:02.493767 6884 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 18:31:02.493777 6884 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 18:31:02.493785 6884 factory.go:656] Stopping watch factory\\\\nI0219 18:31:02.493788 6884 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 18:31:02.493799 6884 ovnkube.go:599] Stopped ovnkube\\\\nI0219 18:31:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:31:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44
f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.262346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.262386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.262397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.262413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.262425 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:04Z","lastTransitionTime":"2026-02-19T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.271238 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c31506-f64a-4742-a5f9-fb4f7cb97f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2b8202d5d41248673716db8cad0618006cc52e967751eb392b99663c4aa90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6995816d5f4dd389140af1ce1d1d5e45df8cf15a28c51cd5e13ee52c094377f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4760dc159a8f00a8d74f75e4c3b1f50b3548501b3a7623e96aaa16c1013611c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.286257 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.304220 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.319673 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.332024 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf
95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\
":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:04Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.365452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.365493 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.365510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.365533 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.365553 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:04Z","lastTransitionTime":"2026-02-19T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.451114 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:35:37.451905469 +0000 UTC Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.468087 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.468117 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.468130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.468144 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.468155 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:04Z","lastTransitionTime":"2026-02-19T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.470976 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:04 crc kubenswrapper[4813]: E0219 18:31:04.471079 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.600481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.600541 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.600557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.600580 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.600601 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:04Z","lastTransitionTime":"2026-02-19T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.703081 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.703128 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.703145 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.703170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.703188 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:04Z","lastTransitionTime":"2026-02-19T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.807249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.807309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.807321 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.807342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.807359 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:04Z","lastTransitionTime":"2026-02-19T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.910995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.911045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.911063 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.911085 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:04 crc kubenswrapper[4813]: I0219 18:31:04.911102 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:04Z","lastTransitionTime":"2026-02-19T18:31:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.014705 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.014779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.014802 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.014832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.014855 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:05Z","lastTransitionTime":"2026-02-19T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.118671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.118725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.118742 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.118764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.118783 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:05Z","lastTransitionTime":"2026-02-19T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.221523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.221583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.221599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.221626 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.221643 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:05Z","lastTransitionTime":"2026-02-19T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.325390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.325438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.325453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.325475 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.325491 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:05Z","lastTransitionTime":"2026-02-19T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.351000 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.351211 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 18:32:09.351182906 +0000 UTC m=+148.576623487 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.428462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.428521 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.428537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.428581 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.428598 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:05Z","lastTransitionTime":"2026-02-19T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.451343 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 01:15:36.437310335 +0000 UTC Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.451775 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.451862 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.451895 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.451931 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.452094 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.452169 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.452148556 +0000 UTC m=+148.677589127 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.452292 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.452326 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.452339 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.452398 
4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.452378914 +0000 UTC m=+148.677819505 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.452456 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.452480 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.452498 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.452545 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.452529469 +0000 UTC m=+148.677970050 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.452633 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.452712 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.452692994 +0000 UTC m=+148.678133565 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.471346 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.471401 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.471446 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.471515 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.471564 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:05 crc kubenswrapper[4813]: E0219 18:31:05.471753 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.531534 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.531598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.531615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.531640 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.531661 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:05Z","lastTransitionTime":"2026-02-19T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.633789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.633830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.633840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.633855 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.633868 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:05Z","lastTransitionTime":"2026-02-19T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.735873 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.735924 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.735935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.735969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.735983 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:05Z","lastTransitionTime":"2026-02-19T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.839592 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.839649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.839668 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.839690 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.839708 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:05Z","lastTransitionTime":"2026-02-19T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.942242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.942296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.942314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.942337 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:05 crc kubenswrapper[4813]: I0219 18:31:05.942354 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:05Z","lastTransitionTime":"2026-02-19T18:31:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.044432 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.044484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.044501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.044524 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.044540 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:06Z","lastTransitionTime":"2026-02-19T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.147123 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.147218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.147237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.147262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.147280 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:06Z","lastTransitionTime":"2026-02-19T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.250563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.250630 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.250654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.250686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.250709 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:06Z","lastTransitionTime":"2026-02-19T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.353723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.353776 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.353791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.353813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.353830 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:06Z","lastTransitionTime":"2026-02-19T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.451568 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:16:33.541509099 +0000 UTC Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.457101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.457152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.457168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.457190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.457207 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:06Z","lastTransitionTime":"2026-02-19T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.470806 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:06 crc kubenswrapper[4813]: E0219 18:31:06.470972 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.560000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.560057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.560079 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.560108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.560128 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:06Z","lastTransitionTime":"2026-02-19T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.662562 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.662640 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.662663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.662696 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.662719 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:06Z","lastTransitionTime":"2026-02-19T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.765863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.765913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.765923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.765938 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.765991 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:06Z","lastTransitionTime":"2026-02-19T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.868478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.868787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.868885 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.869005 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.869079 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:06Z","lastTransitionTime":"2026-02-19T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.971515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.972115 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.972274 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.972428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:06 crc kubenswrapper[4813]: I0219 18:31:06.972578 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:06Z","lastTransitionTime":"2026-02-19T18:31:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.075704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.075766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.075798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.075823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.075840 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:07Z","lastTransitionTime":"2026-02-19T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.179261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.179334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.179353 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.179376 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.179396 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:07Z","lastTransitionTime":"2026-02-19T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.283496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.283538 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.283553 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.283744 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.283807 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:07Z","lastTransitionTime":"2026-02-19T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.386798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.386862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.386887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.386916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.386938 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:07Z","lastTransitionTime":"2026-02-19T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.452045 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:44:23.428143041 +0000 UTC Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.470895 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.471051 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:07 crc kubenswrapper[4813]: E0219 18:31:07.471127 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.471168 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:07 crc kubenswrapper[4813]: E0219 18:31:07.471344 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:07 crc kubenswrapper[4813]: E0219 18:31:07.471479 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.489923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.490040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.490062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.490086 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.490102 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:07Z","lastTransitionTime":"2026-02-19T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.593397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.593661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.593802 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.593979 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.594140 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:07Z","lastTransitionTime":"2026-02-19T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.697699 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.697774 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.697799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.697828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.697847 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:07Z","lastTransitionTime":"2026-02-19T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.800392 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.800459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.800478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.800502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.800521 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:07Z","lastTransitionTime":"2026-02-19T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.903609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.903669 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.903687 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.903713 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:07 crc kubenswrapper[4813]: I0219 18:31:07.903730 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:07Z","lastTransitionTime":"2026-02-19T18:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.006635 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.006706 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.006727 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.006751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.006767 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:08Z","lastTransitionTime":"2026-02-19T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.109237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.109301 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.109319 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.109343 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.109361 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:08Z","lastTransitionTime":"2026-02-19T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.212124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.212187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.212210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.212238 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.212262 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:08Z","lastTransitionTime":"2026-02-19T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.315640 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.315707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.315730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.315760 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.315781 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:08Z","lastTransitionTime":"2026-02-19T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.419179 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.419232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.419247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.419267 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.419285 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:08Z","lastTransitionTime":"2026-02-19T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.452836 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:39:55.733937975 +0000 UTC Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.471204 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:08 crc kubenswrapper[4813]: E0219 18:31:08.471390 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.522246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.522309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.522324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.522346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.522363 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:08Z","lastTransitionTime":"2026-02-19T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.625287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.625365 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.625378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.625421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.625437 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:08Z","lastTransitionTime":"2026-02-19T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.727999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.728061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.728132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.728156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.728174 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:08Z","lastTransitionTime":"2026-02-19T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.831805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.831863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.831901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.831932 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.832003 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:08Z","lastTransitionTime":"2026-02-19T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.935316 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.935390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.935408 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.935434 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:08 crc kubenswrapper[4813]: I0219 18:31:08.935453 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:08Z","lastTransitionTime":"2026-02-19T18:31:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.038619 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.038690 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.038707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.038733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.038751 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:09Z","lastTransitionTime":"2026-02-19T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.141536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.141588 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.141603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.141624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.141640 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:09Z","lastTransitionTime":"2026-02-19T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.245021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.245085 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.245107 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.245138 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.245159 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:09Z","lastTransitionTime":"2026-02-19T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.348600 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.348671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.348694 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.348723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.348744 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:09Z","lastTransitionTime":"2026-02-19T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.451379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.451419 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.451430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.451445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.451456 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:09Z","lastTransitionTime":"2026-02-19T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.453314 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:32:33.19970782 +0000 UTC Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.471397 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.471466 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.471573 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:09 crc kubenswrapper[4813]: E0219 18:31:09.471604 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:09 crc kubenswrapper[4813]: E0219 18:31:09.471713 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:09 crc kubenswrapper[4813]: E0219 18:31:09.471836 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.555231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.555290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.555306 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.555329 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.555346 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:09Z","lastTransitionTime":"2026-02-19T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.658624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.658690 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.658705 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.658726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.658740 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:09Z","lastTransitionTime":"2026-02-19T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.761469 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.761526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.761543 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.761568 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.761585 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:09Z","lastTransitionTime":"2026-02-19T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.771530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.771580 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.771597 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.771618 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.771635 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:09Z","lastTransitionTime":"2026-02-19T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:09 crc kubenswrapper[4813]: E0219 18:31:09.792389 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.797393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.797450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.797468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.797491 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.797507 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:09Z","lastTransitionTime":"2026-02-19T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:09 crc kubenswrapper[4813]: E0219 18:31:09.818077 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.823287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.823335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.823353 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.823376 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.823392 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:09Z","lastTransitionTime":"2026-02-19T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:09 crc kubenswrapper[4813]: E0219 18:31:09.843062 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.847814 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.847865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.847884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.847905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.847920 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:09Z","lastTransitionTime":"2026-02-19T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:09 crc kubenswrapper[4813]: E0219 18:31:09.868687 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.874154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.874194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.874212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.874234 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.874251 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:09Z","lastTransitionTime":"2026-02-19T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:09 crc kubenswrapper[4813]: E0219 18:31:09.894140 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:09Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:09 crc kubenswrapper[4813]: E0219 18:31:09.894356 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.896739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.896794 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.896812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.896834 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:09 crc kubenswrapper[4813]: I0219 18:31:09.896851 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:09Z","lastTransitionTime":"2026-02-19T18:31:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.000049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.000103 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.000121 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.000172 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.000197 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:10Z","lastTransitionTime":"2026-02-19T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.103819 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.103862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.103874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.103889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.103912 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:10Z","lastTransitionTime":"2026-02-19T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.207604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.207665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.207683 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.207711 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.207728 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:10Z","lastTransitionTime":"2026-02-19T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.309797 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.309842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.309883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.309923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.309939 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:10Z","lastTransitionTime":"2026-02-19T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.412971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.413329 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.413493 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.413597 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.413701 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:10Z","lastTransitionTime":"2026-02-19T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.453864 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 08:05:16.815103226 +0000 UTC Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.471244 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:10 crc kubenswrapper[4813]: E0219 18:31:10.471577 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.516313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.516397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.516416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.516438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.516454 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:10Z","lastTransitionTime":"2026-02-19T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.619646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.619977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.620195 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.620396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.620780 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:10Z","lastTransitionTime":"2026-02-19T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.724150 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.724202 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.724219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.724242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.724259 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:10Z","lastTransitionTime":"2026-02-19T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.827350 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.827639 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.827748 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.827852 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.827943 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:10Z","lastTransitionTime":"2026-02-19T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.930820 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.930865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.930876 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.930895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:10 crc kubenswrapper[4813]: I0219 18:31:10.930908 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:10Z","lastTransitionTime":"2026-02-19T18:31:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.034009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.034064 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.034084 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.034108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.034128 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:11Z","lastTransitionTime":"2026-02-19T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.137477 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.137544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.137559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.137586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.137602 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:11Z","lastTransitionTime":"2026-02-19T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.240985 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.241047 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.241059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.241077 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.241091 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:11Z","lastTransitionTime":"2026-02-19T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.344902 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.344990 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.345003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.345016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.345026 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:11Z","lastTransitionTime":"2026-02-19T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.448090 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.448140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.448158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.448184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.448204 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:11Z","lastTransitionTime":"2026-02-19T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.454312 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:09:57.83471463 +0000 UTC Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.471322 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.471321 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:11 crc kubenswrapper[4813]: E0219 18:31:11.471567 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.471619 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:11 crc kubenswrapper[4813]: E0219 18:31:11.473067 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:11 crc kubenswrapper[4813]: E0219 18:31:11.473143 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.494330 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c31506-f64a-4742-a5f9-fb4f7cb97f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2b8202d5d41248673716db8cad0618006cc52e967751eb392b99663c4aa90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6995816d5f4dd389140af1ce1d1d5e45df8cf15a28c51cd5e13ee52c094377f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4760dc159a8f00a8d74f75e4c3b1f50b3548501b3a7623e96aaa16c1013611c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.515064 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.533388 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.552536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.552593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.552611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.552635 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.552652 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:11Z","lastTransitionTime":"2026-02-19T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.554230 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.572719 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.594293 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\
\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.610367 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9fdac97-44c9-402a-9097-cc8bb3cd60c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca66b5227f874cb972da780e7130516ccefdef0f8a8c6b5258886aba6a54ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.644676 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.655220 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.655598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.655805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.656031 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.656252 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:11Z","lastTransitionTime":"2026-02-19T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.666241 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.682645 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.704870 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.725392 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.744798 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.759356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.759429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.759450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.759480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.759499 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:11Z","lastTransitionTime":"2026-02-19T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.764071 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.784746 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e8bb938cc19db067c4260fd25416cf155dd95f94c41f03a9ef2b4d29589ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:50Z\\\",\\\"message\\\":\\\"2026-02-19T18:30:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196\\\\n2026-02-19T18:30:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196 to /host/opt/cni/bin/\\\\n2026-02-19T18:30:05Z [verbose] multus-daemon started\\\\n2026-02-19T18:30:05Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T18:30:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.804832 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c0521504fd7160ad4e3c0feff6ed1
500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.823374 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc 
kubenswrapper[4813]: I0219 18:31:11.848670 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b424
6d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.861845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.861911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.861928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.861992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.862012 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:11Z","lastTransitionTime":"2026-02-19T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.881173 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:31:02Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:31:02.493402 6884 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:31:02.493481 6884 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 18:31:02.493608 6884 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0219 18:31:02.493649 6884 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 18:31:02.493658 6884 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 18:31:02.493689 6884 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 18:31:02.493700 6884 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 18:31:02.493731 6884 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:31:02.493737 6884 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 18:31:02.493752 6884 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 18:31:02.493767 6884 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 18:31:02.493777 6884 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 18:31:02.493785 6884 factory.go:656] Stopping watch factory\\\\nI0219 18:31:02.493788 6884 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 18:31:02.493799 6884 ovnkube.go:599] Stopped ovnkube\\\\nI0219 18:31:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:31:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44
f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:11Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.964866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.964930 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.964982 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.965013 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:11 crc kubenswrapper[4813]: I0219 18:31:11.965036 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:11Z","lastTransitionTime":"2026-02-19T18:31:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.067585 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.067652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.067672 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.067697 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.067715 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:12Z","lastTransitionTime":"2026-02-19T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.171281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.171339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.171356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.171379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.171433 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:12Z","lastTransitionTime":"2026-02-19T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.274288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.274349 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.274366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.274391 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.274408 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:12Z","lastTransitionTime":"2026-02-19T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.377359 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.377428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.377447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.377472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.377489 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:12Z","lastTransitionTime":"2026-02-19T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.454867 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 08:30:33.446697818 +0000 UTC Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.471290 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:12 crc kubenswrapper[4813]: E0219 18:31:12.471482 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.480628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.480699 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.480723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.480751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.480770 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:12Z","lastTransitionTime":"2026-02-19T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.583206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.583273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.583296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.583320 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.583337 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:12Z","lastTransitionTime":"2026-02-19T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.686766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.686813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.686829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.686851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.686868 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:12Z","lastTransitionTime":"2026-02-19T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.789724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.789798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.789814 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.789840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.789860 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:12Z","lastTransitionTime":"2026-02-19T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.892375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.892460 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.892478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.892499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.892515 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:12Z","lastTransitionTime":"2026-02-19T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.994854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.994884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.994892 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.994906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:12 crc kubenswrapper[4813]: I0219 18:31:12.994917 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:12Z","lastTransitionTime":"2026-02-19T18:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.097978 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.098036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.098053 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.098075 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.098091 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:13Z","lastTransitionTime":"2026-02-19T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.200835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.200894 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.200911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.200936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.200978 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:13Z","lastTransitionTime":"2026-02-19T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.303870 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.304016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.304035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.304065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.304082 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:13Z","lastTransitionTime":"2026-02-19T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.406719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.406786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.406803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.406826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.406843 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:13Z","lastTransitionTime":"2026-02-19T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.455750 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 15:01:34.108369546 +0000 UTC Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.471149 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:13 crc kubenswrapper[4813]: E0219 18:31:13.471338 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.471176 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.471474 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:13 crc kubenswrapper[4813]: E0219 18:31:13.471616 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:13 crc kubenswrapper[4813]: E0219 18:31:13.471723 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.509262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.509312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.509329 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.509354 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.509374 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:13Z","lastTransitionTime":"2026-02-19T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.612752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.612804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.612822 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.612845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.612862 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:13Z","lastTransitionTime":"2026-02-19T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.715937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.716051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.716076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.716104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.716126 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:13Z","lastTransitionTime":"2026-02-19T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.819227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.819579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.819599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.819621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.819638 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:13Z","lastTransitionTime":"2026-02-19T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.923192 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.923252 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.923269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.923292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:13 crc kubenswrapper[4813]: I0219 18:31:13.923309 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:13Z","lastTransitionTime":"2026-02-19T18:31:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.026152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.026223 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.026242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.026268 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.026288 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:14Z","lastTransitionTime":"2026-02-19T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.128571 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.128636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.128654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.128732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.128749 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:14Z","lastTransitionTime":"2026-02-19T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.231330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.231394 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.231415 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.231443 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.231465 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:14Z","lastTransitionTime":"2026-02-19T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.335015 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.335113 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.335134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.335156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.335174 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:14Z","lastTransitionTime":"2026-02-19T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.437993 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.438047 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.438059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.438075 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.438086 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:14Z","lastTransitionTime":"2026-02-19T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.456590 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 08:57:00.740264759 +0000 UTC Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.470920 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:14 crc kubenswrapper[4813]: E0219 18:31:14.471087 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.540361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.540414 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.540432 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.540455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.540472 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:14Z","lastTransitionTime":"2026-02-19T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.644140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.644203 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.644224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.644254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.644277 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:14Z","lastTransitionTime":"2026-02-19T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.747075 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.747139 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.747158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.747181 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.747199 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:14Z","lastTransitionTime":"2026-02-19T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.849846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.849918 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.849940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.850030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.850068 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:14Z","lastTransitionTime":"2026-02-19T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.953157 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.953375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.953414 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.953453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:14 crc kubenswrapper[4813]: I0219 18:31:14.953478 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:14Z","lastTransitionTime":"2026-02-19T18:31:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.056156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.056234 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.056254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.056287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.056309 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:15Z","lastTransitionTime":"2026-02-19T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.159659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.159780 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.159798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.159821 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.159838 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:15Z","lastTransitionTime":"2026-02-19T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.262481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.262542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.262559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.262583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.262600 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:15Z","lastTransitionTime":"2026-02-19T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.367380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.367483 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.367503 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.367529 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.367557 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:15Z","lastTransitionTime":"2026-02-19T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.456712 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 20:33:39.073746598 +0000 UTC Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.470818 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.470838 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:15 crc kubenswrapper[4813]: E0219 18:31:15.470980 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:15 crc kubenswrapper[4813]: E0219 18:31:15.471255 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.471296 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:15 crc kubenswrapper[4813]: E0219 18:31:15.471472 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.471548 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.471608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.471632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.471660 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.471683 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:15Z","lastTransitionTime":"2026-02-19T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.574805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.575254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.575424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.575588 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.575726 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:15Z","lastTransitionTime":"2026-02-19T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.678173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.678428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.678568 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.678716 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.678844 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:15Z","lastTransitionTime":"2026-02-19T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.781791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.781856 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.781875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.781905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.781926 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:15Z","lastTransitionTime":"2026-02-19T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.884503 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.884576 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.884594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.884620 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.884637 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:15Z","lastTransitionTime":"2026-02-19T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.987256 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.987313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.987331 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.987358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:15 crc kubenswrapper[4813]: I0219 18:31:15.987378 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:15Z","lastTransitionTime":"2026-02-19T18:31:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.090419 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.090463 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.090481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.090504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.090521 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:16Z","lastTransitionTime":"2026-02-19T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.193781 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.193841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.193858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.193881 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.193899 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:16Z","lastTransitionTime":"2026-02-19T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.297588 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.297647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.297663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.297687 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.297708 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:16Z","lastTransitionTime":"2026-02-19T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.400480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.400634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.400653 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.400679 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.400697 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:16Z","lastTransitionTime":"2026-02-19T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.457348 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:22:15.568712282 +0000 UTC Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.470856 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:16 crc kubenswrapper[4813]: E0219 18:31:16.471061 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.472237 4813 scope.go:117] "RemoveContainer" containerID="0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c" Feb 19 18:31:16 crc kubenswrapper[4813]: E0219 18:31:16.472535 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.503856 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.503911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.503936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.504001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.504027 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:16Z","lastTransitionTime":"2026-02-19T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.607332 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.607400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.607423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.607450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.607470 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:16Z","lastTransitionTime":"2026-02-19T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.714403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.715282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.715348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.715378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.715396 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:16Z","lastTransitionTime":"2026-02-19T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.818457 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.818513 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.818529 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.818552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.818569 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:16Z","lastTransitionTime":"2026-02-19T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.922167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.922230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.922247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.922275 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:16 crc kubenswrapper[4813]: I0219 18:31:16.922293 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:16Z","lastTransitionTime":"2026-02-19T18:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.026259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.026444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.026571 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.026700 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.026796 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:17Z","lastTransitionTime":"2026-02-19T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.129406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.129474 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.129493 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.129517 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.129535 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:17Z","lastTransitionTime":"2026-02-19T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.232378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.232435 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.232452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.232477 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.232495 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:17Z","lastTransitionTime":"2026-02-19T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.335324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.335401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.335422 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.335449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.335466 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:17Z","lastTransitionTime":"2026-02-19T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.437873 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.437904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.437912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.437925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.437933 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:17Z","lastTransitionTime":"2026-02-19T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.458553 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 13:26:32.755658276 +0000 UTC Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.470838 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.470899 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:17 crc kubenswrapper[4813]: E0219 18:31:17.471014 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:17 crc kubenswrapper[4813]: E0219 18:31:17.471119 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.471206 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:17 crc kubenswrapper[4813]: E0219 18:31:17.471386 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.540023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.540137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.540162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.540187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.540209 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:17Z","lastTransitionTime":"2026-02-19T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.643452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.643511 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.643528 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.643551 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.643569 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:17Z","lastTransitionTime":"2026-02-19T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.746898 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.746971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.746988 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.747009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.747027 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:17Z","lastTransitionTime":"2026-02-19T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.850439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.850522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.850544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.850572 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.850592 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:17Z","lastTransitionTime":"2026-02-19T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.953852 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.953906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.953916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.953931 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:17 crc kubenswrapper[4813]: I0219 18:31:17.953940 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:17Z","lastTransitionTime":"2026-02-19T18:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.056378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.056412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.056422 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.056439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.056452 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:18Z","lastTransitionTime":"2026-02-19T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.159775 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.159820 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.159837 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.159861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.159879 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:18Z","lastTransitionTime":"2026-02-19T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.262478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.262539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.262557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.262580 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.262597 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:18Z","lastTransitionTime":"2026-02-19T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.365544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.365607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.365624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.365647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.365664 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:18Z","lastTransitionTime":"2026-02-19T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.459373 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 18:11:06.168753574 +0000 UTC Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.468344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.468382 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.468394 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.468410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.468420 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:18Z","lastTransitionTime":"2026-02-19T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.470755 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:18 crc kubenswrapper[4813]: E0219 18:31:18.470860 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.571506 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.571570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.571589 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.571616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.571636 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:18Z","lastTransitionTime":"2026-02-19T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.674549 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.674611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.674631 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.674656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.674673 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:18Z","lastTransitionTime":"2026-02-19T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.777140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.777184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.777196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.777211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.777222 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:18Z","lastTransitionTime":"2026-02-19T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.882223 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.882318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.882350 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.882385 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.882409 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:18Z","lastTransitionTime":"2026-02-19T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.985037 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.985120 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.985143 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.985174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:18 crc kubenswrapper[4813]: I0219 18:31:18.985198 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:18Z","lastTransitionTime":"2026-02-19T18:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.088081 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.088146 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.088173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.088206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.088227 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:19Z","lastTransitionTime":"2026-02-19T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.191127 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.191192 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.191215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.191245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.191267 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:19Z","lastTransitionTime":"2026-02-19T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.294156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.294215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.294232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.294257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.294274 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:19Z","lastTransitionTime":"2026-02-19T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.396902 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.396987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.397007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.397030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.397048 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:19Z","lastTransitionTime":"2026-02-19T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.460031 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:23:51.707446168 +0000 UTC Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.471358 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.471407 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:19 crc kubenswrapper[4813]: E0219 18:31:19.471523 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.471604 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:19 crc kubenswrapper[4813]: E0219 18:31:19.471699 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:19 crc kubenswrapper[4813]: E0219 18:31:19.477848 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.499406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.499463 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.499480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.499504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.499522 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:19Z","lastTransitionTime":"2026-02-19T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.603173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.603239 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.603259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.603326 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.603346 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:19Z","lastTransitionTime":"2026-02-19T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.706458 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.706514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.706531 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.706555 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.706573 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:19Z","lastTransitionTime":"2026-02-19T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.809024 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.809120 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.809135 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.809154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.809168 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:19Z","lastTransitionTime":"2026-02-19T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.912494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.912544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.912557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.912575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.912592 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:19Z","lastTransitionTime":"2026-02-19T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.975119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.975187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.975212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.975242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:19 crc kubenswrapper[4813]: I0219 18:31:19.975268 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:19Z","lastTransitionTime":"2026-02-19T18:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:19 crc kubenswrapper[4813]: E0219 18:31:19.995469 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:19Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.000839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.000908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.001001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.001030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.001048 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:20Z","lastTransitionTime":"2026-02-19T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:20 crc kubenswrapper[4813]: E0219 18:31:20.022122 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:20Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.027364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.027439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.027462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.027494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.027514 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:20Z","lastTransitionTime":"2026-02-19T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:20 crc kubenswrapper[4813]: E0219 18:31:20.047667 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:20Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.052335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.052403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.052426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.052456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.052480 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:20Z","lastTransitionTime":"2026-02-19T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:20 crc kubenswrapper[4813]: E0219 18:31:20.072368 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:20Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.077080 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.077174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.077195 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.077219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.077236 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:20Z","lastTransitionTime":"2026-02-19T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:20 crc kubenswrapper[4813]: E0219 18:31:20.096751 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T18:31:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"72639c5f-de3a-44bf-8f3b-4707b19d9f7d\\\",\\\"systemUUID\\\":\\\"3f17b88b-2b9a-42bd-94af-777e7f325932\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:20Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:20 crc kubenswrapper[4813]: E0219 18:31:20.097114 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.099103 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.099152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.099170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.099192 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.099210 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:20Z","lastTransitionTime":"2026-02-19T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.201622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.201688 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.201709 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.201736 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.201757 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:20Z","lastTransitionTime":"2026-02-19T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.304563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.304623 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.304641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.304664 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.304680 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:20Z","lastTransitionTime":"2026-02-19T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.315441 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs\") pod \"network-metrics-daemon-l5vng\" (UID: \"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\") " pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:20 crc kubenswrapper[4813]: E0219 18:31:20.315677 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:31:20 crc kubenswrapper[4813]: E0219 18:31:20.315764 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs podName:6fc21e0b-f723-4c9c-9ced-1683cc02fa00 nodeName:}" failed. No retries permitted until 2026-02-19 18:32:24.315740181 +0000 UTC m=+163.541180752 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs") pod "network-metrics-daemon-l5vng" (UID: "6fc21e0b-f723-4c9c-9ced-1683cc02fa00") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.408425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.408484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.408503 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.408526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.408542 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:20Z","lastTransitionTime":"2026-02-19T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.460851 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:17:36.274379457 +0000 UTC Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.471364 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:20 crc kubenswrapper[4813]: E0219 18:31:20.471606 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.511299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.511332 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.511342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.511376 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.511387 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:20Z","lastTransitionTime":"2026-02-19T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.614198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.614273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.614297 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.614321 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.614339 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:20Z","lastTransitionTime":"2026-02-19T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.717190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.717239 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.717251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.717267 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.717280 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:20Z","lastTransitionTime":"2026-02-19T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.819943 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.820048 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.820071 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.820094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.820111 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:20Z","lastTransitionTime":"2026-02-19T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.923726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.923806 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.923829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.923862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:20 crc kubenswrapper[4813]: I0219 18:31:20.923886 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:20Z","lastTransitionTime":"2026-02-19T18:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.026894 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.026929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.026939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.026973 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.026984 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:21Z","lastTransitionTime":"2026-02-19T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.130540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.130597 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.130613 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.130639 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.130656 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:21Z","lastTransitionTime":"2026-02-19T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.233050 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.233091 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.233102 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.233117 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.233128 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:21Z","lastTransitionTime":"2026-02-19T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.335979 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.336051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.336074 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.336104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.336122 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:21Z","lastTransitionTime":"2026-02-19T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.439101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.439158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.439175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.439201 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.439218 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:21Z","lastTransitionTime":"2026-02-19T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.461916 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:56:16.346231698 +0000 UTC Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.471363 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.471490 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:21 crc kubenswrapper[4813]: E0219 18:31:21.471674 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.471714 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:21 crc kubenswrapper[4813]: E0219 18:31:21.471905 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:21 crc kubenswrapper[4813]: E0219 18:31:21.472176 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.491197 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c31506-f64a-4742-a5f9-fb4f7cb97f3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2b8202d5d41248673716db8cad0618006cc52e967751eb392b99663c4aa90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6995816d5f4dd389140af1ce1d1d5e45df8cf15a28c51cd5e13ee52c094377f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4760dc159a8f00a8d74f75e4c3b1f50b3548501b3a7623e96aaa16c1013611c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7c86da51b4095ca48cd6d13dadef813f656825387e3166eb716fe2b8cc46ce3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.510892 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.526340 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vpk9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0640f474-4d6f-4a87-9750-dda71e69dd95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58268d212369ac3ebcca0839f2c778d1e2e6e36fdc4b7e374cb9044adb39163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m6kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vpk9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.542222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.542272 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.542288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.542314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.542332 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:21Z","lastTransitionTime":"2026-02-19T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.545118 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8429a2d929ad94a25a922e436298db7a8eb7522560dfa37f87f6e24f4f99b4e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.563810 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"481977a2-7072-4176-abd4-863cb6104d70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2b35ef24fab5f4dfd17bf95d550000ed1167454385baff2c19e1bc795e2fa8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n76nc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gfswm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.582079 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7e9f575-0f71-4cf9-bbca-161279ecc067\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\
\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T18:30:00Z\\\",\\\"message\\\":\\\".003654 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 18:29:55.006059 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3464865525/tls.crt::/tmp/serving-cert-3464865525/tls.key\\\\\\\"\\\\nI0219 18:30:00.917714 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 18:30:00.925836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 18:30:00.925875 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 18:30:00.925917 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 18:30:00.925928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 18:30:00.935151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 18:30:00.935201 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935214 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 18:30:00.935228 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 18:30:00.935217 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 18:30:00.935236 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 18:30:00.935260 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 18:30:00.935268 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 18:30:00.942033 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF0219 18:30:00.942062 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.598711 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9fdac97-44c9-402a-9097-cc8bb3cd60c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ca66b5227f874cb972da780e7130516ccefdef0f8a8c6b5258886aba6a54ae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c2aec2ac5b69b6d516b5f3bddbff02e25892d26a95e9f8bce5c9c5f98144310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.632915 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"424add47-7924-4d6b-9c4a-4696e02c5a45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f95dc9a2b0e36680a894dd162836072614413ad26fcf5adbae44db96aff8a00c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46edb0e58f8e2e040cc223708f42cef7060dc0a841fff9db182e66e659ced5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://def8abdfd513befc4fac3d78b8284b3b37ba3f5da05d74a7b911932ee1099344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed857dedd4a4cf59288fcbcfaa298370a93630e185b949d2e574eb828831df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://216064c0aa1256384d93ad5b952228d4b67c2767f284715e62505dd390c11268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b9746c82ed08c95759481dc08426b4e8446ed834da21c003dccfeb8231fbec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T18:29:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc8db58cc6cac8c9b830087814e9d7652fdd990b18e4e4e778669d5cbc06b508\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baca15e257ea2007369156b6a31eef0b7cd0e336cef7a9c733164b93e1927f8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:29:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.645324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.645387 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.645409 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.645474 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.645498 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:21Z","lastTransitionTime":"2026-02-19T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.654827 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44daee369357470d5c3b416d63e53f621a0434cc6fe90db2dd15a6078849bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.668095 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-55mxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af5ac962-f7d8-4759-b913-b3784e37a704\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5006f3df33dcac6500b89a837e686eb2b35a2d1c609091bda101500d15d808d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-55mxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.687159 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e24bb1ce-8e7b-485c-8ade-4d1c8d24d31d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:29:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4ec0b5d3efae532f52cc2cad5bdf32bbba176159a6d3e26584b0fabe4e67fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58efdcb776fb39a7e33af3fda3f0589e38c4bae15b8db3371541d189729d06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11a2584d7de40323b884e08e2cb47b6ab8054fadccc7516e4736c5439aef60e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T18:29:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:29:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.706708 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.726633 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.743245 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c6488cf097d9700ded1960ebddf696d77c08ffdfc6dd8b351c594ea026bb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6181782d0c635897318effbe40705986067ccb55cf05ffdba6948e46c1a054b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.750383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.750439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.750463 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.750496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.750521 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:21Z","lastTransitionTime":"2026-02-19T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.765177 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hksqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b099cefb-f2e5-4f3f-976c-7433dba77ef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e8bb938cc19db067c4260fd25416cf155dd95f94c41f03a9ef2b4d29589ad2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3313
ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:30:50Z\\\",\\\"message\\\":\\\"2026-02-19T18:30:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196\\\\n2026-02-19T18:30:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9b918bad-a915-4f26-9241-805aac721196 to /host/opt/cni/bin/\\\\n2026-02-19T18:30:05Z [verbose] multus-daemon started\\\\n2026-02-19T18:30:05Z [verbose] Readiness Indicator file check\\\\n2026-02-19T18:30:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7qdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hksqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.783463 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"614b0374-4288-40dc-9d95-e6f6566bd1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99f4d2e3db84c62b82edf3e5f8290b3b259e58655a5ea95ea51ed8aeec8e8844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef107096c0521504fd7160ad4e3c0feff6ed1
500af21c7a0328a981c340825f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpkbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqbrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.800282 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5vng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmn2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5vng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc 
kubenswrapper[4813]: I0219 18:31:21.823135 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f58d0592-08dd-49db-8c98-f262b9808e0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://934ac9bf4c98a77f31d9fddb47e9981f38daeca4f37afc09a488b48c69a13b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7968520498abaa880445c128f02c0838e254befc000966e147908c84203f74a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b4246d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17313aa183028703a67d1cf40b424
6d09224ceee622cd9f4f7d2ef239b26db47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98ab70722c91747196d065d768e30f52befde81fdedb465f22c3f7eef4705210\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab6dd9d03726493fd9fd2c358b0a87ca1997cbe2111d43a548adc9c9f43c662\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://11144c876f7a0cbaa5787ac1a093f138ec3fdb8a3cd118d4efec6cc650e3e97a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53230e417c1722eded51e2c7a87acb4e9b610aaf5d36a22d4f3f4fcd987c5160\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zdh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-jlz6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.853486 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.853552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.853574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.853602 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.853624 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:21Z","lastTransitionTime":"2026-02-19T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.856658 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"928c75f4-605c-4556-8c29-14ff4bdf6f5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T18:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T18:31:02Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 18:31:02.493402 6884 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 18:31:02.493481 6884 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 18:31:02.493608 6884 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0219 18:31:02.493649 6884 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 18:31:02.493658 6884 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 18:31:02.493689 6884 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 18:31:02.493700 6884 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 18:31:02.493731 6884 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 18:31:02.493737 6884 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 18:31:02.493752 6884 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 18:31:02.493767 6884 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 18:31:02.493777 6884 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 18:31:02.493785 6884 factory.go:656] Stopping watch factory\\\\nI0219 18:31:02.493788 6884 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 18:31:02.493799 6884 ovnkube.go:599] Stopped ovnkube\\\\nI0219 18:31:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T18:31:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T18:30:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0244cf62334ae37b44
f11119385a76667200a01b2162a064829f16e8d6c0cf65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T18:30:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T18:30:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cf9qf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T18:30:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc9t2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T18:31:21Z is after 2025-08-24T17:21:41Z" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.956948 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.957287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.957442 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.957595 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:21 crc kubenswrapper[4813]: I0219 18:31:21.957742 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:21Z","lastTransitionTime":"2026-02-19T18:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.060067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.060117 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.060134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.060156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.060175 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:22Z","lastTransitionTime":"2026-02-19T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.163825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.163901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.163923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.163991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.164013 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:22Z","lastTransitionTime":"2026-02-19T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.267116 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.267174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.267190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.267217 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.267233 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:22Z","lastTransitionTime":"2026-02-19T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.370403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.370467 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.370483 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.370508 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.370527 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:22Z","lastTransitionTime":"2026-02-19T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.463138 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:35:48.085508875 +0000 UTC Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.471442 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:22 crc kubenswrapper[4813]: E0219 18:31:22.471646 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.478716 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.478786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.478804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.478828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.478846 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:22Z","lastTransitionTime":"2026-02-19T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.581219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.581277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.581293 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.581319 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.581337 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:22Z","lastTransitionTime":"2026-02-19T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.684030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.684100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.684117 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.684144 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.684161 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:22Z","lastTransitionTime":"2026-02-19T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.787526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.787576 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.787616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.787638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.787655 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:22Z","lastTransitionTime":"2026-02-19T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.890437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.890487 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.890504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.890526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.890541 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:22Z","lastTransitionTime":"2026-02-19T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.993101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.993162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.993180 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.993204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:22 crc kubenswrapper[4813]: I0219 18:31:22.993221 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:22Z","lastTransitionTime":"2026-02-19T18:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.095647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.095712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.095733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.095759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.095776 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:23Z","lastTransitionTime":"2026-02-19T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.198525 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.198571 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.198587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.198604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.198621 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:23Z","lastTransitionTime":"2026-02-19T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.301430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.301485 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.301503 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.301527 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.301546 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:23Z","lastTransitionTime":"2026-02-19T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.404285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.404358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.404370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.404388 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.404402 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:23Z","lastTransitionTime":"2026-02-19T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.464290 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 16:39:40.920485903 +0000 UTC Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.470788 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.470800 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.471235 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:23 crc kubenswrapper[4813]: E0219 18:31:23.471419 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:23 crc kubenswrapper[4813]: E0219 18:31:23.471632 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:23 crc kubenswrapper[4813]: E0219 18:31:23.471854 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.507184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.507411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.507559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.507708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.507844 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:23Z","lastTransitionTime":"2026-02-19T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.611594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.611652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.611673 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.611703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.611725 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:23Z","lastTransitionTime":"2026-02-19T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.715240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.715284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.715293 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.715307 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.715318 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:23Z","lastTransitionTime":"2026-02-19T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.818097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.818228 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.818250 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.818275 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.818293 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:23Z","lastTransitionTime":"2026-02-19T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.921279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.921332 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.921348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.921386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:23 crc kubenswrapper[4813]: I0219 18:31:23.921403 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:23Z","lastTransitionTime":"2026-02-19T18:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.024922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.025013 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.025032 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.025055 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.025074 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:24Z","lastTransitionTime":"2026-02-19T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.131745 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.131991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.132085 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.132154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.132229 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:24Z","lastTransitionTime":"2026-02-19T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.235167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.235244 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.235266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.235291 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.235311 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:24Z","lastTransitionTime":"2026-02-19T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.337605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.337652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.337669 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.337690 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.337704 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:24Z","lastTransitionTime":"2026-02-19T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.440733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.440792 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.440811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.440835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.440854 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:24Z","lastTransitionTime":"2026-02-19T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.465278 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:36:20.69337995 +0000 UTC Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.470776 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:24 crc kubenswrapper[4813]: E0219 18:31:24.471000 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.543461 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.543519 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.543536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.543560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.543578 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:24Z","lastTransitionTime":"2026-02-19T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.646618 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.646666 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.646682 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.646706 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.646723 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:24Z","lastTransitionTime":"2026-02-19T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.749492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.749553 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.749571 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.749593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.749611 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:24Z","lastTransitionTime":"2026-02-19T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.853265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.853330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.853353 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.853380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.853399 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:24Z","lastTransitionTime":"2026-02-19T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.956822 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.956876 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.956892 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.956914 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:24 crc kubenswrapper[4813]: I0219 18:31:24.956933 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:24Z","lastTransitionTime":"2026-02-19T18:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.063931 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.064035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.064059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.064090 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.064124 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:25Z","lastTransitionTime":"2026-02-19T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.167730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.167783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.167805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.167831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.167849 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:25Z","lastTransitionTime":"2026-02-19T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.272045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.272487 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.272716 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.273227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.273501 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:25Z","lastTransitionTime":"2026-02-19T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.377070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.377410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.377674 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.377833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.378017 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:25Z","lastTransitionTime":"2026-02-19T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.465883 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:36:32.579044297 +0000 UTC Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.471490 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.471806 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.471817 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:25 crc kubenswrapper[4813]: E0219 18:31:25.472063 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:25 crc kubenswrapper[4813]: E0219 18:31:25.472261 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:25 crc kubenswrapper[4813]: E0219 18:31:25.472331 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.480444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.480497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.480513 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.480533 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.480551 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:25Z","lastTransitionTime":"2026-02-19T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.584046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.584088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.584104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.584126 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.584144 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:25Z","lastTransitionTime":"2026-02-19T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.687140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.687194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.687213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.687236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.687259 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:25Z","lastTransitionTime":"2026-02-19T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.790508 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.790795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.791064 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.791205 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.791337 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:25Z","lastTransitionTime":"2026-02-19T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.894389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.894452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.894470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.894494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.894519 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:25Z","lastTransitionTime":"2026-02-19T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.996754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.996817 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.996839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.996863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:25 crc kubenswrapper[4813]: I0219 18:31:25.996928 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:25Z","lastTransitionTime":"2026-02-19T18:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.099863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.099912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.099929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.099980 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.099999 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:26Z","lastTransitionTime":"2026-02-19T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.202762 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.202837 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.202855 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.202881 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.202899 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:26Z","lastTransitionTime":"2026-02-19T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.305312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.305379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.305401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.305428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.305449 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:26Z","lastTransitionTime":"2026-02-19T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.408497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.408585 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.408609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.408651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.408675 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:26Z","lastTransitionTime":"2026-02-19T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.466261 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:05:40.166836752 +0000 UTC Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.470599 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:26 crc kubenswrapper[4813]: E0219 18:31:26.470768 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.510867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.510946 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.510991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.511012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.511029 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:26Z","lastTransitionTime":"2026-02-19T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.613588 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.613653 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.613682 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.613716 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.613740 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:26Z","lastTransitionTime":"2026-02-19T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.717497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.717544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.717559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.717581 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.717597 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:26Z","lastTransitionTime":"2026-02-19T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.820944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.821170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.821192 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.821216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.821232 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:26Z","lastTransitionTime":"2026-02-19T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.924658 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.924730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.924751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.924781 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:26 crc kubenswrapper[4813]: I0219 18:31:26.924803 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:26Z","lastTransitionTime":"2026-02-19T18:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.027830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.027939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.028020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.028053 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.028075 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:27Z","lastTransitionTime":"2026-02-19T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.130562 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.130708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.130737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.130769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.130793 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:27Z","lastTransitionTime":"2026-02-19T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.233539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.233603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.233626 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.233655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.233678 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:27Z","lastTransitionTime":"2026-02-19T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.336643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.336705 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.336723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.336751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.336769 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:27Z","lastTransitionTime":"2026-02-19T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.439211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.439279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.439296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.439322 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.439338 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:27Z","lastTransitionTime":"2026-02-19T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.467055 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 21:57:48.310331961 +0000 UTC Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.471560 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.471589 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.471985 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:27 crc kubenswrapper[4813]: E0219 18:31:27.472197 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:27 crc kubenswrapper[4813]: E0219 18:31:27.472562 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:27 crc kubenswrapper[4813]: E0219 18:31:27.472400 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.544937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.545052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.545075 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.545127 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.545149 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:27Z","lastTransitionTime":"2026-02-19T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.649447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.649750 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.649938 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.650238 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.650398 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:27Z","lastTransitionTime":"2026-02-19T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.754305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.754364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.754379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.754401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.754417 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:27Z","lastTransitionTime":"2026-02-19T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.858447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.858713 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.858874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.859055 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.859206 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:27Z","lastTransitionTime":"2026-02-19T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.963030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.963111 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.963137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.963169 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:27 crc kubenswrapper[4813]: I0219 18:31:27.963194 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:27Z","lastTransitionTime":"2026-02-19T18:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.067587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.068058 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.068282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.068452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.068649 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:28Z","lastTransitionTime":"2026-02-19T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.171157 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.171216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.171239 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.171268 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.171289 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:28Z","lastTransitionTime":"2026-02-19T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.274390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.274811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.274994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.275168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.275321 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:28Z","lastTransitionTime":"2026-02-19T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.378539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.378582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.378592 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.378607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.378619 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:28Z","lastTransitionTime":"2026-02-19T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.468011 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:13:43.004512898 +0000 UTC Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.471291 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:28 crc kubenswrapper[4813]: E0219 18:31:28.471427 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.481125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.481185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.481197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.481218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.481265 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:28Z","lastTransitionTime":"2026-02-19T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.583858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.584099 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.584176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.584254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.584341 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:28Z","lastTransitionTime":"2026-02-19T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.686986 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.687244 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.687330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.687406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.687467 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:28Z","lastTransitionTime":"2026-02-19T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.790751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.790806 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.790823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.790846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.790865 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:28Z","lastTransitionTime":"2026-02-19T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.893631 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.893693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.893711 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.893736 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.893754 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:28Z","lastTransitionTime":"2026-02-19T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.997112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.997206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.997225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.997249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:28 crc kubenswrapper[4813]: I0219 18:31:28.997269 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:28Z","lastTransitionTime":"2026-02-19T18:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.099696 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.099775 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.099794 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.099820 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.099838 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:29Z","lastTransitionTime":"2026-02-19T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.203278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.203569 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.203730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.203886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.204064 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:29Z","lastTransitionTime":"2026-02-19T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.307722 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.307774 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.307789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.307812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.307831 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:29Z","lastTransitionTime":"2026-02-19T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.410731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.410803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.410820 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.410846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.410864 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:29Z","lastTransitionTime":"2026-02-19T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.469577 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 08:21:56.087422085 +0000 UTC Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.471025 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:29 crc kubenswrapper[4813]: E0219 18:31:29.471184 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.471460 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:29 crc kubenswrapper[4813]: E0219 18:31:29.471554 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.471747 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:29 crc kubenswrapper[4813]: E0219 18:31:29.471827 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.514796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.514866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.514883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.514907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.514926 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:29Z","lastTransitionTime":"2026-02-19T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.618057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.618120 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.618138 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.618162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.618185 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:29Z","lastTransitionTime":"2026-02-19T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.722379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.722436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.722452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.722475 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.722492 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:29Z","lastTransitionTime":"2026-02-19T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.825669 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.825739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.825760 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.825782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.825800 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:29Z","lastTransitionTime":"2026-02-19T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.928455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.928523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.928552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.928576 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:29 crc kubenswrapper[4813]: I0219 18:31:29.928592 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:29Z","lastTransitionTime":"2026-02-19T18:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.031754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.031833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.031853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.031876 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.031893 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:30Z","lastTransitionTime":"2026-02-19T18:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.134395 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.134478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.134501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.134530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.134552 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:30Z","lastTransitionTime":"2026-02-19T18:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.238071 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.238122 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.238140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.238163 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.238179 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:30Z","lastTransitionTime":"2026-02-19T18:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.341564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.342032 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.342198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.342349 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.342484 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:30Z","lastTransitionTime":"2026-02-19T18:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.446208 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.446270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.446287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.446310 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.446330 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:30Z","lastTransitionTime":"2026-02-19T18:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.454421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.454482 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.454502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.454526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.454544 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T18:31:30Z","lastTransitionTime":"2026-02-19T18:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.470400 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:35:50.185838318 +0000 UTC Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.470571 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:30 crc kubenswrapper[4813]: E0219 18:31:30.470761 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.472580 4813 scope.go:117] "RemoveContainer" containerID="0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c" Feb 19 18:31:30 crc kubenswrapper[4813]: E0219 18:31:30.473050 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pc9t2_openshift-ovn-kubernetes(928c75f4-605c-4556-8c29-14ff4bdf6f5e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.527013 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r"] Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.527680 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.533417 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.534032 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.534613 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.535495 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.576399 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=55.576368527 podStartE2EDuration="55.576368527s" podCreationTimestamp="2026-02-19 18:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:31:30.557226361 +0000 UTC m=+109.782666942" watchObservedRunningTime="2026-02-19 18:31:30.576368527 +0000 UTC m=+109.801809108" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.611453 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vpk9w" podStartSLOduration=90.611424936 podStartE2EDuration="1m30.611424936s" podCreationTimestamp="2026-02-19 18:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:31:30.593033684 +0000 UTC m=+109.818474275" watchObservedRunningTime="2026-02-19 18:31:30.611424936 +0000 UTC 
m=+109.836865517" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.628056 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a76cd78-1f27-46d3-b8ed-b763924e2bba-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l9p7r\" (UID: \"4a76cd78-1f27-46d3-b8ed-b763924e2bba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.628434 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4a76cd78-1f27-46d3-b8ed-b763924e2bba-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l9p7r\" (UID: \"4a76cd78-1f27-46d3-b8ed-b763924e2bba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.628719 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4a76cd78-1f27-46d3-b8ed-b763924e2bba-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l9p7r\" (UID: \"4a76cd78-1f27-46d3-b8ed-b763924e2bba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.629120 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a76cd78-1f27-46d3-b8ed-b763924e2bba-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l9p7r\" (UID: \"4a76cd78-1f27-46d3-b8ed-b763924e2bba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.629544 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/4a76cd78-1f27-46d3-b8ed-b763924e2bba-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l9p7r\" (UID: \"4a76cd78-1f27-46d3-b8ed-b763924e2bba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.630023 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podStartSLOduration=88.629999114 podStartE2EDuration="1m28.629999114s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:31:30.627901407 +0000 UTC m=+109.853341988" watchObservedRunningTime="2026-02-19 18:31:30.629999114 +0000 UTC m=+109.855439705" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.670863 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.670837949 podStartE2EDuration="1m29.670837949s" podCreationTimestamp="2026-02-19 18:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:31:30.654549304 +0000 UTC m=+109.879989885" watchObservedRunningTime="2026-02-19 18:31:30.670837949 +0000 UTC m=+109.896278500" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.704565 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=38.704546664 podStartE2EDuration="38.704546664s" podCreationTimestamp="2026-02-19 18:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:31:30.671346375 +0000 UTC m=+109.896786926" watchObservedRunningTime="2026-02-19 
18:31:30.704546664 +0000 UTC m=+109.929987215" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.704911 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=86.704903605 podStartE2EDuration="1m26.704903605s" podCreationTimestamp="2026-02-19 18:30:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:31:30.702442336 +0000 UTC m=+109.927882887" watchObservedRunningTime="2026-02-19 18:31:30.704903605 +0000 UTC m=+109.930344156" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.730742 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a76cd78-1f27-46d3-b8ed-b763924e2bba-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l9p7r\" (UID: \"4a76cd78-1f27-46d3-b8ed-b763924e2bba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.730894 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a76cd78-1f27-46d3-b8ed-b763924e2bba-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l9p7r\" (UID: \"4a76cd78-1f27-46d3-b8ed-b763924e2bba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.730991 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4a76cd78-1f27-46d3-b8ed-b763924e2bba-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l9p7r\" (UID: \"4a76cd78-1f27-46d3-b8ed-b763924e2bba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.731033 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4a76cd78-1f27-46d3-b8ed-b763924e2bba-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l9p7r\" (UID: \"4a76cd78-1f27-46d3-b8ed-b763924e2bba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.731134 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a76cd78-1f27-46d3-b8ed-b763924e2bba-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l9p7r\" (UID: \"4a76cd78-1f27-46d3-b8ed-b763924e2bba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.731392 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4a76cd78-1f27-46d3-b8ed-b763924e2bba-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l9p7r\" (UID: \"4a76cd78-1f27-46d3-b8ed-b763924e2bba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.731530 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4a76cd78-1f27-46d3-b8ed-b763924e2bba-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l9p7r\" (UID: \"4a76cd78-1f27-46d3-b8ed-b763924e2bba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.733689 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a76cd78-1f27-46d3-b8ed-b763924e2bba-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l9p7r\" (UID: \"4a76cd78-1f27-46d3-b8ed-b763924e2bba\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.743285 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a76cd78-1f27-46d3-b8ed-b763924e2bba-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l9p7r\" (UID: \"4a76cd78-1f27-46d3-b8ed-b763924e2bba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.750507 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-55mxf" podStartSLOduration=89.750485002 podStartE2EDuration="1m29.750485002s" podCreationTimestamp="2026-02-19 18:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:31:30.736585195 +0000 UTC m=+109.962025766" watchObservedRunningTime="2026-02-19 18:31:30.750485002 +0000 UTC m=+109.975925573" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.764846 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a76cd78-1f27-46d3-b8ed-b763924e2bba-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l9p7r\" (UID: \"4a76cd78-1f27-46d3-b8ed-b763924e2bba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.771490 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=84.771471688 podStartE2EDuration="1m24.771471688s" podCreationTimestamp="2026-02-19 18:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:31:30.770828397 +0000 UTC m=+109.996268948" 
watchObservedRunningTime="2026-02-19 18:31:30.771471688 +0000 UTC m=+109.996912269" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.854847 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.862912 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqbrz" podStartSLOduration=88.862895321 podStartE2EDuration="1m28.862895321s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:31:30.862225699 +0000 UTC m=+110.087666280" watchObservedRunningTime="2026-02-19 18:31:30.862895321 +0000 UTC m=+110.088335872" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.863450 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hksqw" podStartSLOduration=88.863443899 podStartE2EDuration="1m28.863443899s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:31:30.843571509 +0000 UTC m=+110.069012140" watchObservedRunningTime="2026-02-19 18:31:30.863443899 +0000 UTC m=+110.088884460" Feb 19 18:31:30 crc kubenswrapper[4813]: I0219 18:31:30.911210 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jlz6v" podStartSLOduration=88.911193226 podStartE2EDuration="1m28.911193226s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:31:30.910587796 +0000 UTC m=+110.136028357" watchObservedRunningTime="2026-02-19 
18:31:30.911193226 +0000 UTC m=+110.136633767" Feb 19 18:31:31 crc kubenswrapper[4813]: I0219 18:31:31.077553 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" event={"ID":"4a76cd78-1f27-46d3-b8ed-b763924e2bba","Type":"ContainerStarted","Data":"9c63e532176a3b90c93d8ec2936258bba26691529cfd9fa284f83d2107ccc45f"} Feb 19 18:31:31 crc kubenswrapper[4813]: I0219 18:31:31.077823 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" event={"ID":"4a76cd78-1f27-46d3-b8ed-b763924e2bba","Type":"ContainerStarted","Data":"5cca469fa103b1cca6fca35c3e3c78850009ddf2e7ffb4a808bda8553b34091e"} Feb 19 18:31:31 crc kubenswrapper[4813]: I0219 18:31:31.095464 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l9p7r" podStartSLOduration=90.095440807 podStartE2EDuration="1m30.095440807s" podCreationTimestamp="2026-02-19 18:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:31:31.095266341 +0000 UTC m=+110.320706892" watchObservedRunningTime="2026-02-19 18:31:31.095440807 +0000 UTC m=+110.320881378" Feb 19 18:31:31 crc kubenswrapper[4813]: I0219 18:31:31.471155 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:44:18.169468192 +0000 UTC Feb 19 18:31:31 crc kubenswrapper[4813]: I0219 18:31:31.472161 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 19 18:31:31 crc kubenswrapper[4813]: I0219 18:31:31.471537 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:31 crc kubenswrapper[4813]: I0219 18:31:31.471540 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:31 crc kubenswrapper[4813]: I0219 18:31:31.473256 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:31 crc kubenswrapper[4813]: E0219 18:31:31.473437 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:31 crc kubenswrapper[4813]: E0219 18:31:31.473823 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:31 crc kubenswrapper[4813]: E0219 18:31:31.474171 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:31 crc kubenswrapper[4813]: I0219 18:31:31.488451 4813 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 18:31:32 crc kubenswrapper[4813]: I0219 18:31:32.471128 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:32 crc kubenswrapper[4813]: E0219 18:31:32.471468 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:33 crc kubenswrapper[4813]: I0219 18:31:33.470726 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:33 crc kubenswrapper[4813]: I0219 18:31:33.470726 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:33 crc kubenswrapper[4813]: I0219 18:31:33.470925 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:33 crc kubenswrapper[4813]: E0219 18:31:33.471103 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:33 crc kubenswrapper[4813]: E0219 18:31:33.471215 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:33 crc kubenswrapper[4813]: E0219 18:31:33.471578 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:34 crc kubenswrapper[4813]: I0219 18:31:34.470389 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:34 crc kubenswrapper[4813]: E0219 18:31:34.470608 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:35 crc kubenswrapper[4813]: I0219 18:31:35.471005 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:35 crc kubenswrapper[4813]: I0219 18:31:35.471123 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:35 crc kubenswrapper[4813]: I0219 18:31:35.471136 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:35 crc kubenswrapper[4813]: E0219 18:31:35.471308 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:35 crc kubenswrapper[4813]: E0219 18:31:35.471383 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:35 crc kubenswrapper[4813]: E0219 18:31:35.471554 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:36 crc kubenswrapper[4813]: I0219 18:31:36.470859 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:36 crc kubenswrapper[4813]: E0219 18:31:36.471143 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:37 crc kubenswrapper[4813]: I0219 18:31:37.471184 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:37 crc kubenswrapper[4813]: I0219 18:31:37.471311 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:37 crc kubenswrapper[4813]: I0219 18:31:37.471196 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:37 crc kubenswrapper[4813]: E0219 18:31:37.471369 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:37 crc kubenswrapper[4813]: E0219 18:31:37.471516 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:37 crc kubenswrapper[4813]: E0219 18:31:37.471844 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:38 crc kubenswrapper[4813]: I0219 18:31:38.106463 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hksqw_b099cefb-f2e5-4f3f-976c-7433dba77ef2/kube-multus/1.log" Feb 19 18:31:38 crc kubenswrapper[4813]: I0219 18:31:38.107376 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hksqw_b099cefb-f2e5-4f3f-976c-7433dba77ef2/kube-multus/0.log" Feb 19 18:31:38 crc kubenswrapper[4813]: I0219 18:31:38.107619 4813 generic.go:334] "Generic (PLEG): container finished" podID="b099cefb-f2e5-4f3f-976c-7433dba77ef2" containerID="38e8bb938cc19db067c4260fd25416cf155dd95f94c41f03a9ef2b4d29589ad2" exitCode=1 Feb 19 18:31:38 crc kubenswrapper[4813]: I0219 18:31:38.107778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hksqw" 
event={"ID":"b099cefb-f2e5-4f3f-976c-7433dba77ef2","Type":"ContainerDied","Data":"38e8bb938cc19db067c4260fd25416cf155dd95f94c41f03a9ef2b4d29589ad2"} Feb 19 18:31:38 crc kubenswrapper[4813]: I0219 18:31:38.108057 4813 scope.go:117] "RemoveContainer" containerID="3313ad9bea8833862d8c116421e00a0c2376c81e214043db1fa4096177d01094" Feb 19 18:31:38 crc kubenswrapper[4813]: I0219 18:31:38.108724 4813 scope.go:117] "RemoveContainer" containerID="38e8bb938cc19db067c4260fd25416cf155dd95f94c41f03a9ef2b4d29589ad2" Feb 19 18:31:38 crc kubenswrapper[4813]: E0219 18:31:38.109089 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hksqw_openshift-multus(b099cefb-f2e5-4f3f-976c-7433dba77ef2)\"" pod="openshift-multus/multus-hksqw" podUID="b099cefb-f2e5-4f3f-976c-7433dba77ef2" Feb 19 18:31:38 crc kubenswrapper[4813]: I0219 18:31:38.470896 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:38 crc kubenswrapper[4813]: E0219 18:31:38.471166 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:39 crc kubenswrapper[4813]: I0219 18:31:39.112396 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hksqw_b099cefb-f2e5-4f3f-976c-7433dba77ef2/kube-multus/1.log" Feb 19 18:31:39 crc kubenswrapper[4813]: I0219 18:31:39.471319 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:39 crc kubenswrapper[4813]: I0219 18:31:39.471430 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:39 crc kubenswrapper[4813]: E0219 18:31:39.471523 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:39 crc kubenswrapper[4813]: E0219 18:31:39.471603 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:39 crc kubenswrapper[4813]: I0219 18:31:39.472055 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:39 crc kubenswrapper[4813]: E0219 18:31:39.472213 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:40 crc kubenswrapper[4813]: I0219 18:31:40.470512 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:40 crc kubenswrapper[4813]: E0219 18:31:40.470718 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:41 crc kubenswrapper[4813]: E0219 18:31:41.424705 4813 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 19 18:31:41 crc kubenswrapper[4813]: I0219 18:31:41.470860 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:41 crc kubenswrapper[4813]: I0219 18:31:41.470995 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:41 crc kubenswrapper[4813]: E0219 18:31:41.472684 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:41 crc kubenswrapper[4813]: I0219 18:31:41.472756 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:41 crc kubenswrapper[4813]: E0219 18:31:41.472891 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:41 crc kubenswrapper[4813]: E0219 18:31:41.473148 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:41 crc kubenswrapper[4813]: E0219 18:31:41.560209 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 18:31:42 crc kubenswrapper[4813]: I0219 18:31:42.471123 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:42 crc kubenswrapper[4813]: E0219 18:31:42.471628 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:43 crc kubenswrapper[4813]: I0219 18:31:43.470940 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:43 crc kubenswrapper[4813]: I0219 18:31:43.471047 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:43 crc kubenswrapper[4813]: E0219 18:31:43.471144 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:43 crc kubenswrapper[4813]: I0219 18:31:43.471153 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:43 crc kubenswrapper[4813]: E0219 18:31:43.471326 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:43 crc kubenswrapper[4813]: E0219 18:31:43.471482 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:44 crc kubenswrapper[4813]: I0219 18:31:44.471430 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:44 crc kubenswrapper[4813]: E0219 18:31:44.471992 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:45 crc kubenswrapper[4813]: I0219 18:31:45.470435 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:45 crc kubenswrapper[4813]: I0219 18:31:45.470531 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:45 crc kubenswrapper[4813]: I0219 18:31:45.470485 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:45 crc kubenswrapper[4813]: E0219 18:31:45.470704 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:45 crc kubenswrapper[4813]: E0219 18:31:45.471584 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:45 crc kubenswrapper[4813]: E0219 18:31:45.471706 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:45 crc kubenswrapper[4813]: I0219 18:31:45.472494 4813 scope.go:117] "RemoveContainer" containerID="0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c" Feb 19 18:31:46 crc kubenswrapper[4813]: I0219 18:31:46.138526 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovnkube-controller/3.log" Feb 19 18:31:46 crc kubenswrapper[4813]: I0219 18:31:46.141690 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerStarted","Data":"73bccc2e2b4d6078c9df512ec532a1628f449b7e4d7ec2890220e9986fb06cb6"} Feb 19 18:31:46 crc kubenswrapper[4813]: I0219 18:31:46.142374 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:31:46 crc kubenswrapper[4813]: I0219 18:31:46.188458 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podStartSLOduration=104.188436767 podStartE2EDuration="1m44.188436767s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:31:46.186849495 +0000 UTC m=+125.412290076" watchObservedRunningTime="2026-02-19 18:31:46.188436767 +0000 UTC m=+125.413877338" Feb 19 18:31:46 crc kubenswrapper[4813]: I0219 18:31:46.347386 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-l5vng"] Feb 19 18:31:46 crc kubenswrapper[4813]: I0219 18:31:46.347526 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:46 crc kubenswrapper[4813]: E0219 18:31:46.347664 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:46 crc kubenswrapper[4813]: E0219 18:31:46.561932 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 18:31:47 crc kubenswrapper[4813]: I0219 18:31:47.471383 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:47 crc kubenswrapper[4813]: I0219 18:31:47.471468 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:47 crc kubenswrapper[4813]: I0219 18:31:47.471629 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:47 crc kubenswrapper[4813]: E0219 18:31:47.471636 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:47 crc kubenswrapper[4813]: I0219 18:31:47.471739 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:47 crc kubenswrapper[4813]: E0219 18:31:47.471846 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:47 crc kubenswrapper[4813]: E0219 18:31:47.471942 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:47 crc kubenswrapper[4813]: E0219 18:31:47.472112 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:49 crc kubenswrapper[4813]: I0219 18:31:49.470738 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:49 crc kubenswrapper[4813]: I0219 18:31:49.470846 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:49 crc kubenswrapper[4813]: I0219 18:31:49.471037 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:49 crc kubenswrapper[4813]: I0219 18:31:49.471071 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:49 crc kubenswrapper[4813]: E0219 18:31:49.471090 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:49 crc kubenswrapper[4813]: E0219 18:31:49.471275 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:49 crc kubenswrapper[4813]: E0219 18:31:49.471416 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:49 crc kubenswrapper[4813]: E0219 18:31:49.471479 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:51 crc kubenswrapper[4813]: I0219 18:31:51.470823 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:51 crc kubenswrapper[4813]: I0219 18:31:51.470922 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:51 crc kubenswrapper[4813]: E0219 18:31:51.472278 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:51 crc kubenswrapper[4813]: I0219 18:31:51.472323 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:51 crc kubenswrapper[4813]: I0219 18:31:51.472303 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:51 crc kubenswrapper[4813]: E0219 18:31:51.472391 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:51 crc kubenswrapper[4813]: E0219 18:31:51.472592 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:51 crc kubenswrapper[4813]: E0219 18:31:51.472696 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:51 crc kubenswrapper[4813]: E0219 18:31:51.563050 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 18:31:52 crc kubenswrapper[4813]: I0219 18:31:52.471903 4813 scope.go:117] "RemoveContainer" containerID="38e8bb938cc19db067c4260fd25416cf155dd95f94c41f03a9ef2b4d29589ad2" Feb 19 18:31:53 crc kubenswrapper[4813]: I0219 18:31:53.176750 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hksqw_b099cefb-f2e5-4f3f-976c-7433dba77ef2/kube-multus/1.log" Feb 19 18:31:53 crc kubenswrapper[4813]: I0219 18:31:53.177120 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hksqw" event={"ID":"b099cefb-f2e5-4f3f-976c-7433dba77ef2","Type":"ContainerStarted","Data":"1bf20d5a1dff3d1f2180385a366bd304f81d49489fb5beb68ef022b82460d17a"} Feb 19 18:31:53 crc kubenswrapper[4813]: I0219 18:31:53.471084 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:53 crc kubenswrapper[4813]: I0219 18:31:53.471160 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:53 crc kubenswrapper[4813]: I0219 18:31:53.471213 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:53 crc kubenswrapper[4813]: I0219 18:31:53.471315 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:53 crc kubenswrapper[4813]: E0219 18:31:53.471570 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:53 crc kubenswrapper[4813]: E0219 18:31:53.471648 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:53 crc kubenswrapper[4813]: E0219 18:31:53.471715 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:53 crc kubenswrapper[4813]: E0219 18:31:53.471471 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:55 crc kubenswrapper[4813]: I0219 18:31:55.471403 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:55 crc kubenswrapper[4813]: I0219 18:31:55.471434 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:55 crc kubenswrapper[4813]: I0219 18:31:55.471462 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:55 crc kubenswrapper[4813]: E0219 18:31:55.471547 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 18:31:55 crc kubenswrapper[4813]: I0219 18:31:55.471583 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:55 crc kubenswrapper[4813]: E0219 18:31:55.471756 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5vng" podUID="6fc21e0b-f723-4c9c-9ced-1683cc02fa00" Feb 19 18:31:55 crc kubenswrapper[4813]: E0219 18:31:55.472196 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 18:31:55 crc kubenswrapper[4813]: E0219 18:31:55.472562 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 18:31:57 crc kubenswrapper[4813]: I0219 18:31:57.471100 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:31:57 crc kubenswrapper[4813]: I0219 18:31:57.471210 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:31:57 crc kubenswrapper[4813]: I0219 18:31:57.471127 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:31:57 crc kubenswrapper[4813]: I0219 18:31:57.471145 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:31:57 crc kubenswrapper[4813]: I0219 18:31:57.473484 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 18:31:57 crc kubenswrapper[4813]: I0219 18:31:57.473580 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 18:31:57 crc kubenswrapper[4813]: I0219 18:31:57.473840 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 18:31:57 crc kubenswrapper[4813]: I0219 18:31:57.473983 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 18:31:57 crc kubenswrapper[4813]: I0219 18:31:57.474012 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 18:31:57 crc kubenswrapper[4813]: I0219 18:31:57.474180 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.770156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.823079 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rksx7"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.824128 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.828035 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mgsbg"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.829297 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.829330 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.829577 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.833030 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.833466 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.835222 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.835613 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.836223 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.842626 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dlph2"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.855296 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.855868 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.856038 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.856348 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.856605 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.856754 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.856902 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.857075 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.857424 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.858705 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fdzh8"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.859261 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.859637 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.860253 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.861567 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.864645 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.865705 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.868116 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9fx4h"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.870752 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.867237 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.871233 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-9fx4h" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.869572 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.876652 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.865761 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.865829 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.865915 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.866037 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.871017 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.872921 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.873859 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.876463 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.876500 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 18:32:00 crc 
kubenswrapper[4813]: I0219 18:32:00.876682 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.876752 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.876759 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.876799 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.876829 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.876833 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.877906 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.878319 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-dqj4z"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.878714 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.879033 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.879462 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x6n5r"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.879635 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.879789 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.879905 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.880011 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.880045 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.880062 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.879921 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.880343 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.880391 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.880433 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.880504 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.880520 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.880028 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.880680 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.880687 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 18:32:00 crc 
kubenswrapper[4813]: I0219 18:32:00.880301 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.880323 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.880276 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.879917 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.881116 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qqrj6"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.881231 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.881865 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qqrj6" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.882873 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5gzd8"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.887316 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.887355 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.891478 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rffpk"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.891883 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bd28k"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.892215 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.892606 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.893757 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.894007 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.894071 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-5gzd8" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.894112 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.894203 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.894311 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rffpk" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.894528 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.894621 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.894731 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.894824 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.894913 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.895890 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rksx7"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.895916 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.904586 4813 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-b84zp"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.905395 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.905395 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.920502 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.920814 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.929924 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.930392 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.930599 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.930704 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.930805 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.930910 4813 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.931187 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.931604 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-9j26p"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.932062 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.932303 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.932549 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.932706 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.932855 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.933028 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.933229 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.933366 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.933452 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.933552 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.933621 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.933786 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.933943 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.934043 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.934221 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 
19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.934299 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.934392 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.934529 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.934613 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.934698 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.936140 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.936702 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mgsbg"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.936796 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.937106 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.937249 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.937296 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.937380 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.937456 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.937511 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.937572 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.937629 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.937690 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.938438 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.937700 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.937718 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.937763 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.939678 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.942020 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fdzh8"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.942072 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.942334 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.942610 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.944482 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.945071 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.947150 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.948943 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.949503 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.949977 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-74n48"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.951160 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-74n48" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.952069 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.952542 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953269 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ddfe429-ea67-4b0c-bab1-bc72117fddda-config\") pod \"route-controller-manager-6576b87f9c-jfz6c\" (UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953378 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17f8fff7-991c-4a69-afb4-e40b048cde5c-audit-dir\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953406 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f144bc83-c0eb-4627-ab68-9ed862e2f402-machine-approver-tls\") pod \"machine-approver-56656f9798-j6hz9\" (UID: \"f144bc83-c0eb-4627-ab68-9ed862e2f402\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953424 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7f3b7488-7c5c-4689-9c41-eacd57fdce7d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4mmbp\" (UID: \"7f3b7488-7c5c-4689-9c41-eacd57fdce7d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953437 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ddfe429-ea67-4b0c-bab1-bc72117fddda-serving-cert\") pod \"route-controller-manager-6576b87f9c-jfz6c\" (UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953452 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/17f8fff7-991c-4a69-afb4-e40b048cde5c-image-import-ca\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953473 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ddfe429-ea67-4b0c-bab1-bc72117fddda-client-ca\") pod \"route-controller-manager-6576b87f9c-jfz6c\" (UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953489 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953505 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14156cf1-57df-48bd-8723-1a7a083bab21-config\") pod \"console-operator-58897d9998-9fx4h\" (UID: \"14156cf1-57df-48bd-8723-1a7a083bab21\") " pod="openshift-console-operator/console-operator-58897d9998-9fx4h" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953522 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e3db18d-bbdd-4924-bb4c-d562ff4347b7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fdzh8\" (UID: \"6e3db18d-bbdd-4924-bb4c-d562ff4347b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953540 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f8fff7-991c-4a69-afb4-e40b048cde5c-config\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953554 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f144bc83-c0eb-4627-ab68-9ed862e2f402-config\") pod \"machine-approver-56656f9798-j6hz9\" (UID: \"f144bc83-c0eb-4627-ab68-9ed862e2f402\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953571 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9swks\" (UniqueName: 
\"kubernetes.io/projected/17f8fff7-991c-4a69-afb4-e40b048cde5c-kube-api-access-9swks\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953586 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/36d151c0-fa73-4e5f-a6de-580629bef8f1-images\") pod \"machine-api-operator-5694c8668f-dlph2\" (UID: \"36d151c0-fa73-4e5f-a6de-580629bef8f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953600 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953616 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14156cf1-57df-48bd-8723-1a7a083bab21-trusted-ca\") pod \"console-operator-58897d9998-9fx4h\" (UID: \"14156cf1-57df-48bd-8723-1a7a083bab21\") " pod="openshift-console-operator/console-operator-58897d9998-9fx4h" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953631 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-trusted-ca-bundle\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 
18:32:00.953647 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fspl9\" (UniqueName: \"kubernetes.io/projected/6e3db18d-bbdd-4924-bb4c-d562ff4347b7-kube-api-access-fspl9\") pod \"authentication-operator-69f744f599-fdzh8\" (UID: \"6e3db18d-bbdd-4924-bb4c-d562ff4347b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953665 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953687 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-service-ca\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953707 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2884d383-8ee9-456c-8d6c-d41eaebd60e6-encryption-config\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953724 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57nds\" (UniqueName: \"kubernetes.io/projected/624d3261-b732-4c7e-b1d9-56827d44c94f-kube-api-access-57nds\") pod 
\"controller-manager-879f6c89f-mgsbg\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953741 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953755 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-config\") pod \"controller-manager-879f6c89f-mgsbg\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953771 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-client-ca\") pod \"controller-manager-879f6c89f-mgsbg\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953786 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953803 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953817 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-oauth-serving-cert\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953834 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/17f8fff7-991c-4a69-afb4-e40b048cde5c-etcd-serving-ca\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953847 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36d151c0-fa73-4e5f-a6de-580629bef8f1-config\") pod \"machine-api-operator-5694c8668f-dlph2\" (UID: \"36d151c0-fa73-4e5f-a6de-580629bef8f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953861 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2884d383-8ee9-456c-8d6c-d41eaebd60e6-etcd-client\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953878 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e3db18d-bbdd-4924-bb4c-d562ff4347b7-config\") pod \"authentication-operator-69f744f599-fdzh8\" (UID: \"6e3db18d-bbdd-4924-bb4c-d562ff4347b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953892 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2884d383-8ee9-456c-8d6c-d41eaebd60e6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.953922 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954042 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mgsbg\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954075 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954103 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-oauth-config\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954140 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnqj9\" (UniqueName: \"kubernetes.io/projected/8ef1d23f-3a55-4fef-ba88-96b5ba04313f-kube-api-access-tnqj9\") pod \"openshift-config-operator-7777fb866f-wwn2x\" (UID: \"8ef1d23f-3a55-4fef-ba88-96b5ba04313f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954155 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/17f8fff7-991c-4a69-afb4-e40b048cde5c-encryption-config\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954171 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvxjx\" (UniqueName: \"kubernetes.io/projected/14156cf1-57df-48bd-8723-1a7a083bab21-kube-api-access-lvxjx\") pod \"console-operator-58897d9998-9fx4h\" (UID: \"14156cf1-57df-48bd-8723-1a7a083bab21\") " 
pod="openshift-console-operator/console-operator-58897d9998-9fx4h" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954188 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8ef1d23f-3a55-4fef-ba88-96b5ba04313f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wwn2x\" (UID: \"8ef1d23f-3a55-4fef-ba88-96b5ba04313f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954201 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2884d383-8ee9-456c-8d6c-d41eaebd60e6-serving-cert\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954217 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ef1d23f-3a55-4fef-ba88-96b5ba04313f-serving-cert\") pod \"openshift-config-operator-7777fb866f-wwn2x\" (UID: \"8ef1d23f-3a55-4fef-ba88-96b5ba04313f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954232 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954364 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/36d151c0-fa73-4e5f-a6de-580629bef8f1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dlph2\" (UID: \"36d151c0-fa73-4e5f-a6de-580629bef8f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954383 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954431 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvk6m\" (UniqueName: \"kubernetes.io/projected/6f78f99d-d988-4a96-90be-2984414b9bfe-kube-api-access-cvk6m\") pod \"dns-operator-744455d44c-qqrj6\" (UID: \"6f78f99d-d988-4a96-90be-2984414b9bfe\") " pod="openshift-dns-operator/dns-operator-744455d44c-qqrj6" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954449 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f8fff7-991c-4a69-afb4-e40b048cde5c-serving-cert\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954466 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-audit-dir\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: 
\"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954854 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/17f8fff7-991c-4a69-afb4-e40b048cde5c-node-pullsecrets\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954932 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzbcs\" (UniqueName: \"kubernetes.io/projected/2884d383-8ee9-456c-8d6c-d41eaebd60e6-kube-api-access-dzbcs\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954987 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwbdz\" (UniqueName: \"kubernetes.io/projected/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-kube-api-access-kwbdz\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.954498 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.955854 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znnmz\" (UniqueName: \"kubernetes.io/projected/c705e274-b996-46b7-8e3f-97dff9107a6f-kube-api-access-znnmz\") pod \"downloads-7954f5f757-5gzd8\" (UID: \"c705e274-b996-46b7-8e3f-97dff9107a6f\") " pod="openshift-console/downloads-7954f5f757-5gzd8" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.955915 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f3b7488-7c5c-4689-9c41-eacd57fdce7d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4mmbp\" (UID: \"7f3b7488-7c5c-4689-9c41-eacd57fdce7d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.955939 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwdxp\" (UniqueName: \"kubernetes.io/projected/7f3b7488-7c5c-4689-9c41-eacd57fdce7d-kube-api-access-cwdxp\") pod \"openshift-apiserver-operator-796bbdcf4f-4mmbp\" (UID: \"7f3b7488-7c5c-4689-9c41-eacd57fdce7d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.955976 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-serving-cert\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.955994 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-config\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956016 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcl89\" (UniqueName: \"kubernetes.io/projected/0ddfe429-ea67-4b0c-bab1-bc72117fddda-kube-api-access-hcl89\") pod \"route-controller-manager-6576b87f9c-jfz6c\" (UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956031 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/17f8fff7-991c-4a69-afb4-e40b048cde5c-etcd-client\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956055 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhltx\" (UniqueName: \"kubernetes.io/projected/f144bc83-c0eb-4627-ab68-9ed862e2f402-kube-api-access-zhltx\") pod \"machine-approver-56656f9798-j6hz9\" (UID: \"f144bc83-c0eb-4627-ab68-9ed862e2f402\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956352 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e3db18d-bbdd-4924-bb4c-d562ff4347b7-serving-cert\") pod \"authentication-operator-69f744f599-fdzh8\" (UID: 
\"6e3db18d-bbdd-4924-bb4c-d562ff4347b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956401 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/17f8fff7-991c-4a69-afb4-e40b048cde5c-audit\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956420 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2884d383-8ee9-456c-8d6c-d41eaebd60e6-audit-policies\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956450 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpkzh\" (UniqueName: \"kubernetes.io/projected/9c2745dc-3e5e-4571-ba36-aa14c87c6336-kube-api-access-kpkzh\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956525 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f144bc83-c0eb-4627-ab68-9ed862e2f402-auth-proxy-config\") pod \"machine-approver-56656f9798-j6hz9\" (UID: \"f144bc83-c0eb-4627-ab68-9ed862e2f402\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956599 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/2884d383-8ee9-456c-8d6c-d41eaebd60e6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956674 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/624d3261-b732-4c7e-b1d9-56827d44c94f-serving-cert\") pod \"controller-manager-879f6c89f-mgsbg\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956701 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e3db18d-bbdd-4924-bb4c-d562ff4347b7-service-ca-bundle\") pod \"authentication-operator-69f744f599-fdzh8\" (UID: \"6e3db18d-bbdd-4924-bb4c-d562ff4347b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956727 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzjc5\" (UniqueName: \"kubernetes.io/projected/36d151c0-fa73-4e5f-a6de-580629bef8f1-kube-api-access-rzjc5\") pod \"machine-api-operator-5694c8668f-dlph2\" (UID: \"36d151c0-fa73-4e5f-a6de-580629bef8f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956745 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956765 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6f78f99d-d988-4a96-90be-2984414b9bfe-metrics-tls\") pod \"dns-operator-744455d44c-qqrj6\" (UID: \"6f78f99d-d988-4a96-90be-2984414b9bfe\") " pod="openshift-dns-operator/dns-operator-744455d44c-qqrj6" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956804 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14156cf1-57df-48bd-8723-1a7a083bab21-serving-cert\") pod \"console-operator-58897d9998-9fx4h\" (UID: \"14156cf1-57df-48bd-8723-1a7a083bab21\") " pod="openshift-console-operator/console-operator-58897d9998-9fx4h" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956837 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2884d383-8ee9-456c-8d6c-d41eaebd60e6-audit-dir\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956875 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-audit-policies\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.956967 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/17f8fff7-991c-4a69-afb4-e40b048cde5c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.967314 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.975700 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.977312 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.977682 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.978052 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.986222 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.986581 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.986933 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.990088 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.990753 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.990859 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.991385 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.996231 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.999081 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-kmwrq"] Feb 19 18:32:00 crc kubenswrapper[4813]: I0219 18:32:00.999351 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.001504 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-crqgx"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.001662 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-kmwrq" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.001974 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-29vhv"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.002408 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.002898 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.003117 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-crqgx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.003313 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-29vhv" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.006282 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.009297 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qqsjr"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.010273 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qqsjr" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.011931 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-c5246"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.013323 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c5246" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.013385 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h5w7j"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.014232 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.014368 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5gzd8"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.015410 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rffpk"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.016764 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qqrj6"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.017540 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-7qgtc"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.018454 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7qgtc" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.018696 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.020241 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9fx4h"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.021763 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dqj4z"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.022891 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.023900 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.024977 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.026212 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.026770 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.027985 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.028756 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.029586 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.030590 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.033010 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.033463 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bd28k"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.036676 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-29vhv"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.039286 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.040432 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.043114 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h5w7j"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.044807 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-crqgx"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.046731 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.047036 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dlph2"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.049046 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-hvwhh"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.050414 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hvwhh" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.052736 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x6n5r"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.054872 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.057445 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.057471 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e3db18d-bbdd-4924-bb4c-d562ff4347b7-config\") pod \"authentication-operator-69f744f599-fdzh8\" (UID: \"6e3db18d-bbdd-4924-bb4c-d562ff4347b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.057492 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8702bc-7079-425a-94d6-ae8adc8414f2-config\") pod \"kube-apiserver-operator-766d6c64bb-l78xq\" (UID: \"cf8702bc-7079-425a-94d6-ae8adc8414f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.057509 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf8702bc-7079-425a-94d6-ae8adc8414f2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-l78xq\" (UID: \"cf8702bc-7079-425a-94d6-ae8adc8414f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.057524 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8ad966-7829-4e5b-9665-5d07b39e882f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vkxmw\" (UID: \"5a8ad966-7829-4e5b-9665-5d07b39e882f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.057540 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/36af60b0-5e67-4902-8891-adb18585316a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ptfgm\" (UID: \"36af60b0-5e67-4902-8891-adb18585316a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.057564 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnqj9\" (UniqueName: \"kubernetes.io/projected/8ef1d23f-3a55-4fef-ba88-96b5ba04313f-kube-api-access-tnqj9\") 
pod \"openshift-config-operator-7777fb866f-wwn2x\" (UID: \"8ef1d23f-3a55-4fef-ba88-96b5ba04313f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.057580 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/17f8fff7-991c-4a69-afb4-e40b048cde5c-encryption-config\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.057665 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.057685 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1-profile-collector-cert\") pod \"catalog-operator-68c6474976-6s798\" (UID: \"f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.057817 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sglfk\" (UniqueName: \"kubernetes.io/projected/36af60b0-5e67-4902-8891-adb18585316a-kube-api-access-sglfk\") pod \"olm-operator-6b444d44fb-ptfgm\" (UID: \"36af60b0-5e67-4902-8891-adb18585316a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.058380 
4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.058446 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvk6m\" (UniqueName: \"kubernetes.io/projected/6f78f99d-d988-4a96-90be-2984414b9bfe-kube-api-access-cvk6m\") pod \"dns-operator-744455d44c-qqrj6\" (UID: \"6f78f99d-d988-4a96-90be-2984414b9bfe\") " pod="openshift-dns-operator/dns-operator-744455d44c-qqrj6" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.058499 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f442d86c-f2fa-4597-a3c7-d462dd0aa9f6-apiservice-cert\") pod \"packageserver-d55dfcdfc-hkvpr\" (UID: \"f442d86c-f2fa-4597-a3c7-d462dd0aa9f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.058544 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/209ad545-4905-4130-baf8-1c3e9576e788-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6zffl\" (UID: \"209ad545-4905-4130-baf8-1c3e9576e788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.058590 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f3b7488-7c5c-4689-9c41-eacd57fdce7d-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-4mmbp\" (UID: \"7f3b7488-7c5c-4689-9c41-eacd57fdce7d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.058626 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzbcs\" (UniqueName: \"kubernetes.io/projected/2884d383-8ee9-456c-8d6c-d41eaebd60e6-kube-api-access-dzbcs\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.059013 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e3db18d-bbdd-4924-bb4c-d562ff4347b7-config\") pod \"authentication-operator-69f744f599-fdzh8\" (UID: \"6e3db18d-bbdd-4924-bb4c-d562ff4347b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.059338 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060096 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwbdz\" (UniqueName: \"kubernetes.io/projected/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-kube-api-access-kwbdz\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060158 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-serving-cert\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060192 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/209ad545-4905-4130-baf8-1c3e9576e788-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6zffl\" (UID: \"209ad545-4905-4130-baf8-1c3e9576e788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060218 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/87c78a23-ef77-4a1e-ba67-45f3a4adccfb-proxy-tls\") pod \"machine-config-controller-84d6567774-7w8k7\" (UID: \"87c78a23-ef77-4a1e-ba67-45f3a4adccfb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060239 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2c70e2d1-cf19-4a67-880f-e52e61ca6953-metrics-tls\") pod \"ingress-operator-5b745b69d9-8dvp5\" (UID: \"2c70e2d1-cf19-4a67-880f-e52e61ca6953\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060258 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/17f8fff7-991c-4a69-afb4-e40b048cde5c-audit\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " 
pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060280 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/624d3261-b732-4c7e-b1d9-56827d44c94f-serving-cert\") pod \"controller-manager-879f6c89f-mgsbg\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060303 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqpsc\" (UniqueName: \"kubernetes.io/projected/b89f8e03-2720-4c2e-a7fc-e43dda75cc4c-kube-api-access-cqpsc\") pod \"cluster-samples-operator-665b6dd947-rffpk\" (UID: \"b89f8e03-2720-4c2e-a7fc-e43dda75cc4c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rffpk" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060325 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a8ad966-7829-4e5b-9665-5d07b39e882f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vkxmw\" (UID: \"5a8ad966-7829-4e5b-9665-5d07b39e882f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060342 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f144bc83-c0eb-4627-ab68-9ed862e2f402-auth-proxy-config\") pod \"machine-approver-56656f9798-j6hz9\" (UID: \"f144bc83-c0eb-4627-ab68-9ed862e2f402\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060364 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rzjc5\" (UniqueName: \"kubernetes.io/projected/36d151c0-fa73-4e5f-a6de-580629bef8f1-kube-api-access-rzjc5\") pod \"machine-api-operator-5694c8668f-dlph2\" (UID: \"36d151c0-fa73-4e5f-a6de-580629bef8f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060387 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/361629f6-7c8a-4523-99e4-c7b152588855-config\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060414 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c70e2d1-cf19-4a67-880f-e52e61ca6953-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8dvp5\" (UID: \"2c70e2d1-cf19-4a67-880f-e52e61ca6953\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060436 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvvnd\" (UniqueName: \"kubernetes.io/projected/692367ab-53d4-4c1c-aa46-c70e247b848a-kube-api-access-rvvnd\") pod \"control-plane-machine-set-operator-78cbb6b69f-74n48\" (UID: \"692367ab-53d4-4c1c-aa46-c70e247b848a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-74n48" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060455 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17f8fff7-991c-4a69-afb4-e40b048cde5c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " 
pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060476 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-audit-policies\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060533 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ddfe429-ea67-4b0c-bab1-bc72117fddda-config\") pod \"route-controller-manager-6576b87f9c-jfz6c\" (UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060553 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17f8fff7-991c-4a69-afb4-e40b048cde5c-audit-dir\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060578 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqh9x\" (UniqueName: \"kubernetes.io/projected/3d471333-dfe7-45ec-bb58-f5020bca76cd-kube-api-access-rqh9x\") pod \"package-server-manager-789f6589d5-chl4q\" (UID: \"3d471333-dfe7-45ec-bb58-f5020bca76cd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060610 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060632 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtp4q\" (UniqueName: \"kubernetes.io/projected/5a8ad966-7829-4e5b-9665-5d07b39e882f-kube-api-access-mtp4q\") pod \"openshift-controller-manager-operator-756b6f6bc6-vkxmw\" (UID: \"5a8ad966-7829-4e5b-9665-5d07b39e882f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060650 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/361629f6-7c8a-4523-99e4-c7b152588855-etcd-service-ca\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060673 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f144bc83-c0eb-4627-ab68-9ed862e2f402-config\") pod \"machine-approver-56656f9798-j6hz9\" (UID: \"f144bc83-c0eb-4627-ab68-9ed862e2f402\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060694 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgd7r\" (UniqueName: \"kubernetes.io/projected/87c78a23-ef77-4a1e-ba67-45f3a4adccfb-kube-api-access-cgd7r\") pod \"machine-config-controller-84d6567774-7w8k7\" (UID: \"87c78a23-ef77-4a1e-ba67-45f3a4adccfb\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060714 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-service-ca\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060734 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/361629f6-7c8a-4523-99e4-c7b152588855-etcd-client\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060750 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcvsq\" (UniqueName: \"kubernetes.io/projected/361629f6-7c8a-4523-99e4-c7b152588855-kube-api-access-jcvsq\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060775 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060796 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/17f8fff7-991c-4a69-afb4-e40b048cde5c-etcd-serving-ca\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060818 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2884d383-8ee9-456c-8d6c-d41eaebd60e6-etcd-client\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060834 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/361629f6-7c8a-4523-99e4-c7b152588855-serving-cert\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060854 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b89f8e03-2720-4c2e-a7fc-e43dda75cc4c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rffpk\" (UID: \"b89f8e03-2720-4c2e-a7fc-e43dda75cc4c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rffpk" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060874 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f442d86c-f2fa-4597-a3c7-d462dd0aa9f6-webhook-cert\") pod \"packageserver-d55dfcdfc-hkvpr\" (UID: \"f442d86c-f2fa-4597-a3c7-d462dd0aa9f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060893 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2884d383-8ee9-456c-8d6c-d41eaebd60e6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060912 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mgsbg\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060928 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-oauth-config\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060964 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/209ad545-4905-4130-baf8-1c3e9576e788-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6zffl\" (UID: \"209ad545-4905-4130-baf8-1c3e9576e788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.060984 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d471333-dfe7-45ec-bb58-f5020bca76cd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-chl4q\" (UID: 
\"3d471333-dfe7-45ec-bb58-f5020bca76cd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061007 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8ef1d23f-3a55-4fef-ba88-96b5ba04313f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wwn2x\" (UID: \"8ef1d23f-3a55-4fef-ba88-96b5ba04313f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061025 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvxjx\" (UniqueName: \"kubernetes.io/projected/14156cf1-57df-48bd-8723-1a7a083bab21-kube-api-access-lvxjx\") pod \"console-operator-58897d9998-9fx4h\" (UID: \"14156cf1-57df-48bd-8723-1a7a083bab21\") " pod="openshift-console-operator/console-operator-58897d9998-9fx4h" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061045 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ef1d23f-3a55-4fef-ba88-96b5ba04313f-serving-cert\") pod \"openshift-config-operator-7777fb866f-wwn2x\" (UID: \"8ef1d23f-3a55-4fef-ba88-96b5ba04313f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061064 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2884d383-8ee9-456c-8d6c-d41eaebd60e6-serving-cert\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061084 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/361629f6-7c8a-4523-99e4-c7b152588855-etcd-ca\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061101 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f8fff7-991c-4a69-afb4-e40b048cde5c-serving-cert\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061122 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/36d151c0-fa73-4e5f-a6de-580629bef8f1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dlph2\" (UID: \"36d151c0-fa73-4e5f-a6de-580629bef8f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061143 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061162 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c70e2d1-cf19-4a67-880f-e52e61ca6953-trusted-ca\") pod \"ingress-operator-5b745b69d9-8dvp5\" (UID: \"2c70e2d1-cf19-4a67-880f-e52e61ca6953\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 
18:32:01.061183 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/36af60b0-5e67-4902-8891-adb18585316a-srv-cert\") pod \"olm-operator-6b444d44fb-ptfgm\" (UID: \"36af60b0-5e67-4902-8891-adb18585316a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061199 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-audit-dir\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061236 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwdxp\" (UniqueName: \"kubernetes.io/projected/7f3b7488-7c5c-4689-9c41-eacd57fdce7d-kube-api-access-cwdxp\") pod \"openshift-apiserver-operator-796bbdcf4f-4mmbp\" (UID: \"7f3b7488-7c5c-4689-9c41-eacd57fdce7d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061256 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/17f8fff7-991c-4a69-afb4-e40b048cde5c-node-pullsecrets\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061277 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znnmz\" (UniqueName: \"kubernetes.io/projected/c705e274-b996-46b7-8e3f-97dff9107a6f-kube-api-access-znnmz\") pod \"downloads-7954f5f757-5gzd8\" (UID: \"c705e274-b996-46b7-8e3f-97dff9107a6f\") " 
pod="openshift-console/downloads-7954f5f757-5gzd8" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061293 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-config\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061314 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/692367ab-53d4-4c1c-aa46-c70e247b848a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-74n48\" (UID: \"692367ab-53d4-4c1c-aa46-c70e247b848a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-74n48" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061335 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcl89\" (UniqueName: \"kubernetes.io/projected/0ddfe429-ea67-4b0c-bab1-bc72117fddda-kube-api-access-hcl89\") pod \"route-controller-manager-6576b87f9c-jfz6c\" (UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061354 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/17f8fff7-991c-4a69-afb4-e40b048cde5c-etcd-client\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061374 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhltx\" (UniqueName: 
\"kubernetes.io/projected/f144bc83-c0eb-4627-ab68-9ed862e2f402-kube-api-access-zhltx\") pod \"machine-approver-56656f9798-j6hz9\" (UID: \"f144bc83-c0eb-4627-ab68-9ed862e2f402\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061392 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/87c78a23-ef77-4a1e-ba67-45f3a4adccfb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7w8k7\" (UID: \"87c78a23-ef77-4a1e-ba67-45f3a4adccfb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061414 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eeb4175-f38f-4b50-a302-194880f33a30-config\") pod \"kube-controller-manager-operator-78b949d7b-s44zp\" (UID: \"5eeb4175-f38f-4b50-a302-194880f33a30\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061434 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e3db18d-bbdd-4924-bb4c-d562ff4347b7-serving-cert\") pod \"authentication-operator-69f744f599-fdzh8\" (UID: \"6e3db18d-bbdd-4924-bb4c-d562ff4347b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061453 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2884d383-8ee9-456c-8d6c-d41eaebd60e6-audit-policies\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061469 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpkzh\" (UniqueName: \"kubernetes.io/projected/9c2745dc-3e5e-4571-ba36-aa14c87c6336-kube-api-access-kpkzh\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061487 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2884d383-8ee9-456c-8d6c-d41eaebd60e6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061506 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061527 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e3db18d-bbdd-4924-bb4c-d562ff4347b7-service-ca-bundle\") pod \"authentication-operator-69f744f599-fdzh8\" (UID: \"6e3db18d-bbdd-4924-bb4c-d562ff4347b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061549 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5eeb4175-f38f-4b50-a302-194880f33a30-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s44zp\" (UID: \"5eeb4175-f38f-4b50-a302-194880f33a30\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061566 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2884d383-8ee9-456c-8d6c-d41eaebd60e6-audit-dir\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061587 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6f78f99d-d988-4a96-90be-2984414b9bfe-metrics-tls\") pod \"dns-operator-744455d44c-qqrj6\" (UID: \"6f78f99d-d988-4a96-90be-2984414b9bfe\") " pod="openshift-dns-operator/dns-operator-744455d44c-qqrj6" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061606 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14156cf1-57df-48bd-8723-1a7a083bab21-serving-cert\") pod \"console-operator-58897d9998-9fx4h\" (UID: \"14156cf1-57df-48bd-8723-1a7a083bab21\") " pod="openshift-console-operator/console-operator-58897d9998-9fx4h" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061635 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1-srv-cert\") pod \"catalog-operator-68c6474976-6s798\" (UID: \"f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061656 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s26w\" (UniqueName: \"kubernetes.io/projected/2c70e2d1-cf19-4a67-880f-e52e61ca6953-kube-api-access-8s26w\") pod \"ingress-operator-5b745b69d9-8dvp5\" (UID: \"2c70e2d1-cf19-4a67-880f-e52e61ca6953\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061676 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f3b7488-7c5c-4689-9c41-eacd57fdce7d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4mmbp\" (UID: \"7f3b7488-7c5c-4689-9c41-eacd57fdce7d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061693 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ddfe429-ea67-4b0c-bab1-bc72117fddda-serving-cert\") pod \"route-controller-manager-6576b87f9c-jfz6c\" (UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061713 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f144bc83-c0eb-4627-ab68-9ed862e2f402-machine-approver-tls\") pod \"machine-approver-56656f9798-j6hz9\" (UID: \"f144bc83-c0eb-4627-ab68-9ed862e2f402\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061732 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x66jx\" (UniqueName: \"kubernetes.io/projected/f442d86c-f2fa-4597-a3c7-d462dd0aa9f6-kube-api-access-x66jx\") pod \"packageserver-d55dfcdfc-hkvpr\" (UID: 
\"f442d86c-f2fa-4597-a3c7-d462dd0aa9f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061763 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/17f8fff7-991c-4a69-afb4-e40b048cde5c-image-import-ca\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061798 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ddfe429-ea67-4b0c-bab1-bc72117fddda-client-ca\") pod \"route-controller-manager-6576b87f9c-jfz6c\" (UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061814 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14156cf1-57df-48bd-8723-1a7a083bab21-config\") pod \"console-operator-58897d9998-9fx4h\" (UID: \"14156cf1-57df-48bd-8723-1a7a083bab21\") " pod="openshift-console-operator/console-operator-58897d9998-9fx4h" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061834 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e3db18d-bbdd-4924-bb4c-d562ff4347b7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fdzh8\" (UID: \"6e3db18d-bbdd-4924-bb4c-d562ff4347b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061854 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cf8702bc-7079-425a-94d6-ae8adc8414f2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-l78xq\" (UID: \"cf8702bc-7079-425a-94d6-ae8adc8414f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061874 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f8fff7-991c-4a69-afb4-e40b048cde5c-config\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061894 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f442d86c-f2fa-4597-a3c7-d462dd0aa9f6-tmpfs\") pod \"packageserver-d55dfcdfc-hkvpr\" (UID: \"f442d86c-f2fa-4597-a3c7-d462dd0aa9f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061921 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9swks\" (UniqueName: \"kubernetes.io/projected/17f8fff7-991c-4a69-afb4-e40b048cde5c-kube-api-access-9swks\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061940 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/36d151c0-fa73-4e5f-a6de-580629bef8f1-images\") pod \"machine-api-operator-5694c8668f-dlph2\" (UID: \"36d151c0-fa73-4e5f-a6de-580629bef8f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.061980 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.062001 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14156cf1-57df-48bd-8723-1a7a083bab21-trusted-ca\") pod \"console-operator-58897d9998-9fx4h\" (UID: \"14156cf1-57df-48bd-8723-1a7a083bab21\") " pod="openshift-console-operator/console-operator-58897d9998-9fx4h" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.062018 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fspl9\" (UniqueName: \"kubernetes.io/projected/6e3db18d-bbdd-4924-bb4c-d562ff4347b7-kube-api-access-fspl9\") pod \"authentication-operator-69f744f599-fdzh8\" (UID: \"6e3db18d-bbdd-4924-bb4c-d562ff4347b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.062041 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eeb4175-f38f-4b50-a302-194880f33a30-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s44zp\" (UID: \"5eeb4175-f38f-4b50-a302-194880f33a30\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.062062 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-trusted-ca-bundle\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " 
pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.062082 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.062100 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2884d383-8ee9-456c-8d6c-d41eaebd60e6-encryption-config\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.062125 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.062144 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-config\") pod \"controller-manager-879f6c89f-mgsbg\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.062164 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57nds\" (UniqueName: 
\"kubernetes.io/projected/624d3261-b732-4c7e-b1d9-56827d44c94f-kube-api-access-57nds\") pod \"controller-manager-879f6c89f-mgsbg\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.062185 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-client-ca\") pod \"controller-manager-879f6c89f-mgsbg\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.062202 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.062221 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-oauth-serving-cert\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.062241 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36d151c0-fa73-4e5f-a6de-580629bef8f1-config\") pod \"machine-api-operator-5694c8668f-dlph2\" (UID: \"36d151c0-fa73-4e5f-a6de-580629bef8f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.062262 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4725c\" (UniqueName: \"kubernetes.io/projected/209ad545-4905-4130-baf8-1c3e9576e788-kube-api-access-4725c\") pod \"cluster-image-registry-operator-dc59b4c8b-6zffl\" (UID: \"209ad545-4905-4130-baf8-1c3e9576e788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.062280 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hszx4\" (UniqueName: \"kubernetes.io/projected/f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1-kube-api-access-hszx4\") pod \"catalog-operator-68c6474976-6s798\" (UID: \"f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.063236 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f3b7488-7c5c-4689-9c41-eacd57fdce7d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4mmbp\" (UID: \"7f3b7488-7c5c-4689-9c41-eacd57fdce7d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.063305 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.063339 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-74n48"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.063382 4813 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-dns/dns-default-kmwrq"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.063391 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/17f8fff7-991c-4a69-afb4-e40b048cde5c-node-pullsecrets\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.063625 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/17f8fff7-991c-4a69-afb4-e40b048cde5c-encryption-config\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.064093 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/17f8fff7-991c-4a69-afb4-e40b048cde5c-audit\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.064613 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/17f8fff7-991c-4a69-afb4-e40b048cde5c-etcd-serving-ca\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.065065 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2884d383-8ee9-456c-8d6c-d41eaebd60e6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" 
Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.065304 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ddfe429-ea67-4b0c-bab1-bc72117fddda-config\") pod \"route-controller-manager-6576b87f9c-jfz6c\" (UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.066017 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-config\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.066150 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17f8fff7-991c-4a69-afb4-e40b048cde5c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.066403 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.066529 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17f8fff7-991c-4a69-afb4-e40b048cde5c-audit-dir\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" 
Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.066638 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-audit-policies\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.067464 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/17f8fff7-991c-4a69-afb4-e40b048cde5c-image-import-ca\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.068045 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2884d383-8ee9-456c-8d6c-d41eaebd60e6-etcd-client\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.068080 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ddfe429-ea67-4b0c-bab1-bc72117fddda-client-ca\") pod \"route-controller-manager-6576b87f9c-jfz6c\" (UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.068531 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2884d383-8ee9-456c-8d6c-d41eaebd60e6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 
18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.068577 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/624d3261-b732-4c7e-b1d9-56827d44c94f-serving-cert\") pod \"controller-manager-879f6c89f-mgsbg\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.068702 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14156cf1-57df-48bd-8723-1a7a083bab21-config\") pod \"console-operator-58897d9998-9fx4h\" (UID: \"14156cf1-57df-48bd-8723-1a7a083bab21\") " pod="openshift-console-operator/console-operator-58897d9998-9fx4h" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.068974 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.069374 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e3db18d-bbdd-4924-bb4c-d562ff4347b7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fdzh8\" (UID: \"6e3db18d-bbdd-4924-bb4c-d562ff4347b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.069741 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f144bc83-c0eb-4627-ab68-9ed862e2f402-config\") pod \"machine-approver-56656f9798-j6hz9\" (UID: 
\"f144bc83-c0eb-4627-ab68-9ed862e2f402\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.069877 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mgsbg\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.069877 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/17f8fff7-991c-4a69-afb4-e40b048cde5c-etcd-client\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.069885 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f8fff7-991c-4a69-afb4-e40b048cde5c-config\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.070785 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/36d151c0-fa73-4e5f-a6de-580629bef8f1-images\") pod \"machine-api-operator-5694c8668f-dlph2\" (UID: \"36d151c0-fa73-4e5f-a6de-580629bef8f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.070811 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f144bc83-c0eb-4627-ab68-9ed862e2f402-auth-proxy-config\") pod \"machine-approver-56656f9798-j6hz9\" (UID: 
\"f144bc83-c0eb-4627-ab68-9ed862e2f402\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.070904 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-service-ca\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.071223 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-oauth-config\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.071520 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8ef1d23f-3a55-4fef-ba88-96b5ba04313f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wwn2x\" (UID: \"8ef1d23f-3a55-4fef-ba88-96b5ba04313f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.071807 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-client-ca\") pod \"controller-manager-879f6c89f-mgsbg\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.072701 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2884d383-8ee9-456c-8d6c-d41eaebd60e6-audit-dir\") pod \"apiserver-7bbb656c7d-cvwkx\" 
(UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.073108 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-oauth-serving-cert\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.074372 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36d151c0-fa73-4e5f-a6de-580629bef8f1-config\") pod \"machine-api-operator-5694c8668f-dlph2\" (UID: \"36d151c0-fa73-4e5f-a6de-580629bef8f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.074415 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.074700 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-serving-cert\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.074904 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e3db18d-bbdd-4924-bb4c-d562ff4347b7-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-fdzh8\" (UID: \"6e3db18d-bbdd-4924-bb4c-d562ff4347b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.075098 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-config\") pod \"controller-manager-879f6c89f-mgsbg\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.075405 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.075901 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.075070 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.076476 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.076537 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14156cf1-57df-48bd-8723-1a7a083bab21-trusted-ca\") pod \"console-operator-58897d9998-9fx4h\" (UID: \"14156cf1-57df-48bd-8723-1a7a083bab21\") " pod="openshift-console-operator/console-operator-58897d9998-9fx4h" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.076760 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2884d383-8ee9-456c-8d6c-d41eaebd60e6-encryption-config\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.077095 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14156cf1-57df-48bd-8723-1a7a083bab21-serving-cert\") pod \"console-operator-58897d9998-9fx4h\" (UID: \"14156cf1-57df-48bd-8723-1a7a083bab21\") " pod="openshift-console-operator/console-operator-58897d9998-9fx4h" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.077162 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-trusted-ca-bundle\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.077526 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-audit-dir\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.078084 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2884d383-8ee9-456c-8d6c-d41eaebd60e6-audit-policies\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.078516 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2884d383-8ee9-456c-8d6c-d41eaebd60e6-serving-cert\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.078812 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f3b7488-7c5c-4689-9c41-eacd57fdce7d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4mmbp\" (UID: \"7f3b7488-7c5c-4689-9c41-eacd57fdce7d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.078842 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6f78f99d-d988-4a96-90be-2984414b9bfe-metrics-tls\") pod \"dns-operator-744455d44c-qqrj6\" (UID: \"6f78f99d-d988-4a96-90be-2984414b9bfe\") " pod="openshift-dns-operator/dns-operator-744455d44c-qqrj6" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.079096 4813 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.079126 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e3db18d-bbdd-4924-bb4c-d562ff4347b7-serving-cert\") pod \"authentication-operator-69f744f599-fdzh8\" (UID: \"6e3db18d-bbdd-4924-bb4c-d562ff4347b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.079381 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.079431 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.079448 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.079940 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.080459 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ef1d23f-3a55-4fef-ba88-96b5ba04313f-serving-cert\") pod \"openshift-config-operator-7777fb866f-wwn2x\" (UID: \"8ef1d23f-3a55-4fef-ba88-96b5ba04313f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.080642 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ddfe429-ea67-4b0c-bab1-bc72117fddda-serving-cert\") pod \"route-controller-manager-6576b87f9c-jfz6c\" (UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.080786 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.081392 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f8fff7-991c-4a69-afb4-e40b048cde5c-serving-cert\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.081550 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f144bc83-c0eb-4627-ab68-9ed862e2f402-machine-approver-tls\") pod \"machine-approver-56656f9798-j6hz9\" (UID: \"f144bc83-c0eb-4627-ab68-9ed862e2f402\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.083385 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.083601 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/36d151c0-fa73-4e5f-a6de-580629bef8f1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dlph2\" (UID: \"36d151c0-fa73-4e5f-a6de-580629bef8f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.086216 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-b84zp"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.086884 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.087946 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qqsjr"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.089187 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-c5246"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.090485 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7qgtc"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.091563 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ppc5s"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.092870 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ppc5s"] Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.092876 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.106518 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.126465 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.146678 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.163574 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/209ad545-4905-4130-baf8-1c3e9576e788-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6zffl\" (UID: \"209ad545-4905-4130-baf8-1c3e9576e788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.163610 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d471333-dfe7-45ec-bb58-f5020bca76cd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-chl4q\" (UID: \"3d471333-dfe7-45ec-bb58-f5020bca76cd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.163640 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/361629f6-7c8a-4523-99e4-c7b152588855-etcd-ca\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc 
kubenswrapper[4813]: I0219 18:32:01.163796 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c70e2d1-cf19-4a67-880f-e52e61ca6953-trusted-ca\") pod \"ingress-operator-5b745b69d9-8dvp5\" (UID: \"2c70e2d1-cf19-4a67-880f-e52e61ca6953\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.163821 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/36af60b0-5e67-4902-8891-adb18585316a-srv-cert\") pod \"olm-operator-6b444d44fb-ptfgm\" (UID: \"36af60b0-5e67-4902-8891-adb18585316a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.163895 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/692367ab-53d4-4c1c-aa46-c70e247b848a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-74n48\" (UID: \"692367ab-53d4-4c1c-aa46-c70e247b848a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-74n48" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.164346 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/361629f6-7c8a-4523-99e4-c7b152588855-etcd-ca\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.165226 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eeb4175-f38f-4b50-a302-194880f33a30-config\") pod \"kube-controller-manager-operator-78b949d7b-s44zp\" (UID: 
\"5eeb4175-f38f-4b50-a302-194880f33a30\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.165335 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/87c78a23-ef77-4a1e-ba67-45f3a4adccfb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7w8k7\" (UID: \"87c78a23-ef77-4a1e-ba67-45f3a4adccfb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.165582 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5eeb4175-f38f-4b50-a302-194880f33a30-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s44zp\" (UID: \"5eeb4175-f38f-4b50-a302-194880f33a30\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.165618 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1-srv-cert\") pod \"catalog-operator-68c6474976-6s798\" (UID: \"f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.165638 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s26w\" (UniqueName: \"kubernetes.io/projected/2c70e2d1-cf19-4a67-880f-e52e61ca6953-kube-api-access-8s26w\") pod \"ingress-operator-5b745b69d9-8dvp5\" (UID: \"2c70e2d1-cf19-4a67-880f-e52e61ca6953\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.165663 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-x66jx\" (UniqueName: \"kubernetes.io/projected/f442d86c-f2fa-4597-a3c7-d462dd0aa9f6-kube-api-access-x66jx\") pod \"packageserver-d55dfcdfc-hkvpr\" (UID: \"f442d86c-f2fa-4597-a3c7-d462dd0aa9f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.165682 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8702bc-7079-425a-94d6-ae8adc8414f2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-l78xq\" (UID: \"cf8702bc-7079-425a-94d6-ae8adc8414f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.165701 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f442d86c-f2fa-4597-a3c7-d462dd0aa9f6-tmpfs\") pod \"packageserver-d55dfcdfc-hkvpr\" (UID: \"f442d86c-f2fa-4597-a3c7-d462dd0aa9f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.165730 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eeb4175-f38f-4b50-a302-194880f33a30-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s44zp\" (UID: \"5eeb4175-f38f-4b50-a302-194880f33a30\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.165768 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hszx4\" (UniqueName: \"kubernetes.io/projected/f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1-kube-api-access-hszx4\") pod \"catalog-operator-68c6474976-6s798\" (UID: \"f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.165784 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4725c\" (UniqueName: \"kubernetes.io/projected/209ad545-4905-4130-baf8-1c3e9576e788-kube-api-access-4725c\") pod \"cluster-image-registry-operator-dc59b4c8b-6zffl\" (UID: \"209ad545-4905-4130-baf8-1c3e9576e788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.165804 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8702bc-7079-425a-94d6-ae8adc8414f2-config\") pod \"kube-apiserver-operator-766d6c64bb-l78xq\" (UID: \"cf8702bc-7079-425a-94d6-ae8adc8414f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.165823 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf8702bc-7079-425a-94d6-ae8adc8414f2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-l78xq\" (UID: \"cf8702bc-7079-425a-94d6-ae8adc8414f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.166253 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8ad966-7829-4e5b-9665-5d07b39e882f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vkxmw\" (UID: \"5a8ad966-7829-4e5b-9665-5d07b39e882f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.166277 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/36af60b0-5e67-4902-8891-adb18585316a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ptfgm\" (UID: \"36af60b0-5e67-4902-8891-adb18585316a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.166317 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f442d86c-f2fa-4597-a3c7-d462dd0aa9f6-tmpfs\") pod \"packageserver-d55dfcdfc-hkvpr\" (UID: \"f442d86c-f2fa-4597-a3c7-d462dd0aa9f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.166396 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/87c78a23-ef77-4a1e-ba67-45f3a4adccfb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7w8k7\" (UID: \"87c78a23-ef77-4a1e-ba67-45f3a4adccfb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.166471 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.166933 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8ad966-7829-4e5b-9665-5d07b39e882f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vkxmw\" (UID: \"5a8ad966-7829-4e5b-9665-5d07b39e882f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167083 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1-profile-collector-cert\") pod \"catalog-operator-68c6474976-6s798\" (UID: \"f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167138 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sglfk\" (UniqueName: \"kubernetes.io/projected/36af60b0-5e67-4902-8891-adb18585316a-kube-api-access-sglfk\") pod \"olm-operator-6b444d44fb-ptfgm\" (UID: \"36af60b0-5e67-4902-8891-adb18585316a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167172 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f442d86c-f2fa-4597-a3c7-d462dd0aa9f6-apiservice-cert\") pod \"packageserver-d55dfcdfc-hkvpr\" (UID: \"f442d86c-f2fa-4597-a3c7-d462dd0aa9f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167191 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/209ad545-4905-4130-baf8-1c3e9576e788-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6zffl\" (UID: \"209ad545-4905-4130-baf8-1c3e9576e788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167237 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/209ad545-4905-4130-baf8-1c3e9576e788-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6zffl\" (UID: \"209ad545-4905-4130-baf8-1c3e9576e788\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167255 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/87c78a23-ef77-4a1e-ba67-45f3a4adccfb-proxy-tls\") pod \"machine-config-controller-84d6567774-7w8k7\" (UID: \"87c78a23-ef77-4a1e-ba67-45f3a4adccfb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167271 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2c70e2d1-cf19-4a67-880f-e52e61ca6953-metrics-tls\") pod \"ingress-operator-5b745b69d9-8dvp5\" (UID: \"2c70e2d1-cf19-4a67-880f-e52e61ca6953\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167289 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqpsc\" (UniqueName: \"kubernetes.io/projected/b89f8e03-2720-4c2e-a7fc-e43dda75cc4c-kube-api-access-cqpsc\") pod \"cluster-samples-operator-665b6dd947-rffpk\" (UID: \"b89f8e03-2720-4c2e-a7fc-e43dda75cc4c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rffpk" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167307 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a8ad966-7829-4e5b-9665-5d07b39e882f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vkxmw\" (UID: \"5a8ad966-7829-4e5b-9665-5d07b39e882f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167332 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/361629f6-7c8a-4523-99e4-c7b152588855-config\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167347 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c70e2d1-cf19-4a67-880f-e52e61ca6953-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8dvp5\" (UID: \"2c70e2d1-cf19-4a67-880f-e52e61ca6953\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167364 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvvnd\" (UniqueName: \"kubernetes.io/projected/692367ab-53d4-4c1c-aa46-c70e247b848a-kube-api-access-rvvnd\") pod \"control-plane-machine-set-operator-78cbb6b69f-74n48\" (UID: \"692367ab-53d4-4c1c-aa46-c70e247b848a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-74n48" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167410 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqh9x\" (UniqueName: \"kubernetes.io/projected/3d471333-dfe7-45ec-bb58-f5020bca76cd-kube-api-access-rqh9x\") pod \"package-server-manager-789f6589d5-chl4q\" (UID: \"3d471333-dfe7-45ec-bb58-f5020bca76cd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167433 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/361629f6-7c8a-4523-99e4-c7b152588855-etcd-service-ca\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: 
I0219 18:32:01.167448 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtp4q\" (UniqueName: \"kubernetes.io/projected/5a8ad966-7829-4e5b-9665-5d07b39e882f-kube-api-access-mtp4q\") pod \"openshift-controller-manager-operator-756b6f6bc6-vkxmw\" (UID: \"5a8ad966-7829-4e5b-9665-5d07b39e882f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167465 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgd7r\" (UniqueName: \"kubernetes.io/projected/87c78a23-ef77-4a1e-ba67-45f3a4adccfb-kube-api-access-cgd7r\") pod \"machine-config-controller-84d6567774-7w8k7\" (UID: \"87c78a23-ef77-4a1e-ba67-45f3a4adccfb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167482 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/361629f6-7c8a-4523-99e4-c7b152588855-etcd-client\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167498 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcvsq\" (UniqueName: \"kubernetes.io/projected/361629f6-7c8a-4523-99e4-c7b152588855-kube-api-access-jcvsq\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167524 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/361629f6-7c8a-4523-99e4-c7b152588855-serving-cert\") pod \"etcd-operator-b45778765-b84zp\" 
(UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167553 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b89f8e03-2720-4c2e-a7fc-e43dda75cc4c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rffpk\" (UID: \"b89f8e03-2720-4c2e-a7fc-e43dda75cc4c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rffpk" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.167570 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f442d86c-f2fa-4597-a3c7-d462dd0aa9f6-webhook-cert\") pod \"packageserver-d55dfcdfc-hkvpr\" (UID: \"f442d86c-f2fa-4597-a3c7-d462dd0aa9f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.169194 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/209ad545-4905-4130-baf8-1c3e9576e788-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6zffl\" (UID: \"209ad545-4905-4130-baf8-1c3e9576e788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.171384 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/361629f6-7c8a-4523-99e4-c7b152588855-etcd-client\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.171463 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b89f8e03-2720-4c2e-a7fc-e43dda75cc4c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rffpk\" (UID: \"b89f8e03-2720-4c2e-a7fc-e43dda75cc4c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rffpk" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.171515 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/361629f6-7c8a-4523-99e4-c7b152588855-serving-cert\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.171577 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a8ad966-7829-4e5b-9665-5d07b39e882f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vkxmw\" (UID: \"5a8ad966-7829-4e5b-9665-5d07b39e882f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.171896 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/209ad545-4905-4130-baf8-1c3e9576e788-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6zffl\" (UID: \"209ad545-4905-4130-baf8-1c3e9576e788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.187383 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.189583 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/361629f6-7c8a-4523-99e4-c7b152588855-etcd-service-ca\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.206716 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.227507 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.259996 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.260737 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/361629f6-7c8a-4523-99e4-c7b152588855-config\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.271293 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.287734 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.292623 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/87c78a23-ef77-4a1e-ba67-45f3a4adccfb-proxy-tls\") pod \"machine-config-controller-84d6567774-7w8k7\" (UID: \"87c78a23-ef77-4a1e-ba67-45f3a4adccfb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.307647 4813 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.326723 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.346986 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.367152 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.386195 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.406218 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.427374 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.447877 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.459485 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf8702bc-7079-425a-94d6-ae8adc8414f2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-l78xq\" (UID: \"cf8702bc-7079-425a-94d6-ae8adc8414f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.467241 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.487186 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.496937 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf8702bc-7079-425a-94d6-ae8adc8414f2-config\") pod \"kube-apiserver-operator-766d6c64bb-l78xq\" (UID: \"cf8702bc-7079-425a-94d6-ae8adc8414f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.506862 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.519855 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5eeb4175-f38f-4b50-a302-194880f33a30-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s44zp\" (UID: \"5eeb4175-f38f-4b50-a302-194880f33a30\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.526568 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.547190 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.566345 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.587171 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.594815 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eeb4175-f38f-4b50-a302-194880f33a30-config\") pod \"kube-controller-manager-operator-78b949d7b-s44zp\" (UID: \"5eeb4175-f38f-4b50-a302-194880f33a30\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.607479 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.614469 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2c70e2d1-cf19-4a67-880f-e52e61ca6953-metrics-tls\") pod \"ingress-operator-5b745b69d9-8dvp5\" (UID: \"2c70e2d1-cf19-4a67-880f-e52e61ca6953\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.626858 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.659187 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.665015 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c70e2d1-cf19-4a67-880f-e52e61ca6953-trusted-ca\") pod \"ingress-operator-5b745b69d9-8dvp5\" (UID: 
\"2c70e2d1-cf19-4a67-880f-e52e61ca6953\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.667277 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.687229 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.707311 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.727181 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.746521 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.763880 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1-srv-cert\") pod \"catalog-operator-68c6474976-6s798\" (UID: \"f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.767883 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.781110 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1-profile-collector-cert\") pod \"catalog-operator-68c6474976-6s798\" (UID: 
\"f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.782586 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/36af60b0-5e67-4902-8891-adb18585316a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ptfgm\" (UID: \"36af60b0-5e67-4902-8891-adb18585316a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.787224 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.806401 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.827473 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.847573 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.867212 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.886658 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.907111 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 18:32:01 crc 
kubenswrapper[4813]: I0219 18:32:01.918628 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/36af60b0-5e67-4902-8891-adb18585316a-srv-cert\") pod \"olm-operator-6b444d44fb-ptfgm\" (UID: \"36af60b0-5e67-4902-8891-adb18585316a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.927875 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.938561 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/692367ab-53d4-4c1c-aa46-c70e247b848a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-74n48\" (UID: \"692367ab-53d4-4c1c-aa46-c70e247b848a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-74n48" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.947719 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.964970 4813 request.go:700] Waited for 1.012837996s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-796bbdcf4f-4mmbp Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.987477 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 18:32:01 crc kubenswrapper[4813]: I0219 18:32:01.998245 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3d471333-dfe7-45ec-bb58-f5020bca76cd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-chl4q\" (UID: \"3d471333-dfe7-45ec-bb58-f5020bca76cd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.007563 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.011561 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f442d86c-f2fa-4597-a3c7-d462dd0aa9f6-webhook-cert\") pod \"packageserver-d55dfcdfc-hkvpr\" (UID: \"f442d86c-f2fa-4597-a3c7-d462dd0aa9f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.012638 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f442d86c-f2fa-4597-a3c7-d462dd0aa9f6-apiservice-cert\") pod \"packageserver-d55dfcdfc-hkvpr\" (UID: \"f442d86c-f2fa-4597-a3c7-d462dd0aa9f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.027540 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.047501 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.067740 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.088574 4813 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.127376 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.146771 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.167496 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.186567 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.207547 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.228084 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.247678 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.266360 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.325022 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.326084 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 
18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.326828 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.346798 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.366523 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.386402 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.408653 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.426924 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.448398 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.467022 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.487003 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.506859 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.526461 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.547122 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.567147 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.587092 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.607394 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.626943 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.655313 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.667294 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.687550 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.708122 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.726986 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 18:32:02 crc 
kubenswrapper[4813]: I0219 18:32:02.747959 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.767737 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.787824 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.807532 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.857269 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnqj9\" (UniqueName: \"kubernetes.io/projected/8ef1d23f-3a55-4fef-ba88-96b5ba04313f-kube-api-access-tnqj9\") pod \"openshift-config-operator-7777fb866f-wwn2x\" (UID: \"8ef1d23f-3a55-4fef-ba88-96b5ba04313f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.877644 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwbdz\" (UniqueName: \"kubernetes.io/projected/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-kube-api-access-kwbdz\") pod \"oauth-openshift-558db77b4-x6n5r\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.894663 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvk6m\" (UniqueName: \"kubernetes.io/projected/6f78f99d-d988-4a96-90be-2984414b9bfe-kube-api-access-cvk6m\") pod \"dns-operator-744455d44c-qqrj6\" (UID: \"6f78f99d-d988-4a96-90be-2984414b9bfe\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-qqrj6" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.912442 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzbcs\" (UniqueName: \"kubernetes.io/projected/2884d383-8ee9-456c-8d6c-d41eaebd60e6-kube-api-access-dzbcs\") pod \"apiserver-7bbb656c7d-cvwkx\" (UID: \"2884d383-8ee9-456c-8d6c-d41eaebd60e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.933529 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzjc5\" (UniqueName: \"kubernetes.io/projected/36d151c0-fa73-4e5f-a6de-580629bef8f1-kube-api-access-rzjc5\") pod \"machine-api-operator-5694c8668f-dlph2\" (UID: \"36d151c0-fa73-4e5f-a6de-580629bef8f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.952733 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znnmz\" (UniqueName: \"kubernetes.io/projected/c705e274-b996-46b7-8e3f-97dff9107a6f-kube-api-access-znnmz\") pod \"downloads-7954f5f757-5gzd8\" (UID: \"c705e274-b996-46b7-8e3f-97dff9107a6f\") " pod="openshift-console/downloads-7954f5f757-5gzd8" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.968934 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcl89\" (UniqueName: \"kubernetes.io/projected/0ddfe429-ea67-4b0c-bab1-bc72117fddda-kube-api-access-hcl89\") pod \"route-controller-manager-6576b87f9c-jfz6c\" (UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.984649 4813 request.go:700] Waited for 1.914595749s due to client-side throttling, not priority and fairness, request: 
POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token Feb 19 18:32:02 crc kubenswrapper[4813]: I0219 18:32:02.987056 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fspl9\" (UniqueName: \"kubernetes.io/projected/6e3db18d-bbdd-4924-bb4c-d562ff4347b7-kube-api-access-fspl9\") pod \"authentication-operator-69f744f599-fdzh8\" (UID: \"6e3db18d-bbdd-4924-bb4c-d562ff4347b7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.006767 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9swks\" (UniqueName: \"kubernetes.io/projected/17f8fff7-991c-4a69-afb4-e40b048cde5c-kube-api-access-9swks\") pod \"apiserver-76f77b778f-rksx7\" (UID: \"17f8fff7-991c-4a69-afb4-e40b048cde5c\") " pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.023027 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhltx\" (UniqueName: \"kubernetes.io/projected/f144bc83-c0eb-4627-ab68-9ed862e2f402-kube-api-access-zhltx\") pod \"machine-approver-56656f9798-j6hz9\" (UID: \"f144bc83-c0eb-4627-ab68-9ed862e2f402\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.042241 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.046619 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpkzh\" (UniqueName: \"kubernetes.io/projected/9c2745dc-3e5e-4571-ba36-aa14c87c6336-kube-api-access-kpkzh\") pod \"console-f9d7485db-dqj4z\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.057270 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.066733 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvxjx\" (UniqueName: \"kubernetes.io/projected/14156cf1-57df-48bd-8723-1a7a083bab21-kube-api-access-lvxjx\") pod \"console-operator-58897d9998-9fx4h\" (UID: \"14156cf1-57df-48bd-8723-1a7a083bab21\") " pod="openshift-console-operator/console-operator-58897d9998-9fx4h" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.088011 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57nds\" (UniqueName: \"kubernetes.io/projected/624d3261-b732-4c7e-b1d9-56827d44c94f-kube-api-access-57nds\") pod \"controller-manager-879f6c89f-mgsbg\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.091677 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.107503 4813 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.112983 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-9fx4h" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.113541 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwdxp\" (UniqueName: \"kubernetes.io/projected/7f3b7488-7c5c-4689-9c41-eacd57fdce7d-kube-api-access-cwdxp\") pod \"openshift-apiserver-operator-796bbdcf4f-4mmbp\" (UID: \"7f3b7488-7c5c-4689-9c41-eacd57fdce7d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.122223 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.126895 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.128307 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.140775 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.147039 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.149075 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.153377 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.160325 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qqrj6" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.187614 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/209ad545-4905-4130-baf8-1c3e9576e788-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6zffl\" (UID: \"209ad545-4905-4130-baf8-1c3e9576e788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" Feb 19 18:32:03 crc kubenswrapper[4813]: W0219 18:32:03.200898 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf144bc83_c0eb_4627_ab68_9ed862e2f402.slice/crio-4003abb14dc07a0b53e053c150bdac23813b516d8687984b928856659a1df19b WatchSource:0}: Error finding container 4003abb14dc07a0b53e053c150bdac23813b516d8687984b928856659a1df19b: Status 404 returned error can't find the container with id 4003abb14dc07a0b53e053c150bdac23813b516d8687984b928856659a1df19b Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.215936 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" event={"ID":"f144bc83-c0eb-4627-ab68-9ed862e2f402","Type":"ContainerStarted","Data":"4003abb14dc07a0b53e053c150bdac23813b516d8687984b928856659a1df19b"} Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.217291 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x66jx\" (UniqueName: \"kubernetes.io/projected/f442d86c-f2fa-4597-a3c7-d462dd0aa9f6-kube-api-access-x66jx\") pod \"packageserver-d55dfcdfc-hkvpr\" (UID: \"f442d86c-f2fa-4597-a3c7-d462dd0aa9f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.220843 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5gzd8" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.231576 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s26w\" (UniqueName: \"kubernetes.io/projected/2c70e2d1-cf19-4a67-880f-e52e61ca6953-kube-api-access-8s26w\") pod \"ingress-operator-5b745b69d9-8dvp5\" (UID: \"2c70e2d1-cf19-4a67-880f-e52e61ca6953\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.242794 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hszx4\" (UniqueName: \"kubernetes.io/projected/f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1-kube-api-access-hszx4\") pod \"catalog-operator-68c6474976-6s798\" (UID: \"f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.259057 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eeb4175-f38f-4b50-a302-194880f33a30-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-s44zp\" (UID: \"5eeb4175-f38f-4b50-a302-194880f33a30\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.271870 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.283580 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4725c\" (UniqueName: \"kubernetes.io/projected/209ad545-4905-4130-baf8-1c3e9576e788-kube-api-access-4725c\") pod \"cluster-image-registry-operator-dc59b4c8b-6zffl\" (UID: \"209ad545-4905-4130-baf8-1c3e9576e788\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.304520 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.307751 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.323925 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sglfk\" (UniqueName: \"kubernetes.io/projected/36af60b0-5e67-4902-8891-adb18585316a-kube-api-access-sglfk\") pod \"olm-operator-6b444d44fb-ptfgm\" (UID: \"36af60b0-5e67-4902-8891-adb18585316a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.328546 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.339355 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.346321 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvvnd\" (UniqueName: \"kubernetes.io/projected/692367ab-53d4-4c1c-aa46-c70e247b848a-kube-api-access-rvvnd\") pod \"control-plane-machine-set-operator-78cbb6b69f-74n48\" (UID: \"692367ab-53d4-4c1c-aa46-c70e247b848a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-74n48" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.360559 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgd7r\" (UniqueName: \"kubernetes.io/projected/87c78a23-ef77-4a1e-ba67-45f3a4adccfb-kube-api-access-cgd7r\") pod \"machine-config-controller-84d6567774-7w8k7\" (UID: \"87c78a23-ef77-4a1e-ba67-45f3a4adccfb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.363277 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-74n48" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.376429 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf8702bc-7079-425a-94d6-ae8adc8414f2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-l78xq\" (UID: \"cf8702bc-7079-425a-94d6-ae8adc8414f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.380678 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcvsq\" (UniqueName: \"kubernetes.io/projected/361629f6-7c8a-4523-99e4-c7b152588855-kube-api-access-jcvsq\") pod \"etcd-operator-b45778765-b84zp\" (UID: \"361629f6-7c8a-4523-99e4-c7b152588855\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.397359 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.398266 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dlph2"] Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.405219 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fdzh8"] Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.405416 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.409619 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqh9x\" (UniqueName: \"kubernetes.io/projected/3d471333-dfe7-45ec-bb58-f5020bca76cd-kube-api-access-rqh9x\") pod \"package-server-manager-789f6589d5-chl4q\" (UID: \"3d471333-dfe7-45ec-bb58-f5020bca76cd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.419372 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtp4q\" (UniqueName: \"kubernetes.io/projected/5a8ad966-7829-4e5b-9665-5d07b39e882f-kube-api-access-mtp4q\") pod \"openshift-controller-manager-operator-756b6f6bc6-vkxmw\" (UID: \"5a8ad966-7829-4e5b-9665-5d07b39e882f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw" Feb 19 18:32:03 crc kubenswrapper[4813]: W0219 18:32:03.420069 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36d151c0_fa73_4e5f_a6de_580629bef8f1.slice/crio-4ebd3598899c5a7ff6f7948bb549679da6b1416d62d97581917183e39d862300 WatchSource:0}: Error finding container 4ebd3598899c5a7ff6f7948bb549679da6b1416d62d97581917183e39d862300: Status 404 returned error can't find the container with id 4ebd3598899c5a7ff6f7948bb549679da6b1416d62d97581917183e39d862300 Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.470483 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c70e2d1-cf19-4a67-880f-e52e61ca6953-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8dvp5\" (UID: \"2c70e2d1-cf19-4a67-880f-e52e61ca6953\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" Feb 19 18:32:03 crc 
kubenswrapper[4813]: I0219 18:32:03.513733 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.523893 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqpsc\" (UniqueName: \"kubernetes.io/projected/b89f8e03-2720-4c2e-a7fc-e43dda75cc4c-kube-api-access-cqpsc\") pod \"cluster-samples-operator-665b6dd947-rffpk\" (UID: \"b89f8e03-2720-4c2e-a7fc-e43dda75cc4c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rffpk" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.530585 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rffpk" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546457 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-bound-sa-token\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546498 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5hmr\" (UniqueName: \"kubernetes.io/projected/4d98d1e8-0f5b-4f71-9809-b9577f15e21a-kube-api-access-b5hmr\") pod \"router-default-5444994796-9j26p\" (UID: \"4d98d1e8-0f5b-4f71-9809-b9577f15e21a\") " pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546528 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zclv\" (UniqueName: 
\"kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-kube-api-access-2zclv\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546552 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5bff76-f1fb-442b-81db-2ccc58aa61ef-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rcm5q\" (UID: \"ff5bff76-f1fb-442b-81db-2ccc58aa61ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546577 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5a165501-6ea2-4d63-9479-6ac760c6b116-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546595 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5bff76-f1fb-442b-81db-2ccc58aa61ef-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rcm5q\" (UID: \"ff5bff76-f1fb-442b-81db-2ccc58aa61ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546635 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a165501-6ea2-4d63-9479-6ac760c6b116-trusted-ca\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546654 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5a165501-6ea2-4d63-9479-6ac760c6b116-registry-certificates\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546702 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcwgb\" (UniqueName: \"kubernetes.io/projected/14cc2826-c23c-4055-a22a-814d43dc2bb4-kube-api-access-bcwgb\") pod \"kube-storage-version-migrator-operator-b67b599dd-2rvnw\" (UID: \"14cc2826-c23c-4055-a22a-814d43dc2bb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546737 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14cc2826-c23c-4055-a22a-814d43dc2bb4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2rvnw\" (UID: \"14cc2826-c23c-4055-a22a-814d43dc2bb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546770 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-registry-tls\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546820 
4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5a165501-6ea2-4d63-9479-6ac760c6b116-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546862 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d98d1e8-0f5b-4f71-9809-b9577f15e21a-metrics-certs\") pod \"router-default-5444994796-9j26p\" (UID: \"4d98d1e8-0f5b-4f71-9809-b9577f15e21a\") " pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546903 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d98d1e8-0f5b-4f71-9809-b9577f15e21a-service-ca-bundle\") pod \"router-default-5444994796-9j26p\" (UID: \"4d98d1e8-0f5b-4f71-9809-b9577f15e21a\") " pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546925 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff5bff76-f1fb-442b-81db-2ccc58aa61ef-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rcm5q\" (UID: \"ff5bff76-f1fb-442b-81db-2ccc58aa61ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546942 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14cc2826-c23c-4055-a22a-814d43dc2bb4-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-2rvnw\" (UID: \"14cc2826-c23c-4055-a22a-814d43dc2bb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.546981 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4d98d1e8-0f5b-4f71-9809-b9577f15e21a-default-certificate\") pod \"router-default-5444994796-9j26p\" (UID: \"4d98d1e8-0f5b-4f71-9809-b9577f15e21a\") " pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.547011 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4d98d1e8-0f5b-4f71-9809-b9577f15e21a-stats-auth\") pod \"router-default-5444994796-9j26p\" (UID: \"4d98d1e8-0f5b-4f71-9809-b9577f15e21a\") " pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.547063 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: E0219 18:32:03.548576 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:04.048565647 +0000 UTC m=+143.274006188 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.549175 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.559871 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.573067 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.583793 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x"] Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.596525 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c"] Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.601728 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.618553 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.647765 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648183 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7jtd\" (UniqueName: \"kubernetes.io/projected/28579021-c721-4746-bed3-9b765dddbe11-kube-api-access-d7jtd\") pod \"migrator-59844c95c7-c5246\" (UID: \"28579021-c721-4746-bed3-9b765dddbe11\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c5246" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648207 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/871c6d8c-2897-4c2c-938e-9137fe5320af-csi-data-dir\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648233 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e63bf777-ed22-4aae-942e-74b8613ca4ce-config-volume\") pod \"collect-profiles-29525430-fq4mh\" (UID: \"e63bf777-ed22-4aae-942e-74b8613ca4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648248 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/8eda42c4-5734-45ee-8bd1-0ae7d0d89346-signing-cabundle\") pod \"service-ca-9c57cc56f-29vhv\" (UID: \"8eda42c4-5734-45ee-8bd1-0ae7d0d89346\") " pod="openshift-service-ca/service-ca-9c57cc56f-29vhv" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648284 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-546jq\" (UniqueName: \"kubernetes.io/projected/d3ba0b09-bcff-4819-b79a-f499049cf31c-kube-api-access-546jq\") pod \"machine-config-server-hvwhh\" (UID: \"d3ba0b09-bcff-4819-b79a-f499049cf31c\") " pod="openshift-machine-config-operator/machine-config-server-hvwhh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648335 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a165501-6ea2-4d63-9479-6ac760c6b116-trusted-ca\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648353 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5a165501-6ea2-4d63-9479-6ac760c6b116-registry-certificates\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648371 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw6pf\" (UniqueName: \"kubernetes.io/projected/28a6a1eb-17cf-4978-8940-fa976706f321-kube-api-access-nw6pf\") pod \"service-ca-operator-777779d784-crqgx\" (UID: \"28a6a1eb-17cf-4978-8940-fa976706f321\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-crqgx" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 
18:32:03.648386 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e63bf777-ed22-4aae-942e-74b8613ca4ce-secret-volume\") pod \"collect-profiles-29525430-fq4mh\" (UID: \"e63bf777-ed22-4aae-942e-74b8613ca4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648421 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcwgb\" (UniqueName: \"kubernetes.io/projected/14cc2826-c23c-4055-a22a-814d43dc2bb4-kube-api-access-bcwgb\") pod \"kube-storage-version-migrator-operator-b67b599dd-2rvnw\" (UID: \"14cc2826-c23c-4055-a22a-814d43dc2bb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648445 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wkmt\" (UniqueName: \"kubernetes.io/projected/60220df2-7a87-42d6-8ff6-f4d2ddec5160-kube-api-access-6wkmt\") pod \"marketplace-operator-79b997595-h5w7j\" (UID: \"60220df2-7a87-42d6-8ff6-f4d2ddec5160\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648522 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14cc2826-c23c-4055-a22a-814d43dc2bb4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2rvnw\" (UID: \"14cc2826-c23c-4055-a22a-814d43dc2bb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648565 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/81f53f8d-f6c6-4eff-b25c-1070003fbe99-config-volume\") pod \"dns-default-kmwrq\" (UID: \"81f53f8d-f6c6-4eff-b25c-1070003fbe99\") " pod="openshift-dns/dns-default-kmwrq" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648581 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-registry-tls\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648596 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8eda42c4-5734-45ee-8bd1-0ae7d0d89346-signing-key\") pod \"service-ca-9c57cc56f-29vhv\" (UID: \"8eda42c4-5734-45ee-8bd1-0ae7d0d89346\") " pod="openshift-service-ca/service-ca-9c57cc56f-29vhv" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648670 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpqb6\" (UniqueName: \"kubernetes.io/projected/81f53f8d-f6c6-4eff-b25c-1070003fbe99-kube-api-access-hpqb6\") pod \"dns-default-kmwrq\" (UID: \"81f53f8d-f6c6-4eff-b25c-1070003fbe99\") " pod="openshift-dns/dns-default-kmwrq" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648689 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5a165501-6ea2-4d63-9479-6ac760c6b116-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648702 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a6a1eb-17cf-4978-8940-fa976706f321-serving-cert\") pod \"service-ca-operator-777779d784-crqgx\" (UID: \"28a6a1eb-17cf-4978-8940-fa976706f321\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-crqgx" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648718 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60220df2-7a87-42d6-8ff6-f4d2ddec5160-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h5w7j\" (UID: \"60220df2-7a87-42d6-8ff6-f4d2ddec5160\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648735 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gshxt\" (UniqueName: \"kubernetes.io/projected/8ff6065e-3f22-49b0-a4b9-2e33bdf923b1-kube-api-access-gshxt\") pod \"multus-admission-controller-857f4d67dd-qqsjr\" (UID: \"8ff6065e-3f22-49b0-a4b9-2e33bdf923b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qqsjr" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648780 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d3ba0b09-bcff-4819-b79a-f499049cf31c-certs\") pod \"machine-config-server-hvwhh\" (UID: \"d3ba0b09-bcff-4819-b79a-f499049cf31c\") " pod="openshift-machine-config-operator/machine-config-server-hvwhh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648795 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjcdb\" (UniqueName: \"kubernetes.io/projected/8eda42c4-5734-45ee-8bd1-0ae7d0d89346-kube-api-access-sjcdb\") pod \"service-ca-9c57cc56f-29vhv\" (UID: \"8eda42c4-5734-45ee-8bd1-0ae7d0d89346\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-29vhv" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648809 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/81f53f8d-f6c6-4eff-b25c-1070003fbe99-metrics-tls\") pod \"dns-default-kmwrq\" (UID: \"81f53f8d-f6c6-4eff-b25c-1070003fbe99\") " pod="openshift-dns/dns-default-kmwrq" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648826 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d98d1e8-0f5b-4f71-9809-b9577f15e21a-metrics-certs\") pod \"router-default-5444994796-9j26p\" (UID: \"4d98d1e8-0f5b-4f71-9809-b9577f15e21a\") " pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648904 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ff6065e-3f22-49b0-a4b9-2e33bdf923b1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qqsjr\" (UID: \"8ff6065e-3f22-49b0-a4b9-2e33bdf923b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qqsjr" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648920 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/871c6d8c-2897-4c2c-938e-9137fe5320af-plugins-dir\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.648985 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d98d1e8-0f5b-4f71-9809-b9577f15e21a-service-ca-bundle\") pod \"router-default-5444994796-9j26p\" (UID: 
\"4d98d1e8-0f5b-4f71-9809-b9577f15e21a\") " pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649019 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff5bff76-f1fb-442b-81db-2ccc58aa61ef-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rcm5q\" (UID: \"ff5bff76-f1fb-442b-81db-2ccc58aa61ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649043 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4d98d1e8-0f5b-4f71-9809-b9577f15e21a-default-certificate\") pod \"router-default-5444994796-9j26p\" (UID: \"4d98d1e8-0f5b-4f71-9809-b9577f15e21a\") " pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649061 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14cc2826-c23c-4055-a22a-814d43dc2bb4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2rvnw\" (UID: \"14cc2826-c23c-4055-a22a-814d43dc2bb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649076 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/60220df2-7a87-42d6-8ff6-f4d2ddec5160-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h5w7j\" (UID: \"60220df2-7a87-42d6-8ff6-f4d2ddec5160\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649114 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4d98d1e8-0f5b-4f71-9809-b9577f15e21a-stats-auth\") pod \"router-default-5444994796-9j26p\" (UID: \"4d98d1e8-0f5b-4f71-9809-b9577f15e21a\") " pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649130 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-t4pqd\" (UID: \"9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649170 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2052a36c-60a1-4d21-ac02-9d7d22c16b5b-cert\") pod \"ingress-canary-7qgtc\" (UID: \"2052a36c-60a1-4d21-ac02-9d7d22c16b5b\") " pod="openshift-ingress-canary/ingress-canary-7qgtc" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649265 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-bound-sa-token\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649289 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4mpr\" (UniqueName: \"kubernetes.io/projected/2052a36c-60a1-4d21-ac02-9d7d22c16b5b-kube-api-access-k4mpr\") pod \"ingress-canary-7qgtc\" (UID: \"2052a36c-60a1-4d21-ac02-9d7d22c16b5b\") " pod="openshift-ingress-canary/ingress-canary-7qgtc" Feb 19 18:32:03 crc 
kubenswrapper[4813]: I0219 18:32:03.649307 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhrxt\" (UniqueName: \"kubernetes.io/projected/9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f-kube-api-access-qhrxt\") pod \"machine-config-operator-74547568cd-t4pqd\" (UID: \"9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649326 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f-images\") pod \"machine-config-operator-74547568cd-t4pqd\" (UID: \"9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649351 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5hmr\" (UniqueName: \"kubernetes.io/projected/4d98d1e8-0f5b-4f71-9809-b9577f15e21a-kube-api-access-b5hmr\") pod \"router-default-5444994796-9j26p\" (UID: \"4d98d1e8-0f5b-4f71-9809-b9577f15e21a\") " pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649377 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zclv\" (UniqueName: \"kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-kube-api-access-2zclv\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649402 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkwfc\" (UniqueName: 
\"kubernetes.io/projected/e63bf777-ed22-4aae-942e-74b8613ca4ce-kube-api-access-nkwfc\") pod \"collect-profiles-29525430-fq4mh\" (UID: \"e63bf777-ed22-4aae-942e-74b8613ca4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649417 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/871c6d8c-2897-4c2c-938e-9137fe5320af-mountpoint-dir\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649433 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/871c6d8c-2897-4c2c-938e-9137fe5320af-registration-dir\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649449 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5bff76-f1fb-442b-81db-2ccc58aa61ef-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rcm5q\" (UID: \"ff5bff76-f1fb-442b-81db-2ccc58aa61ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649464 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f-proxy-tls\") pod \"machine-config-operator-74547568cd-t4pqd\" (UID: \"9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" Feb 19 18:32:03 crc 
kubenswrapper[4813]: I0219 18:32:03.649498 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a6a1eb-17cf-4978-8940-fa976706f321-config\") pod \"service-ca-operator-777779d784-crqgx\" (UID: \"28a6a1eb-17cf-4978-8940-fa976706f321\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-crqgx" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649522 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cwnh\" (UniqueName: \"kubernetes.io/projected/871c6d8c-2897-4c2c-938e-9137fe5320af-kube-api-access-2cwnh\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649556 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5a165501-6ea2-4d63-9479-6ac760c6b116-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649572 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d3ba0b09-bcff-4819-b79a-f499049cf31c-node-bootstrap-token\") pod \"machine-config-server-hvwhh\" (UID: \"d3ba0b09-bcff-4819-b79a-f499049cf31c\") " pod="openshift-machine-config-operator/machine-config-server-hvwhh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649587 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/871c6d8c-2897-4c2c-938e-9137fe5320af-socket-dir\") pod 
\"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.649612 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5bff76-f1fb-442b-81db-2ccc58aa61ef-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rcm5q\" (UID: \"ff5bff76-f1fb-442b-81db-2ccc58aa61ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q" Feb 19 18:32:03 crc kubenswrapper[4813]: E0219 18:32:03.651749 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:04.151722781 +0000 UTC m=+143.377163322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.654156 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5bff76-f1fb-442b-81db-2ccc58aa61ef-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rcm5q\" (UID: \"ff5bff76-f1fb-442b-81db-2ccc58aa61ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.655865 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a165501-6ea2-4d63-9479-6ac760c6b116-trusted-ca\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.672834 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d98d1e8-0f5b-4f71-9809-b9577f15e21a-service-ca-bundle\") pod \"router-default-5444994796-9j26p\" (UID: \"4d98d1e8-0f5b-4f71-9809-b9577f15e21a\") " pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.673170 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14cc2826-c23c-4055-a22a-814d43dc2bb4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2rvnw\" (UID: \"14cc2826-c23c-4055-a22a-814d43dc2bb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.678774 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5a165501-6ea2-4d63-9479-6ac760c6b116-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.679016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4d98d1e8-0f5b-4f71-9809-b9577f15e21a-default-certificate\") pod \"router-default-5444994796-9j26p\" (UID: \"4d98d1e8-0f5b-4f71-9809-b9577f15e21a\") " pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 
18:32:03.683682 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9fx4h"] Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.684753 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d98d1e8-0f5b-4f71-9809-b9577f15e21a-metrics-certs\") pod \"router-default-5444994796-9j26p\" (UID: \"4d98d1e8-0f5b-4f71-9809-b9577f15e21a\") " pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.684781 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5bff76-f1fb-442b-81db-2ccc58aa61ef-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rcm5q\" (UID: \"ff5bff76-f1fb-442b-81db-2ccc58aa61ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.685675 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.687171 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5a165501-6ea2-4d63-9479-6ac760c6b116-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.690240 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-registry-tls\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.697936 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14cc2826-c23c-4055-a22a-814d43dc2bb4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2rvnw\" (UID: \"14cc2826-c23c-4055-a22a-814d43dc2bb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.700209 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff5bff76-f1fb-442b-81db-2ccc58aa61ef-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rcm5q\" (UID: \"ff5bff76-f1fb-442b-81db-2ccc58aa61ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.703904 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/5a165501-6ea2-4d63-9479-6ac760c6b116-registry-certificates\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.704750 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x6n5r"] Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.707704 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.714546 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-bound-sa-token\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.714819 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4d98d1e8-0f5b-4f71-9809-b9577f15e21a-stats-auth\") pod \"router-default-5444994796-9j26p\" (UID: \"4d98d1e8-0f5b-4f71-9809-b9577f15e21a\") " pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.716112 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx"] Feb 19 18:32:03 crc kubenswrapper[4813]: W0219 18:32:03.725135 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ddfe429_ea67_4b0c_bab1_bc72117fddda.slice/crio-c5a623d90986ad3c5bae3c2215aa2f9caf17992ca8dd382ae9e923220023eae2 WatchSource:0}: Error finding container 
c5a623d90986ad3c5bae3c2215aa2f9caf17992ca8dd382ae9e923220023eae2: Status 404 returned error can't find the container with id c5a623d90986ad3c5bae3c2215aa2f9caf17992ca8dd382ae9e923220023eae2 Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.752864 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpqb6\" (UniqueName: \"kubernetes.io/projected/81f53f8d-f6c6-4eff-b25c-1070003fbe99-kube-api-access-hpqb6\") pod \"dns-default-kmwrq\" (UID: \"81f53f8d-f6c6-4eff-b25c-1070003fbe99\") " pod="openshift-dns/dns-default-kmwrq" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.752903 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a6a1eb-17cf-4978-8940-fa976706f321-serving-cert\") pod \"service-ca-operator-777779d784-crqgx\" (UID: \"28a6a1eb-17cf-4978-8940-fa976706f321\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-crqgx" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.752924 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60220df2-7a87-42d6-8ff6-f4d2ddec5160-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h5w7j\" (UID: \"60220df2-7a87-42d6-8ff6-f4d2ddec5160\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.752961 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gshxt\" (UniqueName: \"kubernetes.io/projected/8ff6065e-3f22-49b0-a4b9-2e33bdf923b1-kube-api-access-gshxt\") pod \"multus-admission-controller-857f4d67dd-qqsjr\" (UID: \"8ff6065e-3f22-49b0-a4b9-2e33bdf923b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qqsjr" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.752982 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d3ba0b09-bcff-4819-b79a-f499049cf31c-certs\") pod \"machine-config-server-hvwhh\" (UID: \"d3ba0b09-bcff-4819-b79a-f499049cf31c\") " pod="openshift-machine-config-operator/machine-config-server-hvwhh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.753026 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjcdb\" (UniqueName: \"kubernetes.io/projected/8eda42c4-5734-45ee-8bd1-0ae7d0d89346-kube-api-access-sjcdb\") pod \"service-ca-9c57cc56f-29vhv\" (UID: \"8eda42c4-5734-45ee-8bd1-0ae7d0d89346\") " pod="openshift-service-ca/service-ca-9c57cc56f-29vhv" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.769834 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/81f53f8d-f6c6-4eff-b25c-1070003fbe99-metrics-tls\") pod \"dns-default-kmwrq\" (UID: \"81f53f8d-f6c6-4eff-b25c-1070003fbe99\") " pod="openshift-dns/dns-default-kmwrq" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.769934 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ff6065e-3f22-49b0-a4b9-2e33bdf923b1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qqsjr\" (UID: \"8ff6065e-3f22-49b0-a4b9-2e33bdf923b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qqsjr" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.769956 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/871c6d8c-2897-4c2c-938e-9137fe5320af-plugins-dir\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.769978 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/60220df2-7a87-42d6-8ff6-f4d2ddec5160-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h5w7j\" (UID: \"60220df2-7a87-42d6-8ff6-f4d2ddec5160\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770010 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-t4pqd\" (UID: \"9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770047 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2052a36c-60a1-4d21-ac02-9d7d22c16b5b-cert\") pod \"ingress-canary-7qgtc\" (UID: \"2052a36c-60a1-4d21-ac02-9d7d22c16b5b\") " pod="openshift-ingress-canary/ingress-canary-7qgtc" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770073 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770097 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4mpr\" (UniqueName: \"kubernetes.io/projected/2052a36c-60a1-4d21-ac02-9d7d22c16b5b-kube-api-access-k4mpr\") pod \"ingress-canary-7qgtc\" (UID: \"2052a36c-60a1-4d21-ac02-9d7d22c16b5b\") " pod="openshift-ingress-canary/ingress-canary-7qgtc" Feb 19 18:32:03 crc 
kubenswrapper[4813]: I0219 18:32:03.770111 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhrxt\" (UniqueName: \"kubernetes.io/projected/9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f-kube-api-access-qhrxt\") pod \"machine-config-operator-74547568cd-t4pqd\" (UID: \"9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770128 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f-images\") pod \"machine-config-operator-74547568cd-t4pqd\" (UID: \"9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770163 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/871c6d8c-2897-4c2c-938e-9137fe5320af-registration-dir\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770179 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkwfc\" (UniqueName: \"kubernetes.io/projected/e63bf777-ed22-4aae-942e-74b8613ca4ce-kube-api-access-nkwfc\") pod \"collect-profiles-29525430-fq4mh\" (UID: \"e63bf777-ed22-4aae-942e-74b8613ca4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770193 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/871c6d8c-2897-4c2c-938e-9137fe5320af-mountpoint-dir\") pod \"csi-hostpathplugin-ppc5s\" (UID: 
\"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770208 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f-proxy-tls\") pod \"machine-config-operator-74547568cd-t4pqd\" (UID: \"9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770223 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a6a1eb-17cf-4978-8940-fa976706f321-config\") pod \"service-ca-operator-777779d784-crqgx\" (UID: \"28a6a1eb-17cf-4978-8940-fa976706f321\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-crqgx" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770239 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cwnh\" (UniqueName: \"kubernetes.io/projected/871c6d8c-2897-4c2c-938e-9137fe5320af-kube-api-access-2cwnh\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770261 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d3ba0b09-bcff-4819-b79a-f499049cf31c-node-bootstrap-token\") pod \"machine-config-server-hvwhh\" (UID: \"d3ba0b09-bcff-4819-b79a-f499049cf31c\") " pod="openshift-machine-config-operator/machine-config-server-hvwhh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770277 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/871c6d8c-2897-4c2c-938e-9137fe5320af-socket-dir\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770296 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7jtd\" (UniqueName: \"kubernetes.io/projected/28579021-c721-4746-bed3-9b765dddbe11-kube-api-access-d7jtd\") pod \"migrator-59844c95c7-c5246\" (UID: \"28579021-c721-4746-bed3-9b765dddbe11\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c5246" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770312 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/871c6d8c-2897-4c2c-938e-9137fe5320af-csi-data-dir\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770329 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e63bf777-ed22-4aae-942e-74b8613ca4ce-config-volume\") pod \"collect-profiles-29525430-fq4mh\" (UID: \"e63bf777-ed22-4aae-942e-74b8613ca4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770349 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8eda42c4-5734-45ee-8bd1-0ae7d0d89346-signing-cabundle\") pod \"service-ca-9c57cc56f-29vhv\" (UID: \"8eda42c4-5734-45ee-8bd1-0ae7d0d89346\") " pod="openshift-service-ca/service-ca-9c57cc56f-29vhv" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770364 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-546jq\" (UniqueName: \"kubernetes.io/projected/d3ba0b09-bcff-4819-b79a-f499049cf31c-kube-api-access-546jq\") pod \"machine-config-server-hvwhh\" (UID: \"d3ba0b09-bcff-4819-b79a-f499049cf31c\") " pod="openshift-machine-config-operator/machine-config-server-hvwhh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770384 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw6pf\" (UniqueName: \"kubernetes.io/projected/28a6a1eb-17cf-4978-8940-fa976706f321-kube-api-access-nw6pf\") pod \"service-ca-operator-777779d784-crqgx\" (UID: \"28a6a1eb-17cf-4978-8940-fa976706f321\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-crqgx" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770399 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e63bf777-ed22-4aae-942e-74b8613ca4ce-secret-volume\") pod \"collect-profiles-29525430-fq4mh\" (UID: \"e63bf777-ed22-4aae-942e-74b8613ca4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770421 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wkmt\" (UniqueName: \"kubernetes.io/projected/60220df2-7a87-42d6-8ff6-f4d2ddec5160-kube-api-access-6wkmt\") pod \"marketplace-operator-79b997595-h5w7j\" (UID: \"60220df2-7a87-42d6-8ff6-f4d2ddec5160\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770457 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81f53f8d-f6c6-4eff-b25c-1070003fbe99-config-volume\") pod \"dns-default-kmwrq\" (UID: \"81f53f8d-f6c6-4eff-b25c-1070003fbe99\") " pod="openshift-dns/dns-default-kmwrq" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770471 
4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8eda42c4-5734-45ee-8bd1-0ae7d0d89346-signing-key\") pod \"service-ca-9c57cc56f-29vhv\" (UID: \"8eda42c4-5734-45ee-8bd1-0ae7d0d89346\") " pod="openshift-service-ca/service-ca-9c57cc56f-29vhv" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.770765 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qqrj6"] Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.771904 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rksx7"] Feb 19 18:32:03 crc kubenswrapper[4813]: E0219 18:32:03.773579 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:04.273567356 +0000 UTC m=+143.499007897 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.774395 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e63bf777-ed22-4aae-942e-74b8613ca4ce-config-volume\") pod \"collect-profiles-29525430-fq4mh\" (UID: \"e63bf777-ed22-4aae-942e-74b8613ca4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.775373 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5gzd8"] Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.775409 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dqj4z"] Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.775630 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/871c6d8c-2897-4c2c-938e-9137fe5320af-socket-dir\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.775827 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5hmr\" (UniqueName: \"kubernetes.io/projected/4d98d1e8-0f5b-4f71-9809-b9577f15e21a-kube-api-access-b5hmr\") pod \"router-default-5444994796-9j26p\" (UID: \"4d98d1e8-0f5b-4f71-9809-b9577f15e21a\") " pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 
18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.776199 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/871c6d8c-2897-4c2c-938e-9137fe5320af-plugins-dir\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.776420 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8eda42c4-5734-45ee-8bd1-0ae7d0d89346-signing-key\") pod \"service-ca-9c57cc56f-29vhv\" (UID: \"8eda42c4-5734-45ee-8bd1-0ae7d0d89346\") " pod="openshift-service-ca/service-ca-9c57cc56f-29vhv" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.776647 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-t4pqd\" (UID: \"9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.776930 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8eda42c4-5734-45ee-8bd1-0ae7d0d89346-signing-cabundle\") pod \"service-ca-9c57cc56f-29vhv\" (UID: \"8eda42c4-5734-45ee-8bd1-0ae7d0d89346\") " pod="openshift-service-ca/service-ca-9c57cc56f-29vhv" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.777169 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/81f53f8d-f6c6-4eff-b25c-1070003fbe99-metrics-tls\") pod \"dns-default-kmwrq\" (UID: \"81f53f8d-f6c6-4eff-b25c-1070003fbe99\") " pod="openshift-dns/dns-default-kmwrq" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.777323 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/871c6d8c-2897-4c2c-938e-9137fe5320af-csi-data-dir\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.777442 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f-images\") pod \"machine-config-operator-74547568cd-t4pqd\" (UID: \"9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.777856 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81f53f8d-f6c6-4eff-b25c-1070003fbe99-config-volume\") pod \"dns-default-kmwrq\" (UID: \"81f53f8d-f6c6-4eff-b25c-1070003fbe99\") " pod="openshift-dns/dns-default-kmwrq" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.777970 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/871c6d8c-2897-4c2c-938e-9137fe5320af-registration-dir\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.778289 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60220df2-7a87-42d6-8ff6-f4d2ddec5160-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h5w7j\" (UID: \"60220df2-7a87-42d6-8ff6-f4d2ddec5160\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.778408 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/871c6d8c-2897-4c2c-938e-9137fe5320af-mountpoint-dir\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.780180 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a6a1eb-17cf-4978-8940-fa976706f321-config\") pod \"service-ca-operator-777779d784-crqgx\" (UID: \"28a6a1eb-17cf-4978-8940-fa976706f321\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-crqgx" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.782049 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d3ba0b09-bcff-4819-b79a-f499049cf31c-certs\") pod \"machine-config-server-hvwhh\" (UID: \"d3ba0b09-bcff-4819-b79a-f499049cf31c\") " pod="openshift-machine-config-operator/machine-config-server-hvwhh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.782345 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/60220df2-7a87-42d6-8ff6-f4d2ddec5160-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h5w7j\" (UID: \"60220df2-7a87-42d6-8ff6-f4d2ddec5160\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.782563 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e63bf777-ed22-4aae-942e-74b8613ca4ce-secret-volume\") pod \"collect-profiles-29525430-fq4mh\" (UID: \"e63bf777-ed22-4aae-942e-74b8613ca4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.782830 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f-proxy-tls\") pod \"machine-config-operator-74547568cd-t4pqd\" (UID: \"9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.783508 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a6a1eb-17cf-4978-8940-fa976706f321-serving-cert\") pod \"service-ca-operator-777779d784-crqgx\" (UID: \"28a6a1eb-17cf-4978-8940-fa976706f321\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-crqgx" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.787101 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ff6065e-3f22-49b0-a4b9-2e33bdf923b1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qqsjr\" (UID: \"8ff6065e-3f22-49b0-a4b9-2e33bdf923b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qqsjr" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.787531 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d3ba0b09-bcff-4819-b79a-f499049cf31c-node-bootstrap-token\") pod \"machine-config-server-hvwhh\" (UID: \"d3ba0b09-bcff-4819-b79a-f499049cf31c\") " pod="openshift-machine-config-operator/machine-config-server-hvwhh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.788071 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zclv\" (UniqueName: \"kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-kube-api-access-2zclv\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.788284 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2052a36c-60a1-4d21-ac02-9d7d22c16b5b-cert\") pod \"ingress-canary-7qgtc\" (UID: \"2052a36c-60a1-4d21-ac02-9d7d22c16b5b\") " pod="openshift-ingress-canary/ingress-canary-7qgtc" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.789388 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcwgb\" (UniqueName: \"kubernetes.io/projected/14cc2826-c23c-4055-a22a-814d43dc2bb4-kube-api-access-bcwgb\") pod \"kube-storage-version-migrator-operator-b67b599dd-2rvnw\" (UID: \"14cc2826-c23c-4055-a22a-814d43dc2bb4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.818681 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpqb6\" (UniqueName: \"kubernetes.io/projected/81f53f8d-f6c6-4eff-b25c-1070003fbe99-kube-api-access-hpqb6\") pod \"dns-default-kmwrq\" (UID: \"81f53f8d-f6c6-4eff-b25c-1070003fbe99\") " pod="openshift-dns/dns-default-kmwrq" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.842107 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gshxt\" (UniqueName: \"kubernetes.io/projected/8ff6065e-3f22-49b0-a4b9-2e33bdf923b1-kube-api-access-gshxt\") pod \"multus-admission-controller-857f4d67dd-qqsjr\" (UID: \"8ff6065e-3f22-49b0-a4b9-2e33bdf923b1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qqsjr" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.862145 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjcdb\" (UniqueName: \"kubernetes.io/projected/8eda42c4-5734-45ee-8bd1-0ae7d0d89346-kube-api-access-sjcdb\") pod 
\"service-ca-9c57cc56f-29vhv\" (UID: \"8eda42c4-5734-45ee-8bd1-0ae7d0d89346\") " pod="openshift-service-ca/service-ca-9c57cc56f-29vhv" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.871578 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.871711 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:03 crc kubenswrapper[4813]: E0219 18:32:03.871966 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:04.371931622 +0000 UTC m=+143.597372163 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.872029 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:03 crc kubenswrapper[4813]: E0219 18:32:03.872437 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:04.37242645 +0000 UTC m=+143.597866991 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.880340 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cwnh\" (UniqueName: \"kubernetes.io/projected/871c6d8c-2897-4c2c-938e-9137fe5320af-kube-api-access-2cwnh\") pod \"csi-hostpathplugin-ppc5s\" (UID: \"871c6d8c-2897-4c2c-938e-9137fe5320af\") " pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.900654 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw6pf\" (UniqueName: \"kubernetes.io/projected/28a6a1eb-17cf-4978-8940-fa976706f321-kube-api-access-nw6pf\") pod \"service-ca-operator-777779d784-crqgx\" (UID: \"28a6a1eb-17cf-4978-8940-fa976706f321\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-crqgx" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.924497 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp"] Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.928173 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-546jq\" (UniqueName: \"kubernetes.io/projected/d3ba0b09-bcff-4819-b79a-f499049cf31c-kube-api-access-546jq\") pod \"machine-config-server-hvwhh\" (UID: \"d3ba0b09-bcff-4819-b79a-f499049cf31c\") " pod="openshift-machine-config-operator/machine-config-server-hvwhh" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.940563 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7jtd\" (UniqueName: \"kubernetes.io/projected/28579021-c721-4746-bed3-9b765dddbe11-kube-api-access-d7jtd\") pod \"migrator-59844c95c7-c5246\" (UID: \"28579021-c721-4746-bed3-9b765dddbe11\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c5246" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.965116 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wkmt\" (UniqueName: \"kubernetes.io/projected/60220df2-7a87-42d6-8ff6-f4d2ddec5160-kube-api-access-6wkmt\") pod \"marketplace-operator-79b997595-h5w7j\" (UID: \"60220df2-7a87-42d6-8ff6-f4d2ddec5160\") " pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.972658 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw" Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.973141 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:03 crc kubenswrapper[4813]: E0219 18:32:03.973498 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:04.473481493 +0000 UTC m=+143.698922034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:03 crc kubenswrapper[4813]: I0219 18:32:03.994376 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4mpr\" (UniqueName: \"kubernetes.io/projected/2052a36c-60a1-4d21-ac02-9d7d22c16b5b-kube-api-access-k4mpr\") pod \"ingress-canary-7qgtc\" (UID: \"2052a36c-60a1-4d21-ac02-9d7d22c16b5b\") " pod="openshift-ingress-canary/ingress-canary-7qgtc" Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.021526 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkwfc\" (UniqueName: \"kubernetes.io/projected/e63bf777-ed22-4aae-942e-74b8613ca4ce-kube-api-access-nkwfc\") pod \"collect-profiles-29525430-fq4mh\" (UID: \"e63bf777-ed22-4aae-942e-74b8613ca4ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.025924 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kmwrq" Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.027391 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mgsbg"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.031497 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.033474 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-74n48"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.039760 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-crqgx" Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.046627 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-29vhv" Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.047723 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhrxt\" (UniqueName: \"kubernetes.io/projected/9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f-kube-api-access-qhrxt\") pod \"machine-config-operator-74547568cd-t4pqd\" (UID: \"9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.058230 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qqsjr" Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.067733 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c5246" Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.072873 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.074358 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:04 crc kubenswrapper[4813]: E0219 18:32:04.074742 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:04.574728856 +0000 UTC m=+143.800169397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.080509 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7qgtc" Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.090372 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hvwhh" Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.114412 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.176769 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:04 crc kubenswrapper[4813]: E0219 18:32:04.177081 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:04.677067233 +0000 UTC m=+143.902507764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.245801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" event={"ID":"0ddfe429-ea67-4b0c-bab1-bc72117fddda","Type":"ContainerStarted","Data":"c5a623d90986ad3c5bae3c2215aa2f9caf17992ca8dd382ae9e923220023eae2"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.247021 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rksx7" event={"ID":"17f8fff7-991c-4a69-afb4-e40b048cde5c","Type":"ContainerStarted","Data":"093f932ab89e271e3ffe002679fb4becb42fff201ca501674fea2d4283ee9da4"} Feb 
19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.247642 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5gzd8" event={"ID":"c705e274-b996-46b7-8e3f-97dff9107a6f","Type":"ContainerStarted","Data":"b44294561ed21e0b54e038ec8852b33cebd87c4bb584d6273a92fa849bbe8efc"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.249262 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" event={"ID":"f144bc83-c0eb-4627-ab68-9ed862e2f402","Type":"ContainerStarted","Data":"0625cd2d141944ff6c02eb70b5c528da6114c14ce2534d13f530f63a63056ce1"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.251683 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qqrj6" event={"ID":"6f78f99d-d988-4a96-90be-2984414b9bfe","Type":"ContainerStarted","Data":"c50a99ce6d321169a7c370a942a3b920c01fe2310f463afe6d7e421fc512dc5d"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.253153 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" event={"ID":"6e3db18d-bbdd-4924-bb4c-d562ff4347b7","Type":"ContainerStarted","Data":"df437b9277aef41b93cdd4433f60e80a0c7dc4ebc395783c8b879a50b0668be4"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.253172 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" event={"ID":"6e3db18d-bbdd-4924-bb4c-d562ff4347b7","Type":"ContainerStarted","Data":"26bf1ef845d5efb4812663c55069c419d5d87c23b464b7e5812b34204d3d031e"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.258690 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" 
event={"ID":"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb","Type":"ContainerStarted","Data":"f8fb6e82c85375df1f2e8053a9d2eaa149e13c1bcb2ad617c0eaae3f83361aaa"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.265935 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rffpk"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.268379 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dqj4z" event={"ID":"9c2745dc-3e5e-4571-ba36-aa14c87c6336","Type":"ContainerStarted","Data":"f19fdf31675b98b4f4345d932a6e3f4222f710964a409bea9539311f612133e9"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.270957 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" event={"ID":"2884d383-8ee9-456c-8d6c-d41eaebd60e6","Type":"ContainerStarted","Data":"030a7d676cdef218f5b1824d72d6bc33377b63f8d056213977f4c8bdb1abff2b"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.276697 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.277839 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:04 crc kubenswrapper[4813]: E0219 18:32:04.278163 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 18:32:04.778150597 +0000 UTC m=+144.003591138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.278455 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.291811 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" event={"ID":"36d151c0-fa73-4e5f-a6de-580629bef8f1","Type":"ContainerStarted","Data":"7ce33db6188d47ccba48e259c194a29f0ecf940f044bde8a4fc1c9e3d01e78aa"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.295216 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" event={"ID":"36d151c0-fa73-4e5f-a6de-580629bef8f1","Type":"ContainerStarted","Data":"4ebd3598899c5a7ff6f7948bb549679da6b1416d62d97581917183e39d862300"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.295259 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" event={"ID":"624d3261-b732-4c7e-b1d9-56827d44c94f","Type":"ContainerStarted","Data":"ee0d57d2a9683f7247c500d27417dc8636d4b63013044d4e3f2c02ae1a16a930"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.295271 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-9fx4h" 
event={"ID":"14156cf1-57df-48bd-8723-1a7a083bab21","Type":"ContainerStarted","Data":"42acb84b5c0673f733f50ff0fbbf561e4b51959c78a9a7d3c8659357442a0780"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.295322 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.300321 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" event={"ID":"8ef1d23f-3a55-4fef-ba88-96b5ba04313f","Type":"ContainerStarted","Data":"ac485caed71825240cecf15ebf35b27722d290e7d514738274cb24bb5b54b06c"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.301358 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9j26p" event={"ID":"4d98d1e8-0f5b-4f71-9809-b9577f15e21a","Type":"ContainerStarted","Data":"cc32d5e5171b043c7882b9fe9205a48bc990343cf6fce91a26c7b441bf079fd7"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.307251 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-74n48" event={"ID":"692367ab-53d4-4c1c-aa46-c70e247b848a","Type":"ContainerStarted","Data":"eecd9e7f0c398ae57ccd49024c453d97ab84584c863e7214596c740fa87a9183"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.308060 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp" event={"ID":"5eeb4175-f38f-4b50-a302-194880f33a30","Type":"ContainerStarted","Data":"433505dffa8129f5e4aad34c64e302a68553a156c8b4f378ec595cf828e77aa3"} Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.316441 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.333016 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.338161 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.372652 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.379201 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:04 crc kubenswrapper[4813]: E0219 18:32:04.379477 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:04.879456524 +0000 UTC m=+144.104897065 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.379519 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:04 crc kubenswrapper[4813]: E0219 18:32:04.380612 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:04.88059712 +0000 UTC m=+144.106037661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.389704 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.397595 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-b84zp"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.400448 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.403182 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.480540 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:04 crc kubenswrapper[4813]: E0219 18:32:04.480685 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 18:32:04.980665135 +0000 UTC m=+144.206105676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.481032 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.481144 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q"] Feb 19 18:32:04 crc kubenswrapper[4813]: E0219 18:32:04.481485 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:04.981476842 +0000 UTC m=+144.206917383 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.488084 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.588302 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kmwrq"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.589372 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:04 crc kubenswrapper[4813]: E0219 18:32:04.591201 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:05.091174484 +0000 UTC m=+144.316615025 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.603504 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:04 crc kubenswrapper[4813]: E0219 18:32:04.608903 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:05.108876117 +0000 UTC m=+144.334316658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:04 crc kubenswrapper[4813]: W0219 18:32:04.633186 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36af60b0_5e67_4902_8891_adb18585316a.slice/crio-7aa06f4aabebcfe4f784c0b18302c60ff94e321a3ebe5ff1e478c769fe88d4f6 WatchSource:0}: Error finding container 7aa06f4aabebcfe4f784c0b18302c60ff94e321a3ebe5ff1e478c769fe88d4f6: Status 404 returned error can't find the container with id 7aa06f4aabebcfe4f784c0b18302c60ff94e321a3ebe5ff1e478c769fe88d4f6 Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.650572 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.675114 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-crqgx"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.722275 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:04 crc kubenswrapper[4813]: E0219 18:32:04.724264 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:05.224234527 +0000 UTC m=+144.449675068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.724460 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:04 crc kubenswrapper[4813]: E0219 18:32:04.724796 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:05.224788679 +0000 UTC m=+144.450229220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.729734 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-29vhv"] Feb 19 18:32:04 crc kubenswrapper[4813]: W0219 18:32:04.787072 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode63bf777_ed22_4aae_942e_74b8613ca4ce.slice/crio-b5052808856778291d25d784120abaf014e0633cb7e2e18c21565efb51d077a6 WatchSource:0}: Error finding container b5052808856778291d25d784120abaf014e0633cb7e2e18c21565efb51d077a6: Status 404 returned error can't find the container with id b5052808856778291d25d784120abaf014e0633cb7e2e18c21565efb51d077a6 Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.789735 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-c5246"] Feb 19 18:32:04 crc kubenswrapper[4813]: W0219 18:32:04.810235 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28a6a1eb_17cf_4978_8940_fa976706f321.slice/crio-dcd1967ca9fba9642c1a3fd878ffa3311da4090e63ebd3280fddab7fd8bb6c34 WatchSource:0}: Error finding container dcd1967ca9fba9642c1a3fd878ffa3311da4090e63ebd3280fddab7fd8bb6c34: Status 404 returned error can't find the container with id dcd1967ca9fba9642c1a3fd878ffa3311da4090e63ebd3280fddab7fd8bb6c34 Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.831596 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:04 crc kubenswrapper[4813]: E0219 18:32:04.831919 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:05.331904752 +0000 UTC m=+144.557345293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.880611 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7qgtc"] Feb 19 18:32:04 crc kubenswrapper[4813]: I0219 18:32:04.934145 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:04 crc kubenswrapper[4813]: E0219 18:32:04.934490 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-19 18:32:05.434473621 +0000 UTC m=+144.659914162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.035744 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:05 crc kubenswrapper[4813]: E0219 18:32:05.035886 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:05.535857773 +0000 UTC m=+144.761298324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.036115 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:05 crc kubenswrapper[4813]: E0219 18:32:05.036473 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:05.536458567 +0000 UTC m=+144.761899108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.140683 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:05 crc kubenswrapper[4813]: E0219 18:32:05.140873 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:05.640845562 +0000 UTC m=+144.866286103 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.141497 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:05 crc kubenswrapper[4813]: E0219 18:32:05.141818 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:05.641807418 +0000 UTC m=+144.867247959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.192243 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd"] Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.234066 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw"] Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.246631 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:05 crc kubenswrapper[4813]: E0219 18:32:05.247223 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:05.747203351 +0000 UTC m=+144.972643892 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.251980 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qqsjr"] Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.252249 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ppc5s"] Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.314617 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h5w7j"] Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.334709 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q" event={"ID":"ff5bff76-f1fb-442b-81db-2ccc58aa61ef","Type":"ContainerStarted","Data":"873569b8304552e8e2d34208601d17b657fed94039df5322b31ffc754cbb0325"} Feb 19 18:32:05 crc kubenswrapper[4813]: W0219 18:32:05.339532 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60220df2_7a87_42d6_8ff6_f4d2ddec5160.slice/crio-9d80ee95d049cc65b41aeb0934ed201c9b53d1fe220ae7edb4f92ca964932a59 WatchSource:0}: Error finding container 9d80ee95d049cc65b41aeb0934ed201c9b53d1fe220ae7edb4f92ca964932a59: Status 404 returned error can't find the container with id 9d80ee95d049cc65b41aeb0934ed201c9b53d1fe220ae7edb4f92ca964932a59 Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.340240 4813 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-console/downloads-7954f5f757-5gzd8" event={"ID":"c705e274-b996-46b7-8e3f-97dff9107a6f","Type":"ContainerStarted","Data":"899bf7e9fb46d466c0f207dfe884606d3902db587c58ce0307d98f30fe48985a"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.340834 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5gzd8"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.341890 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-74n48" event={"ID":"692367ab-53d4-4c1c-aa46-c70e247b848a","Type":"ContainerStarted","Data":"0053ed9a59ca455a132862b0ac671b9e3f2252704fc976f65229dcbb7dea02fe"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.342901 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7" event={"ID":"87c78a23-ef77-4a1e-ba67-45f3a4adccfb","Type":"ContainerStarted","Data":"54f7c9517a658c9fa97ba2fab93e114114dfd8f39ad3caddb8ced30c4b2ea497"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.343409 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-5gzd8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.343453 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5gzd8" podUID="c705e274-b996-46b7-8e3f-97dff9107a6f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.347899 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k"
Feb 19 18:32:05 crc kubenswrapper[4813]: E0219 18:32:05.348260 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:05.848246313 +0000 UTC m=+145.073686854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.349886 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-crqgx" event={"ID":"28a6a1eb-17cf-4978-8940-fa976706f321","Type":"ContainerStarted","Data":"dcd1967ca9fba9642c1a3fd878ffa3311da4090e63ebd3280fddab7fd8bb6c34"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.361236 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kmwrq" event={"ID":"81f53f8d-f6c6-4eff-b25c-1070003fbe99","Type":"ContainerStarted","Data":"e8410bfdad6aa06d0dc8a1157113e0e872deccee8291768b402d5a2032532a6c"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.364468 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" event={"ID":"e63bf777-ed22-4aae-942e-74b8613ca4ce","Type":"ContainerStarted","Data":"b5052808856778291d25d784120abaf014e0633cb7e2e18c21565efb51d077a6"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.372400 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq" event={"ID":"cf8702bc-7079-425a-94d6-ae8adc8414f2","Type":"ContainerStarted","Data":"895273bafe69da55429c80cb8c8c975496b5e9afc21b7c61b89b8461e081413a"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.374897 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-29vhv" event={"ID":"8eda42c4-5734-45ee-8bd1-0ae7d0d89346","Type":"ContainerStarted","Data":"031c661bc1d72f5d3803cb10a42307a4ea7d833f4ebe091a91181bc7b3478f60"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.379798 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9j26p" event={"ID":"4d98d1e8-0f5b-4f71-9809-b9577f15e21a","Type":"ContainerStarted","Data":"e17321ff7c90f58b4973aeaa6914590d5290a30840f60308c6db9b56b289a914"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.389759 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" event={"ID":"36af60b0-5e67-4902-8891-adb18585316a","Type":"ContainerStarted","Data":"7aa06f4aabebcfe4f784c0b18302c60ff94e321a3ebe5ff1e478c769fe88d4f6"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.395380 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qqsjr" event={"ID":"8ff6065e-3f22-49b0-a4b9-2e33bdf923b1","Type":"ContainerStarted","Data":"8c6cec06487a0d7c63442957c83df01eb4787b63c828aaca5972c75be5d6ff88"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.398445 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" event={"ID":"361629f6-7c8a-4523-99e4-c7b152588855","Type":"ContainerStarted","Data":"1e738b519497a9f00d3d626ad37dd6757f496a354d78eb794914248fd67b28d7"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.400088 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c5246" event={"ID":"28579021-c721-4746-bed3-9b765dddbe11","Type":"ContainerStarted","Data":"ef549a07222ff18f73f9b3b1b5d7d6338d5f3a36da430936e930e1cbdb1bd186"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.413254 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q" event={"ID":"3d471333-dfe7-45ec-bb58-f5020bca76cd","Type":"ContainerStarted","Data":"c0d54b3cca24767161b85fff429b6fecb67c9bcaab068e65466e5cc6cf1dfbc8"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.413315 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q" event={"ID":"3d471333-dfe7-45ec-bb58-f5020bca76cd","Type":"ContainerStarted","Data":"d139d2caba85e8db738e4804a67034b6b55aec75219d595a26da1f23c7798aaf"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.420125 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-9fx4h" event={"ID":"14156cf1-57df-48bd-8723-1a7a083bab21","Type":"ContainerStarted","Data":"e7cd62c4c6106e85fcf62478f3dab63ab78135ee75aa772d53c9b2e2e7115cf0"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.420552 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-9fx4h"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.421481 4813 patch_prober.go:28] interesting pod/console-operator-58897d9998-9fx4h container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.421525 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-9fx4h" podUID="14156cf1-57df-48bd-8723-1a7a083bab21" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.426203 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp" event={"ID":"7f3b7488-7c5c-4689-9c41-eacd57fdce7d","Type":"ContainerStarted","Data":"f578568da40d761463619fa0ccc6983ce0dc2dc6bf2462daf7617b16da23cc1d"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.427642 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" event={"ID":"209ad545-4905-4130-baf8-1c3e9576e788","Type":"ContainerStarted","Data":"29f4925460be949e1d32bc604c66e05d6005b4ab1ffab93b1a24795073e068e6"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.429662 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" event={"ID":"0ddfe429-ea67-4b0c-bab1-bc72117fddda","Type":"ContainerStarted","Data":"f1faf2f7e9ab2ddaee48139255b08e48e159a8d3e8a9c35ce3ad6b5b08c169f4"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.431686 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.449152 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 18:32:05 crc kubenswrapper[4813]: E0219 18:32:05.449369 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:05.949287014 +0000 UTC m=+145.174727555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.450296 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k"
Feb 19 18:32:05 crc kubenswrapper[4813]: E0219 18:32:05.452791 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:05.952776166 +0000 UTC m=+145.178216707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.453404 4813 generic.go:334] "Generic (PLEG): container finished" podID="8ef1d23f-3a55-4fef-ba88-96b5ba04313f" containerID="6af5c79d775545f137027c14265e07a84877b1d793a1325ff6c6180972f333bd" exitCode=0
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.453812 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" event={"ID":"8ef1d23f-3a55-4fef-ba88-96b5ba04313f","Type":"ContainerDied","Data":"6af5c79d775545f137027c14265e07a84877b1d793a1325ff6c6180972f333bd"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.461978 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-fdzh8" podStartSLOduration=124.461959006 podStartE2EDuration="2m4.461959006s" podCreationTimestamp="2026-02-19 18:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:05.460861034 +0000 UTC m=+144.686301575" watchObservedRunningTime="2026-02-19 18:32:05.461959006 +0000 UTC m=+144.687399557"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.463175 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" event={"ID":"f442d86c-f2fa-4597-a3c7-d462dd0aa9f6","Type":"ContainerStarted","Data":"dab8c50f889f3c690aef63ded91355ea1f9815fd92a085599c92a7112d64b1cc"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.463210 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" event={"ID":"f442d86c-f2fa-4597-a3c7-d462dd0aa9f6","Type":"ContainerStarted","Data":"a0c4aa4b00ac95e4c3091bbded5712ff8c0278c5255d6c316cbc15eb2c04d756"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.464236 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.478174 4813 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hkvpr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body=
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.478231 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" podUID="f442d86c-f2fa-4597-a3c7-d462dd0aa9f6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.503663 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" event={"ID":"f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1","Type":"ContainerStarted","Data":"57d40cb566a82c4993a846d0e525974c6b8f23f55de94022625cf925136ab27b"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.505197 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" podStartSLOduration=123.505175575 podStartE2EDuration="2m3.505175575s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:05.503882061 +0000 UTC m=+144.729322602" watchObservedRunningTime="2026-02-19 18:32:05.505175575 +0000 UTC m=+144.730616126"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.506595 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp" event={"ID":"5eeb4175-f38f-4b50-a302-194880f33a30","Type":"ContainerStarted","Data":"56dcbaebf2bc0c5fbfdb4ab637b0b6ee162351d873f18fe517d3d82fb9cac964"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.524544 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" event={"ID":"2c70e2d1-cf19-4a67-880f-e52e61ca6953","Type":"ContainerStarted","Data":"4dd0c195e182ffe8b31e3679f167318036b4de7b93159a3ebf8d4456b8f1f7e0"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.533355 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dqj4z" event={"ID":"9c2745dc-3e5e-4571-ba36-aa14c87c6336","Type":"ContainerStarted","Data":"0ccc54834bebf1ac902b60be67dfa35caf9184a6cd116e45ccacc269c7cc9f24"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.543112 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5gzd8" podStartSLOduration=123.543096988 podStartE2EDuration="2m3.543096988s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:05.542171144 +0000 UTC m=+144.767611685" watchObservedRunningTime="2026-02-19 18:32:05.543096988 +0000 UTC m=+144.768537529"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.555762 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 18:32:05 crc kubenswrapper[4813]: E0219 18:32:05.558525 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:06.058487857 +0000 UTC m=+145.283928408 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.558914 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k"
Feb 19 18:32:05 crc kubenswrapper[4813]: E0219 18:32:05.568947 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:06.068932001 +0000 UTC m=+145.294372542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.578751 4813 generic.go:334] "Generic (PLEG): container finished" podID="17f8fff7-991c-4a69-afb4-e40b048cde5c" containerID="820fb1d5ec06c3f63eeff86fff8cc620556f9d7a08b2f23a0e26f2989c24966b" exitCode=0
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.579676 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rksx7" event={"ID":"17f8fff7-991c-4a69-afb4-e40b048cde5c","Type":"ContainerDied","Data":"820fb1d5ec06c3f63eeff86fff8cc620556f9d7a08b2f23a0e26f2989c24966b"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.584935 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp" podStartSLOduration=124.584919526 podStartE2EDuration="2m4.584919526s" podCreationTimestamp="2026-02-19 18:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:05.584862933 +0000 UTC m=+144.810303474" watchObservedRunningTime="2026-02-19 18:32:05.584919526 +0000 UTC m=+144.810360067"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.642444 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qqrj6" event={"ID":"6f78f99d-d988-4a96-90be-2984414b9bfe","Type":"ContainerStarted","Data":"0cc2c3c8b308a2a3f3382cce6197e834646037272425706eb3d972629c823139"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.644264 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw" event={"ID":"5a8ad966-7829-4e5b-9665-5d07b39e882f","Type":"ContainerStarted","Data":"754777285d8b9a579e0e0314240bcac3dae4be0adf9928929f1f03b1c6be627d"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.648038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7qgtc" event={"ID":"2052a36c-60a1-4d21-ac02-9d7d22c16b5b","Type":"ContainerStarted","Data":"179181b00fbacf0ba1cefaa88fd0dc70a20a05cb0c1e388071dbfb4b42691fdc"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.654686 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" event={"ID":"36d151c0-fa73-4e5f-a6de-580629bef8f1","Type":"ContainerStarted","Data":"e7cc2cde2aba1cebe8503c8743831a3b3d9bbae38443eef79822c5508b9c3c99"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.656219 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" event={"ID":"624d3261-b732-4c7e-b1d9-56827d44c94f","Type":"ContainerStarted","Data":"de9a5d7bb578e96bdf446f3fd9373f30b317c359a855eb55f889a051a9609d96"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.656761 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.660048 4813 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mgsbg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.660083 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" podUID="624d3261-b732-4c7e-b1d9-56827d44c94f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.660347 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.660622 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" event={"ID":"9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f","Type":"ContainerStarted","Data":"39b9385d7aa12bb46b68405272d3f296eb31a4f05ea1eb47f89b61002eb13401"}
Feb 19 18:32:05 crc kubenswrapper[4813]: E0219 18:32:05.660709 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:06.160693837 +0000 UTC m=+145.386134378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.660767 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k"
Feb 19 18:32:05 crc kubenswrapper[4813]: E0219 18:32:05.661838 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:06.161829792 +0000 UTC m=+145.387270333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.662427 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-74n48" podStartSLOduration=123.662405635 podStartE2EDuration="2m3.662405635s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:05.656497184 +0000 UTC m=+144.881937725" watchObservedRunningTime="2026-02-19 18:32:05.662405635 +0000 UTC m=+144.887846176"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.663441 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-9fx4h" podStartSLOduration=123.663434145 podStartE2EDuration="2m3.663434145s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:05.635067395 +0000 UTC m=+144.860507966" watchObservedRunningTime="2026-02-19 18:32:05.663434145 +0000 UTC m=+144.888874686"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.683236 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hvwhh" event={"ID":"d3ba0b09-bcff-4819-b79a-f499049cf31c","Type":"ContainerStarted","Data":"bc227b565f5e974f34f0fef4dcdb06b45c838bc4034fb25c1f5919ef7736c62a"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.686566 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw" event={"ID":"14cc2826-c23c-4055-a22a-814d43dc2bb4","Type":"ContainerStarted","Data":"11adef0b2770bf2b8299ad002f02224e1751a7af0ca617df079a7054d9112a62"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.689799 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rffpk" event={"ID":"b89f8e03-2720-4c2e-a7fc-e43dda75cc4c","Type":"ContainerStarted","Data":"b7a73afbd284d1aa04e6b50ff15f27f1f9574463471ff384c719c8d26b58993a"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.689823 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rffpk" event={"ID":"b89f8e03-2720-4c2e-a7fc-e43dda75cc4c","Type":"ContainerStarted","Data":"8f3817840309254f57e42086d1c153a7087303ef10bc57b41992a3e43398d889"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.692941 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" event={"ID":"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb","Type":"ContainerStarted","Data":"b8b8dec88c60522687dfc8b48def3021bcfcd3fc3ee0beabf8ea755783a11e9f"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.693771 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.698924 4813 generic.go:334] "Generic (PLEG): container finished" podID="2884d383-8ee9-456c-8d6c-d41eaebd60e6" containerID="009d64f31ea3596aa148f1fdaa5a7b2ba32902e696179319d0fd3f5f8e37addf" exitCode=0
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.700022 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" event={"ID":"2884d383-8ee9-456c-8d6c-d41eaebd60e6","Type":"ContainerDied","Data":"009d64f31ea3596aa148f1fdaa5a7b2ba32902e696179319d0fd3f5f8e37addf"}
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.701925 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-9j26p" podStartSLOduration=123.701901409 podStartE2EDuration="2m3.701901409s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:05.698745756 +0000 UTC m=+144.924186307" watchObservedRunningTime="2026-02-19 18:32:05.701901409 +0000 UTC m=+144.927341950"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.704122 4813 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-x6n5r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.29:6443/healthz\": dial tcp 10.217.0.29:6443: connect: connection refused" start-of-body=
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.704183 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" podUID="f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.29:6443/healthz\": dial tcp 10.217.0.29:6443: connect: connection refused"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.763092 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 18:32:05 crc kubenswrapper[4813]: E0219 18:32:05.764957 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:06.264923532 +0000 UTC m=+145.490364123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.778494 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.810780 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" podStartSLOduration=124.810764583 podStartE2EDuration="2m4.810764583s" podCreationTimestamp="2026-02-19 18:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:05.809951685 +0000 UTC m=+145.035392226" watchObservedRunningTime="2026-02-19 18:32:05.810764583 +0000 UTC m=+145.036205124"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.837245 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-dqj4z" podStartSLOduration=123.837221462 podStartE2EDuration="2m3.837221462s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:05.83683756 +0000 UTC m=+145.062278101" watchObservedRunningTime="2026-02-19 18:32:05.837221462 +0000 UTC m=+145.062662003"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.858622 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" podStartSLOduration=123.858607699 podStartE2EDuration="2m3.858607699s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:05.858548005 +0000 UTC m=+145.083988556" watchObservedRunningTime="2026-02-19 18:32:05.858607699 +0000 UTC m=+145.084048240"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.865390 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k"
Feb 19 18:32:05 crc kubenswrapper[4813]: E0219 18:32:05.865994 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:06.365969444 +0000 UTC m=+145.591409985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.874207 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-9j26p"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.876562 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.876645 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.897084 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-dlph2" podStartSLOduration=123.896872441 podStartE2EDuration="2m3.896872441s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:05.895307941 +0000 UTC m=+145.120748482" watchObservedRunningTime="2026-02-19 18:32:05.896872441 +0000 UTC m=+145.122312982"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.942175 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s44zp" podStartSLOduration=123.942155269 podStartE2EDuration="2m3.942155269s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:05.942138248 +0000 UTC m=+145.167578819" watchObservedRunningTime="2026-02-19 18:32:05.942155269 +0000 UTC m=+145.167595810"
Feb 19 18:32:05 crc kubenswrapper[4813]: I0219 18:32:05.966340 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 18:32:05 crc kubenswrapper[4813]: E0219 18:32:05.967049 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:06.467026606 +0000 UTC m=+145.692467147 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.041699 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" podStartSLOduration=124.041683922 podStartE2EDuration="2m4.041683922s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:06.041150522 +0000 UTC m=+145.266591093" watchObservedRunningTime="2026-02-19 18:32:06.041683922 +0000 UTC m=+145.267124463" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.069781 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:06 crc kubenswrapper[4813]: E0219 18:32:06.070103 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:06.570089475 +0000 UTC m=+145.795530016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.172860 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:06 crc kubenswrapper[4813]: E0219 18:32:06.173355 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:06.673336504 +0000 UTC m=+145.898777045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.275922 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:06 crc kubenswrapper[4813]: E0219 18:32:06.277055 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:06.777043759 +0000 UTC m=+146.002484300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.378078 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:06 crc kubenswrapper[4813]: E0219 18:32:06.378367 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:06.878343106 +0000 UTC m=+146.103783647 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.378531 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:06 crc kubenswrapper[4813]: E0219 18:32:06.379024 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:06.879008945 +0000 UTC m=+146.104449486 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.482843 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:06 crc kubenswrapper[4813]: E0219 18:32:06.483543 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:06.983529478 +0000 UTC m=+146.208970019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.585028 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:06 crc kubenswrapper[4813]: E0219 18:32:06.585429 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:07.085414717 +0000 UTC m=+146.310855278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.686610 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:06 crc kubenswrapper[4813]: E0219 18:32:06.686762 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:07.186733385 +0000 UTC m=+146.412173926 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.687164 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:06 crc kubenswrapper[4813]: E0219 18:32:06.687460 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:07.187449547 +0000 UTC m=+146.412890088 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.735395 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" event={"ID":"361629f6-7c8a-4523-99e4-c7b152588855","Type":"ContainerStarted","Data":"30cb26be208a4b0489fa94ce63a4f6c57c4a45a7269456b9fbe082b40610a130"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.755958 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" event={"ID":"60220df2-7a87-42d6-8ff6-f4d2ddec5160","Type":"ContainerStarted","Data":"f858c2d1694d80b9f81743707727ddef00e94eba84d6371de2bd96b282ab0915"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.756015 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" event={"ID":"60220df2-7a87-42d6-8ff6-f4d2ddec5160","Type":"ContainerStarted","Data":"9d80ee95d049cc65b41aeb0934ed201c9b53d1fe220ae7edb4f92ca964932a59"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.756428 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.757407 4813 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-h5w7j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" 
start-of-body= Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.757445 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" podUID="60220df2-7a87-42d6-8ff6-f4d2ddec5160" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.757612 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" event={"ID":"871c6d8c-2897-4c2c-938e-9137fe5320af","Type":"ContainerStarted","Data":"b0fa4f1a8b4bb48ef3b8eeea62527a2483d2e8763893bcb2f7b0a7e6b94336fd"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.758790 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq" event={"ID":"cf8702bc-7079-425a-94d6-ae8adc8414f2","Type":"ContainerStarted","Data":"7f4878d6321c835aaf6b1869c12d0a92c3cbdf3a523fbeda2d9f6b6c01204b49"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.763738 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qqrj6" event={"ID":"6f78f99d-d988-4a96-90be-2984414b9bfe","Type":"ContainerStarted","Data":"66a8aac9706b06519a3020bdbe8893fda4d0b8a56105890939494fc79b0e6828"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.765012 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-b84zp" podStartSLOduration=124.764990929 podStartE2EDuration="2m4.764990929s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:06.76432209 +0000 UTC m=+145.989762631" watchObservedRunningTime="2026-02-19 18:32:06.764990929 +0000 UTC m=+145.990431480" 
Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.771443 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" event={"ID":"e63bf777-ed22-4aae-942e-74b8613ca4ce","Type":"ContainerStarted","Data":"90cc49563f9a6346069d787c35781a06ed6619b0e0ec51277e1ac74f238b3c08"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.780753 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4mmbp" event={"ID":"7f3b7488-7c5c-4689-9c41-eacd57fdce7d","Type":"ContainerStarted","Data":"606577e3da8b017fd975efb0d77816baf625844980af4a1c18bdc3d6a19e4241"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.784318 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qqsjr" event={"ID":"8ff6065e-3f22-49b0-a4b9-2e33bdf923b1","Type":"ContainerStarted","Data":"fb21ea53880577c6702cdc8baa264f1a6ae56c802e90f831c4e548bcada8c475"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.787687 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:06 crc kubenswrapper[4813]: E0219 18:32:06.788125 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:07.288098626 +0000 UTC m=+146.513539167 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.789034 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:06 crc kubenswrapper[4813]: E0219 18:32:06.789770 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:07.289758331 +0000 UTC m=+146.515198872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.796563 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rksx7" event={"ID":"17f8fff7-991c-4a69-afb4-e40b048cde5c","Type":"ContainerStarted","Data":"8b7c19e10f2d9429988015cf9b953f58c9cb37de6a7a7e32f37a2c72936990c2"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.810489 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw" event={"ID":"14cc2826-c23c-4055-a22a-814d43dc2bb4","Type":"ContainerStarted","Data":"9fc2209b151a1dada17f82a0a7edd1f3420dbaed9e7fb7ccb09cdbda36e62b3f"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.823365 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q" event={"ID":"3d471333-dfe7-45ec-bb58-f5020bca76cd","Type":"ContainerStarted","Data":"fc7e506018c044314419b6ad5494bdb97f5d350837ee8a992be0bcb50fb7f9d0"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.823442 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.834761 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-crqgx" 
event={"ID":"28a6a1eb-17cf-4978-8940-fa976706f321","Type":"ContainerStarted","Data":"af52655e190c4fbbea9e11b6de34f7dbe2a9319737d420e32d325454f348639c"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.838350 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" event={"ID":"8ef1d23f-3a55-4fef-ba88-96b5ba04313f","Type":"ContainerStarted","Data":"6654fdd25e608a8fd6a7b8e998fedebc42791182666596b773356fff83109405"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.838826 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.839992 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kmwrq" event={"ID":"81f53f8d-f6c6-4eff-b25c-1070003fbe99","Type":"ContainerStarted","Data":"c0fdb7874c4c1ba687ea1e4acad2721dbc38f57bded4500c421bb8a8912ca04e"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.840016 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kmwrq" event={"ID":"81f53f8d-f6c6-4eff-b25c-1070003fbe99","Type":"ContainerStarted","Data":"48ee7c2032d449a793d8b5fc27962b3fae0f103340db925afef72486a14d3514"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.840341 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-kmwrq" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.841195 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-29vhv" event={"ID":"8eda42c4-5734-45ee-8bd1-0ae7d0d89346","Type":"ContainerStarted","Data":"6ee3dbba3050c7875ce7152d64990987ff7b1acc7bcc492733bddb3e86ceff04"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.861266 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw" event={"ID":"5a8ad966-7829-4e5b-9665-5d07b39e882f","Type":"ContainerStarted","Data":"38655462d3fa358becaecb1f7f6dc861b3546b9a969f6ab17d176d1b2345b81d"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.866005 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" event={"ID":"2884d383-8ee9-456c-8d6c-d41eaebd60e6","Type":"ContainerStarted","Data":"6f92f21bb2c38314b87663d0325ba9988dfa4fc0c2256b4efb6aa5c875ffb320"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.867575 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rffpk" event={"ID":"b89f8e03-2720-4c2e-a7fc-e43dda75cc4c","Type":"ContainerStarted","Data":"64aebb557de05e6f9b5cc57c8acacc5c38188174311ec86f1252d436a72ed811"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.874868 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7qgtc" event={"ID":"2052a36c-60a1-4d21-ac02-9d7d22c16b5b","Type":"ContainerStarted","Data":"807fc76067c6c9f68926c490105e28a07a8e2919adc05508c7d3280d8b2dfe9f"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.879650 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:06 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:06 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:06 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.879698 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.888452 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" podStartSLOduration=124.888435426 podStartE2EDuration="2m4.888435426s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:06.851706293 +0000 UTC m=+146.077146834" watchObservedRunningTime="2026-02-19 18:32:06.888435426 +0000 UTC m=+146.113875967" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.890800 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:06 crc kubenswrapper[4813]: E0219 18:32:06.892074 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:07.392059916 +0000 UTC m=+146.617500457 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.902051 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" event={"ID":"2c70e2d1-cf19-4a67-880f-e52e61ca6953","Type":"ContainerStarted","Data":"6bd0be3ab325a0da646a9d17d71b82c9629250e7e1070a22bc43fa92c195285b"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.902094 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" event={"ID":"2c70e2d1-cf19-4a67-880f-e52e61ca6953","Type":"ContainerStarted","Data":"b9295401e27b4657ab0ecb42efd8258081b2dc716df1ef752de6d216b45d5141"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.925128 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qqrj6" podStartSLOduration=124.925108927 podStartE2EDuration="2m4.925108927s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:06.924743515 +0000 UTC m=+146.150184056" watchObservedRunningTime="2026-02-19 18:32:06.925108927 +0000 UTC m=+146.150549468" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.926669 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-l78xq" podStartSLOduration=124.926661356 podStartE2EDuration="2m4.926661356s" 
podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:06.890706047 +0000 UTC m=+146.116146588" watchObservedRunningTime="2026-02-19 18:32:06.926661356 +0000 UTC m=+146.152101897" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.927339 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hvwhh" event={"ID":"d3ba0b09-bcff-4819-b79a-f499049cf31c","Type":"ContainerStarted","Data":"9579b3f8216058f1f3527110605470ca006d934e90d300979ce3de060d585580"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.941881 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7" event={"ID":"87c78a23-ef77-4a1e-ba67-45f3a4adccfb","Type":"ContainerStarted","Data":"9cb0116aeb4f4c14673dde6207c39e7635ffbea97735628dfc0265aedd8ab707"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.941943 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7" event={"ID":"87c78a23-ef77-4a1e-ba67-45f3a4adccfb","Type":"ContainerStarted","Data":"c1d5c2e5b21b3deafab0a43510b2bf78a85404ae581991588f55c82e9bd85984"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.951785 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" event={"ID":"f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1","Type":"ContainerStarted","Data":"744dce93f74d06c37146e6885457a9d1031532f22ace003a8d7ed5674849f28a"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.952909 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.959216 4813 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-qqsjr" podStartSLOduration=124.959182117 podStartE2EDuration="2m4.959182117s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:06.956733035 +0000 UTC m=+146.182173566" watchObservedRunningTime="2026-02-19 18:32:06.959182117 +0000 UTC m=+146.184622658" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.960108 4813 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6s798 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.960157 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" podUID="f06a16f0-ed91-4d2e-bc89-36e2ac9b47e1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.961287 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" event={"ID":"9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f","Type":"ContainerStarted","Data":"af209cd38b560a9998a6e99babdc5b104e8ff8fec7faf17e7c6cadeae6f3619a"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.961323 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" event={"ID":"9c3d2384-facd-44d8-a9d5-7e1cd7bfad8f","Type":"ContainerStarted","Data":"3c569c6ee4c616e85b05669e6d6507db668c57e7990c71e9315dbd86ec7d27ba"} 
Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.976296 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" event={"ID":"f144bc83-c0eb-4627-ab68-9ed862e2f402","Type":"ContainerStarted","Data":"5b002ce3a09f14dbf28f2053ee60ed4e12c44fe285f85f1e0ec685f54ed65e4b"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.979335 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" event={"ID":"209ad545-4905-4130-baf8-1c3e9576e788","Type":"ContainerStarted","Data":"83befd11973be70a3369585c6085b7a5191ee3603874c9147ba923e94a3b44a2"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.980764 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" event={"ID":"36af60b0-5e67-4902-8891-adb18585316a","Type":"ContainerStarted","Data":"6bf6d71d6d7d014351ca336af82d30deebd7f037ca630356a25a4152f42b7a28"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.981854 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.990700 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q" podStartSLOduration=124.990665976 podStartE2EDuration="2m4.990665976s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:06.98813704 +0000 UTC m=+146.213577581" watchObservedRunningTime="2026-02-19 18:32:06.990665976 +0000 UTC m=+146.216106517" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.992852 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.993744 4813 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-ptfgm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.993799 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" podUID="36af60b0-5e67-4902-8891-adb18585316a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.993865 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c5246" event={"ID":"28579021-c721-4746-bed3-9b765dddbe11","Type":"ContainerStarted","Data":"44f8044b64f05be8e45b5a002b8142e00cedeb74889c06f17ffd19e7ca6d4e33"} Feb 19 18:32:06 crc kubenswrapper[4813]: I0219 18:32:06.993897 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c5246" event={"ID":"28579021-c721-4746-bed3-9b765dddbe11","Type":"ContainerStarted","Data":"c1305a42e4bf30234d75a2438eb158554d4bb49b889ca1f4507161c3bf050c30"} Feb 19 18:32:06 crc kubenswrapper[4813]: E0219 18:32:06.994369 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 18:32:07.49435267 +0000 UTC m=+146.719793211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.005896 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q" event={"ID":"ff5bff76-f1fb-442b-81db-2ccc58aa61ef","Type":"ContainerStarted","Data":"e6bd864f774b2b3ad256e125867ac85d463bc88534c2ee02d1ae7bfdbc9c6c8a"} Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.023820 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-5gzd8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.024033 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5gzd8" podUID="c705e274-b996-46b7-8e3f-97dff9107a6f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.027263 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.027937 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-crqgx" podStartSLOduration=125.02792191 podStartE2EDuration="2m5.02792191s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.024472711 +0000 UTC m=+146.249913252" watchObservedRunningTime="2026-02-19 18:32:07.02792191 +0000 UTC m=+146.253362451" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.054088 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rffpk" podStartSLOduration=126.054054601 podStartE2EDuration="2m6.054054601s" podCreationTimestamp="2026-02-19 18:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.050985714 +0000 UTC m=+146.276426265" watchObservedRunningTime="2026-02-19 18:32:07.054054601 +0000 UTC m=+146.279495162" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.094226 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:07 crc kubenswrapper[4813]: E0219 18:32:07.094351 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:07.594328639 +0000 UTC m=+146.819769180 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.094997 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:07 crc kubenswrapper[4813]: E0219 18:32:07.099530 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:07.59951767 +0000 UTC m=+146.824958211 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.125898 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" podStartSLOduration=126.125873353 podStartE2EDuration="2m6.125873353s" podCreationTimestamp="2026-02-19 18:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.120199405 +0000 UTC m=+146.345639946" watchObservedRunningTime="2026-02-19 18:32:07.125873353 +0000 UTC m=+146.351313894" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.127390 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8dvp5" podStartSLOduration=125.127382661 podStartE2EDuration="2m5.127382661s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.093994981 +0000 UTC m=+146.319435522" watchObservedRunningTime="2026-02-19 18:32:07.127382661 +0000 UTC m=+146.352823202" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.175479 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" podStartSLOduration=126.17546254 podStartE2EDuration="2m6.17546254s" podCreationTimestamp="2026-02-19 18:30:01 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.146867697 +0000 UTC m=+146.372308238" watchObservedRunningTime="2026-02-19 18:32:07.17546254 +0000 UTC m=+146.400903081" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.177251 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vkxmw" podStartSLOduration=125.177243333 podStartE2EDuration="2m5.177243333s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.175339353 +0000 UTC m=+146.400779894" watchObservedRunningTime="2026-02-19 18:32:07.177243333 +0000 UTC m=+146.402683874" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.195683 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:07 crc kubenswrapper[4813]: E0219 18:32:07.196071 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:07.696055821 +0000 UTC m=+146.921496362 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.225305 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-9fx4h" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.248901 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-kmwrq" podStartSLOduration=7.248884215 podStartE2EDuration="7.248884215s" podCreationTimestamp="2026-02-19 18:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.19552321 +0000 UTC m=+146.420963751" watchObservedRunningTime="2026-02-19 18:32:07.248884215 +0000 UTC m=+146.474324756" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.249581 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-7qgtc" podStartSLOduration=7.249577345 podStartE2EDuration="7.249577345s" podCreationTimestamp="2026-02-19 18:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.23722143 +0000 UTC m=+146.462661971" watchObservedRunningTime="2026-02-19 18:32:07.249577345 +0000 UTC m=+146.475017886" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.296849 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:07 crc kubenswrapper[4813]: E0219 18:32:07.297193 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:07.797182227 +0000 UTC m=+147.022622768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.304085 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" podStartSLOduration=125.304064936 podStartE2EDuration="2m5.304064936s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.28235989 +0000 UTC m=+146.507800431" watchObservedRunningTime="2026-02-19 18:32:07.304064936 +0000 UTC m=+146.529505487" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.304247 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2rvnw" podStartSLOduration=125.304242815 podStartE2EDuration="2m5.304242815s" 
podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.304037954 +0000 UTC m=+146.529478495" watchObservedRunningTime="2026-02-19 18:32:07.304242815 +0000 UTC m=+146.529683356" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.387448 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-29vhv" podStartSLOduration=125.387433155 podStartE2EDuration="2m5.387433155s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.384733329 +0000 UTC m=+146.610173870" watchObservedRunningTime="2026-02-19 18:32:07.387433155 +0000 UTC m=+146.612873696" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.397637 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:07 crc kubenswrapper[4813]: E0219 18:32:07.397826 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:07.897801414 +0000 UTC m=+147.123241955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.398056 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:07 crc kubenswrapper[4813]: E0219 18:32:07.398395 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:07.898381088 +0000 UTC m=+147.123821629 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.444438 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c5246" podStartSLOduration=125.444420379 podStartE2EDuration="2m5.444420379s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.415653227 +0000 UTC m=+146.641093768" watchObservedRunningTime="2026-02-19 18:32:07.444420379 +0000 UTC m=+146.669860920" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.446195 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-hvwhh" podStartSLOduration=7.446181892 podStartE2EDuration="7.446181892s" podCreationTimestamp="2026-02-19 18:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.443425942 +0000 UTC m=+146.668866483" watchObservedRunningTime="2026-02-19 18:32:07.446181892 +0000 UTC m=+146.671622433" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.465492 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7w8k7" podStartSLOduration=125.465476157 podStartE2EDuration="2m5.465476157s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.463003914 +0000 UTC m=+146.688444465" watchObservedRunningTime="2026-02-19 18:32:07.465476157 +0000 UTC m=+146.690916698" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.500373 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:07 crc kubenswrapper[4813]: E0219 18:32:07.500723 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:08.000708904 +0000 UTC m=+147.226149445 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.527550 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" podStartSLOduration=125.527531275 podStartE2EDuration="2m5.527531275s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.494306824 +0000 UTC m=+146.719747365" watchObservedRunningTime="2026-02-19 18:32:07.527531275 +0000 UTC m=+146.752971816" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.553520 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6zffl" podStartSLOduration=125.553503586 podStartE2EDuration="2m5.553503586s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.54821502 +0000 UTC m=+146.773655561" watchObservedRunningTime="2026-02-19 18:32:07.553503586 +0000 UTC m=+146.778944127" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.569926 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-j6hz9" podStartSLOduration=127.569908805 podStartE2EDuration="2m7.569908805s" podCreationTimestamp="2026-02-19 18:30:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.569169692 +0000 UTC m=+146.794610223" watchObservedRunningTime="2026-02-19 18:32:07.569908805 +0000 UTC m=+146.795349346" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.601838 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:07 crc kubenswrapper[4813]: E0219 18:32:07.602167 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:08.102155259 +0000 UTC m=+147.327595800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.623405 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.669428 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hkvpr" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.688491 4813 csr.go:261] certificate signing request csr-qqgtn is approved, waiting to be issued Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.703501 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:07 crc kubenswrapper[4813]: E0219 18:32:07.703877 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:08.203862279 +0000 UTC m=+147.429302810 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.704841 4813 csr.go:257] certificate signing request csr-qqgtn is issued Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.708498 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rcm5q" podStartSLOduration=125.708480376 podStartE2EDuration="2m5.708480376s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.6350463 +0000 UTC m=+146.860486841" watchObservedRunningTime="2026-02-19 18:32:07.708480376 +0000 UTC m=+146.933920917" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.756127 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" podStartSLOduration=125.756109449 podStartE2EDuration="2m5.756109449s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.754611643 +0000 UTC m=+146.980052184" watchObservedRunningTime="2026-02-19 18:32:07.756109449 +0000 UTC m=+146.981549990" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.790783 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-t4pqd" podStartSLOduration=125.790766683 podStartE2EDuration="2m5.790766683s" podCreationTimestamp="2026-02-19 18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:07.789118587 +0000 UTC m=+147.014559128" watchObservedRunningTime="2026-02-19 18:32:07.790766683 +0000 UTC m=+147.016207224" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.808041 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:07 crc kubenswrapper[4813]: E0219 18:32:07.808318 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:08.308306137 +0000 UTC m=+147.533746678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.887178 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:07 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:07 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:07 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.887255 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:07 crc kubenswrapper[4813]: I0219 18:32:07.910767 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:07 crc kubenswrapper[4813]: E0219 18:32:07.911343 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 18:32:08.411328243 +0000 UTC m=+147.636768784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.012625 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.013011 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:08.5129949 +0000 UTC m=+147.738435441 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.025492 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qqsjr" event={"ID":"8ff6065e-3f22-49b0-a4b9-2e33bdf923b1","Type":"ContainerStarted","Data":"fc2f559cb3c3bd4cf76baa6b66e118395259a3f9fa215adfefa877f93b836a18"} Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.027086 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" event={"ID":"871c6d8c-2897-4c2c-938e-9137fe5320af","Type":"ContainerStarted","Data":"a7447a25be70e20bee9d187f080f706f22bbe594e43c32b8c94bd66d0f612f33"} Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.034351 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-5gzd8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.034391 4813 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-h5w7j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.034405 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5gzd8" 
podUID="c705e274-b996-46b7-8e3f-97dff9107a6f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.034426 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" podUID="60220df2-7a87-42d6-8ff6-f4d2ddec5160" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.033666 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rksx7" event={"ID":"17f8fff7-991c-4a69-afb4-e40b048cde5c","Type":"ContainerStarted","Data":"b9edc47992d9661bdaec2933cf73d06f21050cf606a69cedfe5befc1a010dd5f"} Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.040198 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ptfgm" Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.062980 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6s798" Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.113340 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.113479 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-19 18:32:08.613442168 +0000 UTC m=+147.838882709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.114238 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.115820 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:08.615811695 +0000 UTC m=+147.841252236 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.133332 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.133528 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.151152 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-rksx7" podStartSLOduration=127.151134097 podStartE2EDuration="2m7.151134097s" podCreationTimestamp="2026-02-19 18:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:08.103927488 +0000 UTC m=+147.329368029" watchObservedRunningTime="2026-02-19 18:32:08.151134097 +0000 UTC m=+147.376574638" Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.224413 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.224743 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:08.724727722 +0000 UTC m=+147.950168263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.272355 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.272400 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.280220 4813 patch_prober.go:28] interesting pod/apiserver-76f77b778f-rksx7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.280276 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-rksx7" podUID="17f8fff7-991c-4a69-afb4-e40b048cde5c" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.325736 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.326152 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:08.826136444 +0000 UTC m=+148.051576985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.426513 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.426652 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:08.926622634 +0000 UTC m=+148.152063175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.426945 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.427313 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:08.927305323 +0000 UTC m=+148.152745864 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.527810 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.528042 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.028019126 +0000 UTC m=+148.253459667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.528266 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.528587 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.028572098 +0000 UTC m=+148.254012639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.629010 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.629200 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.129175194 +0000 UTC m=+148.354615735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.629327 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.629635 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.12962335 +0000 UTC m=+148.355063891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.706212 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-19 18:27:07 +0000 UTC, rotation deadline is 2026-11-09 03:11:46.695272475 +0000 UTC Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.706255 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6296h39m37.989023915s for next certificate rotation Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.730789 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.731011 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.230995451 +0000 UTC m=+148.456435992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.731086 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.731475 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.231457218 +0000 UTC m=+148.456897749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.831977 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.832178 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.332148339 +0000 UTC m=+148.557588880 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.832318 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.832624 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.332611726 +0000 UTC m=+148.558052267 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.874360 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:08 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:08 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:08 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.874598 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.933521 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.933694 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 18:32:09.433668478 +0000 UTC m=+148.659109009 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.933847 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:08 crc kubenswrapper[4813]: E0219 18:32:08.934176 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.434169157 +0000 UTC m=+148.659609698 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:08 crc kubenswrapper[4813]: I0219 18:32:08.996618 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.037508 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:09 crc kubenswrapper[4813]: E0219 18:32:09.037884 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.537856161 +0000 UTC m=+148.763296702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.048396 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" event={"ID":"871c6d8c-2897-4c2c-938e-9137fe5320af","Type":"ContainerStarted","Data":"8a78ce7bfa410e9d96004591174b1609a4b0ad22e99a4593eca4bad30b6a560d"} Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.048430 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" event={"ID":"871c6d8c-2897-4c2c-938e-9137fe5320af","Type":"ContainerStarted","Data":"f446d4149308ca44a1233ee516ffd3a53f6fe9f90052c8c89aed036121452037"} Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.092176 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cvwkx" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.138453 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:09 crc kubenswrapper[4813]: E0219 18:32:09.141119 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 18:32:09.64110767 +0000 UTC m=+148.866548211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.239521 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:09 crc kubenswrapper[4813]: E0219 18:32:09.239834 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.739820238 +0000 UTC m=+148.965260779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.341216 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:09 crc kubenswrapper[4813]: E0219 18:32:09.341660 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.841649855 +0000 UTC m=+149.067090396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.442845 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:09 crc kubenswrapper[4813]: E0219 18:32:09.443260 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:09.943246408 +0000 UTC m=+149.168686949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.493884 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwn2x" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.544168 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.544235 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.544288 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:32:09 crc 
kubenswrapper[4813]: I0219 18:32:09.544314 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.544339 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:32:09 crc kubenswrapper[4813]: E0219 18:32:09.544547 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:10.044530794 +0000 UTC m=+149.269971335 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.545559 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.551054 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.553016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.557518 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.635393 4813 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.635457 4813 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T18:32:09.635424009Z","Handler":null,"Name":""} Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.645775 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:09 crc kubenswrapper[4813]: E0219 18:32:09.645968 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 18:32:10.145929646 +0000 UTC m=+149.371370187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.646050 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:09 crc kubenswrapper[4813]: E0219 18:32:09.646347 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 18:32:10.14633988 +0000 UTC m=+149.371780421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd28k" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.663719 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x7z6q"] Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.664597 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7z6q" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.668187 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.669391 4813 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.669419 4813 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.680643 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7z6q"] Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.746779 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.747041 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-catalog-content\") pod \"community-operators-x7z6q\" (UID: \"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81\") " pod="openshift-marketplace/community-operators-x7z6q" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.747085 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v2nb\" (UniqueName: \"kubernetes.io/projected/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-kube-api-access-4v2nb\") pod \"community-operators-x7z6q\" (UID: \"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81\") " pod="openshift-marketplace/community-operators-x7z6q" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.747116 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-utilities\") pod \"community-operators-x7z6q\" (UID: \"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81\") " pod="openshift-marketplace/community-operators-x7z6q" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.780162 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.787608 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.795460 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.806450 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.845125 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5q7bs"] Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.846003 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.851622 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.851784 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v2nb\" (UniqueName: \"kubernetes.io/projected/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-kube-api-access-4v2nb\") pod \"community-operators-x7z6q\" (UID: \"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81\") " pod="openshift-marketplace/community-operators-x7z6q" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.851831 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-utilities\") pod \"community-operators-x7z6q\" (UID: \"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81\") " pod="openshift-marketplace/community-operators-x7z6q" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.851872 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.851919 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-catalog-content\") pod \"community-operators-x7z6q\" (UID: \"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81\") " pod="openshift-marketplace/community-operators-x7z6q" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.852397 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-catalog-content\") pod \"community-operators-x7z6q\" (UID: \"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81\") " pod="openshift-marketplace/community-operators-x7z6q" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.852540 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-utilities\") pod \"community-operators-x7z6q\" (UID: \"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81\") " pod="openshift-marketplace/community-operators-x7z6q" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.860929 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.860985 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.875144 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:09 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:09 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:09 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.875646 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.883390 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5q7bs"] Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.895913 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v2nb\" (UniqueName: \"kubernetes.io/projected/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-kube-api-access-4v2nb\") pod \"community-operators-x7z6q\" (UID: \"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81\") " 
pod="openshift-marketplace/community-operators-x7z6q" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.952812 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/064d4c4c-ff7e-4368-817e-0234266467b5-utilities\") pod \"certified-operators-5q7bs\" (UID: \"064d4c4c-ff7e-4368-817e-0234266467b5\") " pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.952853 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/064d4c4c-ff7e-4368-817e-0234266467b5-catalog-content\") pod \"certified-operators-5q7bs\" (UID: \"064d4c4c-ff7e-4368-817e-0234266467b5\") " pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.952883 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl76p\" (UniqueName: \"kubernetes.io/projected/064d4c4c-ff7e-4368-817e-0234266467b5-kube-api-access-zl76p\") pod \"certified-operators-5q7bs\" (UID: \"064d4c4c-ff7e-4368-817e-0234266467b5\") " pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.966300 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd28k\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:09 crc kubenswrapper[4813]: I0219 18:32:09.977130 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7z6q" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.064095 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/064d4c4c-ff7e-4368-817e-0234266467b5-utilities\") pod \"certified-operators-5q7bs\" (UID: \"064d4c4c-ff7e-4368-817e-0234266467b5\") " pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.064140 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/064d4c4c-ff7e-4368-817e-0234266467b5-catalog-content\") pod \"certified-operators-5q7bs\" (UID: \"064d4c4c-ff7e-4368-817e-0234266467b5\") " pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.064190 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl76p\" (UniqueName: \"kubernetes.io/projected/064d4c4c-ff7e-4368-817e-0234266467b5-kube-api-access-zl76p\") pod \"certified-operators-5q7bs\" (UID: \"064d4c4c-ff7e-4368-817e-0234266467b5\") " pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.073465 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/064d4c4c-ff7e-4368-817e-0234266467b5-utilities\") pod \"certified-operators-5q7bs\" (UID: \"064d4c4c-ff7e-4368-817e-0234266467b5\") " pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.074249 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/064d4c4c-ff7e-4368-817e-0234266467b5-catalog-content\") pod \"certified-operators-5q7bs\" (UID: \"064d4c4c-ff7e-4368-817e-0234266467b5\") " 
pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.124393 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xg84q"] Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.125423 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.134145 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xg84q"] Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.141271 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.160409 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" event={"ID":"871c6d8c-2897-4c2c-938e-9137fe5320af","Type":"ContainerStarted","Data":"3fc1916b64d050293040c13060c569ed1e9048a15aa8b3c8085d3a62bc363040"} Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.166012 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91d14b93-63e4-4673-9a84-61fba91ac9ec-utilities\") pod \"community-operators-xg84q\" (UID: \"91d14b93-63e4-4673-9a84-61fba91ac9ec\") " pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.166058 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdc4d\" (UniqueName: \"kubernetes.io/projected/91d14b93-63e4-4673-9a84-61fba91ac9ec-kube-api-access-rdc4d\") pod \"community-operators-xg84q\" (UID: \"91d14b93-63e4-4673-9a84-61fba91ac9ec\") " pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 
18:32:10.166100 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91d14b93-63e4-4673-9a84-61fba91ac9ec-catalog-content\") pod \"community-operators-xg84q\" (UID: \"91d14b93-63e4-4673-9a84-61fba91ac9ec\") " pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.175101 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl76p\" (UniqueName: \"kubernetes.io/projected/064d4c4c-ff7e-4368-817e-0234266467b5-kube-api-access-zl76p\") pod \"certified-operators-5q7bs\" (UID: \"064d4c4c-ff7e-4368-817e-0234266467b5\") " pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.186293 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.260059 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-ppc5s" podStartSLOduration=10.26003989 podStartE2EDuration="10.26003989s" podCreationTimestamp="2026-02-19 18:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:10.244506652 +0000 UTC m=+149.469947193" watchObservedRunningTime="2026-02-19 18:32:10.26003989 +0000 UTC m=+149.485480431" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.262270 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m8sg9"] Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.263181 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.268597 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdc4d\" (UniqueName: \"kubernetes.io/projected/91d14b93-63e4-4673-9a84-61fba91ac9ec-kube-api-access-rdc4d\") pod \"community-operators-xg84q\" (UID: \"91d14b93-63e4-4673-9a84-61fba91ac9ec\") " pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.268671 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91d14b93-63e4-4673-9a84-61fba91ac9ec-catalog-content\") pod \"community-operators-xg84q\" (UID: \"91d14b93-63e4-4673-9a84-61fba91ac9ec\") " pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.268863 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91d14b93-63e4-4673-9a84-61fba91ac9ec-utilities\") pod \"community-operators-xg84q\" (UID: \"91d14b93-63e4-4673-9a84-61fba91ac9ec\") " pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.269331 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91d14b93-63e4-4673-9a84-61fba91ac9ec-utilities\") pod \"community-operators-xg84q\" (UID: \"91d14b93-63e4-4673-9a84-61fba91ac9ec\") " pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.272093 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91d14b93-63e4-4673-9a84-61fba91ac9ec-catalog-content\") pod \"community-operators-xg84q\" (UID: \"91d14b93-63e4-4673-9a84-61fba91ac9ec\") " 
pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.287952 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m8sg9"] Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.306644 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdc4d\" (UniqueName: \"kubernetes.io/projected/91d14b93-63e4-4673-9a84-61fba91ac9ec-kube-api-access-rdc4d\") pod \"community-operators-xg84q\" (UID: \"91d14b93-63e4-4673-9a84-61fba91ac9ec\") " pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.373037 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c467423-e5c4-4329-9a0c-c68058d30c91-catalog-content\") pod \"certified-operators-m8sg9\" (UID: \"7c467423-e5c4-4329-9a0c-c68058d30c91\") " pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.373099 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c467423-e5c4-4329-9a0c-c68058d30c91-utilities\") pod \"certified-operators-m8sg9\" (UID: \"7c467423-e5c4-4329-9a0c-c68058d30c91\") " pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.373155 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5pdf\" (UniqueName: \"kubernetes.io/projected/7c467423-e5c4-4329-9a0c-c68058d30c91-kube-api-access-v5pdf\") pod \"certified-operators-m8sg9\" (UID: \"7c467423-e5c4-4329-9a0c-c68058d30c91\") " pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.397779 4813 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.398454 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.409184 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.409508 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.416345 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.476535 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c467423-e5c4-4329-9a0c-c68058d30c91-catalog-content\") pod \"certified-operators-m8sg9\" (UID: \"7c467423-e5c4-4329-9a0c-c68058d30c91\") " pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.476591 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c467423-e5c4-4329-9a0c-c68058d30c91-utilities\") pod \"certified-operators-m8sg9\" (UID: \"7c467423-e5c4-4329-9a0c-c68058d30c91\") " pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.476632 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc7c86cd-f30f-42e5-aa8c-55f65bea7db8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fc7c86cd-f30f-42e5-aa8c-55f65bea7db8\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.476653 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc7c86cd-f30f-42e5-aa8c-55f65bea7db8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fc7c86cd-f30f-42e5-aa8c-55f65bea7db8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.476679 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5pdf\" (UniqueName: \"kubernetes.io/projected/7c467423-e5c4-4329-9a0c-c68058d30c91-kube-api-access-v5pdf\") pod \"certified-operators-m8sg9\" (UID: \"7c467423-e5c4-4329-9a0c-c68058d30c91\") " pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.482263 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c467423-e5c4-4329-9a0c-c68058d30c91-utilities\") pod \"certified-operators-m8sg9\" (UID: \"7c467423-e5c4-4329-9a0c-c68058d30c91\") " pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.483299 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c467423-e5c4-4329-9a0c-c68058d30c91-catalog-content\") pod \"certified-operators-m8sg9\" (UID: \"7c467423-e5c4-4329-9a0c-c68058d30c91\") " pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.506799 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5pdf\" (UniqueName: \"kubernetes.io/projected/7c467423-e5c4-4329-9a0c-c68058d30c91-kube-api-access-v5pdf\") pod \"certified-operators-m8sg9\" (UID: 
\"7c467423-e5c4-4329-9a0c-c68058d30c91\") " pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.529604 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.582612 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc7c86cd-f30f-42e5-aa8c-55f65bea7db8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fc7c86cd-f30f-42e5-aa8c-55f65bea7db8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.583135 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc7c86cd-f30f-42e5-aa8c-55f65bea7db8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fc7c86cd-f30f-42e5-aa8c-55f65bea7db8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.583209 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc7c86cd-f30f-42e5-aa8c-55f65bea7db8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fc7c86cd-f30f-42e5-aa8c-55f65bea7db8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.623119 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc7c86cd-f30f-42e5-aa8c-55f65bea7db8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fc7c86cd-f30f-42e5-aa8c-55f65bea7db8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.630244 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.766851 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.874508 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:10 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:10 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:10 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:10 crc kubenswrapper[4813]: I0219 18:32:10.875814 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.058169 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.117414 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7z6q"] Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.142163 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bd28k"] Feb 19 18:32:11 crc kubenswrapper[4813]: W0219 18:32:11.142489 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a3d0dbc_cf99_4d9f_8ab2_a7f480f65d81.slice/crio-68f4511b0ced185d01c7cff6df9f7cc2a38eb7aba40c526d8f7352cee05fa3a9 WatchSource:0}: Error finding container 
68f4511b0ced185d01c7cff6df9f7cc2a38eb7aba40c526d8f7352cee05fa3a9: Status 404 returned error can't find the container with id 68f4511b0ced185d01c7cff6df9f7cc2a38eb7aba40c526d8f7352cee05fa3a9 Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.155710 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5q7bs"] Feb 19 18:32:11 crc kubenswrapper[4813]: W0219 18:32:11.176730 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod064d4c4c_ff7e_4368_817e_0234266467b5.slice/crio-9df8da8d0415337cfc57deaf42bad55c65ca33182467fe323a4f4e76a128c744 WatchSource:0}: Error finding container 9df8da8d0415337cfc57deaf42bad55c65ca33182467fe323a4f4e76a128c744: Status 404 returned error can't find the container with id 9df8da8d0415337cfc57deaf42bad55c65ca33182467fe323a4f4e76a128c744 Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.180827 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" event={"ID":"5a165501-6ea2-4d63-9479-6ac760c6b116","Type":"ContainerStarted","Data":"9a45c01e06e4c513a7379aa5a226ab357bc379ea608c1fa594eaad17d8b21bc9"} Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.185585 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a7e73fbd838716452760ebeb6b0d371351e9da2208df699cac1a049a5fbf28ab"} Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.187307 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"26c09909e093f8557732fa4d3d0fb34d5c98be4c839e4604bb30d05ea940db3c"} Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.187348 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"77ea405e4da2eb74cbeb7a5889724e4f3c5da67c6723034b7a6dc5c1a9ef4e19"} Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.189865 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7z6q" event={"ID":"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81","Type":"ContainerStarted","Data":"68f4511b0ced185d01c7cff6df9f7cc2a38eb7aba40c526d8f7352cee05fa3a9"} Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.199440 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fc7c86cd-f30f-42e5-aa8c-55f65bea7db8","Type":"ContainerStarted","Data":"1d475b80b47f8142c8d7c3f01f3fab46a2c815d9e22be5d6a39ed12ce4185428"} Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.219439 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xg84q"] Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.235511 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"61aba3dae5c901fc1e4cd148d1985de9e0e63d2f12cce260d8ee692d86a6af1d"} Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.235557 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c7a2098d75142873446c275e4d20913cc2516879db83d4e0efd9bfc572f950ae"} Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.240476 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m8sg9"] Feb 19 18:32:11 crc kubenswrapper[4813]: W0219 
18:32:11.263153 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91d14b93_63e4_4673_9a84_61fba91ac9ec.slice/crio-7e028ba200e25f05cd8ee1bc41b98c9c03d6e8ac0e45d3c7b57fa4bf0a458876 WatchSource:0}: Error finding container 7e028ba200e25f05cd8ee1bc41b98c9c03d6e8ac0e45d3c7b57fa4bf0a458876: Status 404 returned error can't find the container with id 7e028ba200e25f05cd8ee1bc41b98c9c03d6e8ac0e45d3c7b57fa4bf0a458876 Feb 19 18:32:11 crc kubenswrapper[4813]: W0219 18:32:11.265310 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c467423_e5c4_4329_9a0c_c68058d30c91.slice/crio-e980d3f47dc6812d641d375a6296f11934bef1a26afa2b4a7b67ba48a3a10a06 WatchSource:0}: Error finding container e980d3f47dc6812d641d375a6296f11934bef1a26afa2b4a7b67ba48a3a10a06: Status 404 returned error can't find the container with id e980d3f47dc6812d641d375a6296f11934bef1a26afa2b4a7b67ba48a3a10a06 Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.482001 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.644910 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p2xbk"] Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.646478 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.654537 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.654620 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2xbk"] Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.710499 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ded39f-d187-46dd-a014-517d0b291e9d-utilities\") pod \"redhat-marketplace-p2xbk\" (UID: \"a5ded39f-d187-46dd-a014-517d0b291e9d\") " pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.710540 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ded39f-d187-46dd-a014-517d0b291e9d-catalog-content\") pod \"redhat-marketplace-p2xbk\" (UID: \"a5ded39f-d187-46dd-a014-517d0b291e9d\") " pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.710569 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wx22\" (UniqueName: \"kubernetes.io/projected/a5ded39f-d187-46dd-a014-517d0b291e9d-kube-api-access-9wx22\") pod \"redhat-marketplace-p2xbk\" (UID: \"a5ded39f-d187-46dd-a014-517d0b291e9d\") " pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.811496 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wx22\" (UniqueName: \"kubernetes.io/projected/a5ded39f-d187-46dd-a014-517d0b291e9d-kube-api-access-9wx22\") pod \"redhat-marketplace-p2xbk\" (UID: 
\"a5ded39f-d187-46dd-a014-517d0b291e9d\") " pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.811635 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ded39f-d187-46dd-a014-517d0b291e9d-utilities\") pod \"redhat-marketplace-p2xbk\" (UID: \"a5ded39f-d187-46dd-a014-517d0b291e9d\") " pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.811663 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ded39f-d187-46dd-a014-517d0b291e9d-catalog-content\") pod \"redhat-marketplace-p2xbk\" (UID: \"a5ded39f-d187-46dd-a014-517d0b291e9d\") " pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.812419 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ded39f-d187-46dd-a014-517d0b291e9d-catalog-content\") pod \"redhat-marketplace-p2xbk\" (UID: \"a5ded39f-d187-46dd-a014-517d0b291e9d\") " pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.812473 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ded39f-d187-46dd-a014-517d0b291e9d-utilities\") pod \"redhat-marketplace-p2xbk\" (UID: \"a5ded39f-d187-46dd-a014-517d0b291e9d\") " pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.833634 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wx22\" (UniqueName: \"kubernetes.io/projected/a5ded39f-d187-46dd-a014-517d0b291e9d-kube-api-access-9wx22\") pod \"redhat-marketplace-p2xbk\" (UID: \"a5ded39f-d187-46dd-a014-517d0b291e9d\") " 
pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.877471 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:11 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:11 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:11 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:11 crc kubenswrapper[4813]: I0219 18:32:11.877544 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.032916 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.047547 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xvhdg"] Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.048910 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.065904 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvhdg"] Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.118147 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksppk\" (UniqueName: \"kubernetes.io/projected/0d0adcf5-5570-499d-b626-fdf0bb785e79-kube-api-access-ksppk\") pod \"redhat-marketplace-xvhdg\" (UID: \"0d0adcf5-5570-499d-b626-fdf0bb785e79\") " pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.118310 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0adcf5-5570-499d-b626-fdf0bb785e79-catalog-content\") pod \"redhat-marketplace-xvhdg\" (UID: \"0d0adcf5-5570-499d-b626-fdf0bb785e79\") " pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.118374 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0adcf5-5570-499d-b626-fdf0bb785e79-utilities\") pod \"redhat-marketplace-xvhdg\" (UID: \"0d0adcf5-5570-499d-b626-fdf0bb785e79\") " pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.219497 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksppk\" (UniqueName: \"kubernetes.io/projected/0d0adcf5-5570-499d-b626-fdf0bb785e79-kube-api-access-ksppk\") pod \"redhat-marketplace-xvhdg\" (UID: \"0d0adcf5-5570-499d-b626-fdf0bb785e79\") " pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.219549 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0adcf5-5570-499d-b626-fdf0bb785e79-catalog-content\") pod \"redhat-marketplace-xvhdg\" (UID: \"0d0adcf5-5570-499d-b626-fdf0bb785e79\") " pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.219637 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0adcf5-5570-499d-b626-fdf0bb785e79-utilities\") pod \"redhat-marketplace-xvhdg\" (UID: \"0d0adcf5-5570-499d-b626-fdf0bb785e79\") " pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.220180 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0adcf5-5570-499d-b626-fdf0bb785e79-utilities\") pod \"redhat-marketplace-xvhdg\" (UID: \"0d0adcf5-5570-499d-b626-fdf0bb785e79\") " pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.221803 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0adcf5-5570-499d-b626-fdf0bb785e79-catalog-content\") pod \"redhat-marketplace-xvhdg\" (UID: \"0d0adcf5-5570-499d-b626-fdf0bb785e79\") " pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.237990 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksppk\" (UniqueName: \"kubernetes.io/projected/0d0adcf5-5570-499d-b626-fdf0bb785e79-kube-api-access-ksppk\") pod \"redhat-marketplace-xvhdg\" (UID: \"0d0adcf5-5570-499d-b626-fdf0bb785e79\") " pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.248940 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" event={"ID":"5a165501-6ea2-4d63-9479-6ac760c6b116","Type":"ContainerStarted","Data":"2bde0c15092dd4c9242b415de83ef8ec6fbae00fb3f01395ccb54362e16ca0b6"} Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.249543 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.250712 4813 generic.go:334] "Generic (PLEG): container finished" podID="064d4c4c-ff7e-4368-817e-0234266467b5" containerID="c8d5e36b284e59d65ea3e9a53efca07e0b21d7aab02930e7f1422f727a42c476" exitCode=0 Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.250749 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7bs" event={"ID":"064d4c4c-ff7e-4368-817e-0234266467b5","Type":"ContainerDied","Data":"c8d5e36b284e59d65ea3e9a53efca07e0b21d7aab02930e7f1422f727a42c476"} Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.250765 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7bs" event={"ID":"064d4c4c-ff7e-4368-817e-0234266467b5","Type":"ContainerStarted","Data":"9df8da8d0415337cfc57deaf42bad55c65ca33182467fe323a4f4e76a128c744"} Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.252620 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.253560 4813 generic.go:334] "Generic (PLEG): container finished" podID="e63bf777-ed22-4aae-942e-74b8613ca4ce" containerID="90cc49563f9a6346069d787c35781a06ed6619b0e0ec51277e1ac74f238b3c08" exitCode=0 Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.253604 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" 
event={"ID":"e63bf777-ed22-4aae-942e-74b8613ca4ce","Type":"ContainerDied","Data":"90cc49563f9a6346069d787c35781a06ed6619b0e0ec51277e1ac74f238b3c08"} Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.255736 4813 generic.go:334] "Generic (PLEG): container finished" podID="91d14b93-63e4-4673-9a84-61fba91ac9ec" containerID="9447edc86f14a7f0eb161fefebf0efef2bc6879754609714423729edff0778e9" exitCode=0 Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.255799 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xg84q" event={"ID":"91d14b93-63e4-4673-9a84-61fba91ac9ec","Type":"ContainerDied","Data":"9447edc86f14a7f0eb161fefebf0efef2bc6879754609714423729edff0778e9"} Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.255851 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xg84q" event={"ID":"91d14b93-63e4-4673-9a84-61fba91ac9ec","Type":"ContainerStarted","Data":"7e028ba200e25f05cd8ee1bc41b98c9c03d6e8ac0e45d3c7b57fa4bf0a458876"} Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.263190 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d17d43a0c28e5473e78effd6b61bab46301c32347a155a42fdceb7e740682706"} Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.263791 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.265607 4813 generic.go:334] "Generic (PLEG): container finished" podID="9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" containerID="28c3d542a861de2e3d3463277db89dd35e7d038e4463780cad48346698b1ae41" exitCode=0 Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.265653 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-x7z6q" event={"ID":"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81","Type":"ContainerDied","Data":"28c3d542a861de2e3d3463277db89dd35e7d038e4463780cad48346698b1ae41"} Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.267909 4813 generic.go:334] "Generic (PLEG): container finished" podID="fc7c86cd-f30f-42e5-aa8c-55f65bea7db8" containerID="bf82591d49fcdf8d72efd8a3a4a80559df25339bc455ab6ba43586c92a89ee8a" exitCode=0 Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.268068 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fc7c86cd-f30f-42e5-aa8c-55f65bea7db8","Type":"ContainerDied","Data":"bf82591d49fcdf8d72efd8a3a4a80559df25339bc455ab6ba43586c92a89ee8a"} Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.270891 4813 generic.go:334] "Generic (PLEG): container finished" podID="7c467423-e5c4-4329-9a0c-c68058d30c91" containerID="9b35b7ddc08fe74f6377317a31e9a8e1fce56e1c014f9cb3bbc682cbd54bb01e" exitCode=0 Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.270942 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m8sg9" event={"ID":"7c467423-e5c4-4329-9a0c-c68058d30c91","Type":"ContainerDied","Data":"9b35b7ddc08fe74f6377317a31e9a8e1fce56e1c014f9cb3bbc682cbd54bb01e"} Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.270994 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m8sg9" event={"ID":"7c467423-e5c4-4329-9a0c-c68058d30c91","Type":"ContainerStarted","Data":"e980d3f47dc6812d641d375a6296f11934bef1a26afa2b4a7b67ba48a3a10a06"} Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.276224 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" podStartSLOduration=130.276204692 podStartE2EDuration="2m10.276204692s" podCreationTimestamp="2026-02-19 
18:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:12.274397447 +0000 UTC m=+151.499837998" watchObservedRunningTime="2026-02-19 18:32:12.276204692 +0000 UTC m=+151.501645233" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.296017 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.296898 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.299231 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.299392 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.301136 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.400205 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.425639 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2673311-de9e-4dd0-88be-eb4887afc965-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b2673311-de9e-4dd0-88be-eb4887afc965\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.425687 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2673311-de9e-4dd0-88be-eb4887afc965-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b2673311-de9e-4dd0-88be-eb4887afc965\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.435205 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2xbk"] Feb 19 18:32:12 crc kubenswrapper[4813]: W0219 18:32:12.442996 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5ded39f_d187_46dd_a014_517d0b291e9d.slice/crio-de58d87e61e41d8e49d2a99e597c166812ef701cfbcd34569f4608f6b09157dd WatchSource:0}: Error finding container de58d87e61e41d8e49d2a99e597c166812ef701cfbcd34569f4608f6b09157dd: Status 404 returned error can't find the container with id de58d87e61e41d8e49d2a99e597c166812ef701cfbcd34569f4608f6b09157dd Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.529156 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2673311-de9e-4dd0-88be-eb4887afc965-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b2673311-de9e-4dd0-88be-eb4887afc965\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 
18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.529216 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2673311-de9e-4dd0-88be-eb4887afc965-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b2673311-de9e-4dd0-88be-eb4887afc965\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.529559 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2673311-de9e-4dd0-88be-eb4887afc965-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b2673311-de9e-4dd0-88be-eb4887afc965\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.556310 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2673311-de9e-4dd0-88be-eb4887afc965-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b2673311-de9e-4dd0-88be-eb4887afc965\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.624339 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.641747 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvhdg"] Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.848099 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.875296 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:12 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:12 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:12 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:12 crc kubenswrapper[4813]: I0219 18:32:12.875387 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.044257 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z8tl9"] Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.075873 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8tl9"] Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.076060 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.078394 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.136065 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d418b66-03a6-4b99-975a-1c980c62e680-catalog-content\") pod \"redhat-operators-z8tl9\" (UID: \"2d418b66-03a6-4b99-975a-1c980c62e680\") " pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.136114 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5jt5\" (UniqueName: \"kubernetes.io/projected/2d418b66-03a6-4b99-975a-1c980c62e680-kube-api-access-h5jt5\") pod \"redhat-operators-z8tl9\" (UID: \"2d418b66-03a6-4b99-975a-1c980c62e680\") " pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.136135 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d418b66-03a6-4b99-975a-1c980c62e680-utilities\") pod \"redhat-operators-z8tl9\" (UID: \"2d418b66-03a6-4b99-975a-1c980c62e680\") " pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.141126 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.141358 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.148593 4813 patch_prober.go:28] interesting pod/console-f9d7485db-dqj4z 
container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.148643 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dqj4z" podUID="9c2745dc-3e5e-4571-ba36-aa14c87c6336" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.222120 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-5gzd8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.222179 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5gzd8" podUID="c705e274-b996-46b7-8e3f-97dff9107a6f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.222571 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-5gzd8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.222633 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5gzd8" podUID="c705e274-b996-46b7-8e3f-97dff9107a6f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 19 18:32:13 crc kubenswrapper[4813]: 
I0219 18:32:13.237081 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5jt5\" (UniqueName: \"kubernetes.io/projected/2d418b66-03a6-4b99-975a-1c980c62e680-kube-api-access-h5jt5\") pod \"redhat-operators-z8tl9\" (UID: \"2d418b66-03a6-4b99-975a-1c980c62e680\") " pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.238068 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d418b66-03a6-4b99-975a-1c980c62e680-utilities\") pod \"redhat-operators-z8tl9\" (UID: \"2d418b66-03a6-4b99-975a-1c980c62e680\") " pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.238605 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d418b66-03a6-4b99-975a-1c980c62e680-utilities\") pod \"redhat-operators-z8tl9\" (UID: \"2d418b66-03a6-4b99-975a-1c980c62e680\") " pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.239296 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d418b66-03a6-4b99-975a-1c980c62e680-catalog-content\") pod \"redhat-operators-z8tl9\" (UID: \"2d418b66-03a6-4b99-975a-1c980c62e680\") " pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.239502 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d418b66-03a6-4b99-975a-1c980c62e680-catalog-content\") pod \"redhat-operators-z8tl9\" (UID: \"2d418b66-03a6-4b99-975a-1c980c62e680\") " pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.274949 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h5jt5\" (UniqueName: \"kubernetes.io/projected/2d418b66-03a6-4b99-975a-1c980c62e680-kube-api-access-h5jt5\") pod \"redhat-operators-z8tl9\" (UID: \"2d418b66-03a6-4b99-975a-1c980c62e680\") " pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.292882 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.295928 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b2673311-de9e-4dd0-88be-eb4887afc965","Type":"ContainerStarted","Data":"72eccfa43df2fecb2afc749640fa29a1ea1713b2d995c7b042589e6e06e014a3"} Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.296105 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b2673311-de9e-4dd0-88be-eb4887afc965","Type":"ContainerStarted","Data":"d8afbe0fda6f640b311423bd7de290f882fd894ee5ea07df2845a2949ec669df"} Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.298200 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-rksx7" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.306863 4813 generic.go:334] "Generic (PLEG): container finished" podID="a5ded39f-d187-46dd-a014-517d0b291e9d" containerID="21d278fb4154e9b59bcb1c896de2efcaece071f7dc40e7f2496dd55dff7a2045" exitCode=0 Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.307544 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2xbk" event={"ID":"a5ded39f-d187-46dd-a014-517d0b291e9d","Type":"ContainerDied","Data":"21d278fb4154e9b59bcb1c896de2efcaece071f7dc40e7f2496dd55dff7a2045"} Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.307566 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2xbk" event={"ID":"a5ded39f-d187-46dd-a014-517d0b291e9d","Type":"ContainerStarted","Data":"de58d87e61e41d8e49d2a99e597c166812ef701cfbcd34569f4608f6b09157dd"} Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.392138 4813 generic.go:334] "Generic (PLEG): container finished" podID="0d0adcf5-5570-499d-b626-fdf0bb785e79" containerID="a1c8f480da418c53149cc193c21153f497f898b03c88182178c9d2299949ffba" exitCode=0 Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.393361 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvhdg" event={"ID":"0d0adcf5-5570-499d-b626-fdf0bb785e79","Type":"ContainerDied","Data":"a1c8f480da418c53149cc193c21153f497f898b03c88182178c9d2299949ffba"} Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.393384 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvhdg" event={"ID":"0d0adcf5-5570-499d-b626-fdf0bb785e79","Type":"ContainerStarted","Data":"6b4487251b9523df77a8626288ad273389735f38b246ae8cf33241d39f6f1ebe"} Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.446016 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.449386 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=1.449349794 podStartE2EDuration="1.449349794s" podCreationTimestamp="2026-02-19 18:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:13.419908903 +0000 UTC m=+152.645349444" watchObservedRunningTime="2026-02-19 18:32:13.449349794 +0000 UTC m=+152.674790335" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.455787 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2zq8l"] Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.460569 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.523457 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zq8l"] Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.552337 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-utilities\") pod \"redhat-operators-2zq8l\" (UID: \"ebe695cb-96c7-4c92-8fb9-62e84ccf225d\") " pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.552470 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-catalog-content\") pod \"redhat-operators-2zq8l\" (UID: \"ebe695cb-96c7-4c92-8fb9-62e84ccf225d\") " pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 
18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.552536 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bd7b\" (UniqueName: \"kubernetes.io/projected/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-kube-api-access-7bd7b\") pod \"redhat-operators-2zq8l\" (UID: \"ebe695cb-96c7-4c92-8fb9-62e84ccf225d\") " pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.660934 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bd7b\" (UniqueName: \"kubernetes.io/projected/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-kube-api-access-7bd7b\") pod \"redhat-operators-2zq8l\" (UID: \"ebe695cb-96c7-4c92-8fb9-62e84ccf225d\") " pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.661068 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-utilities\") pod \"redhat-operators-2zq8l\" (UID: \"ebe695cb-96c7-4c92-8fb9-62e84ccf225d\") " pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.661959 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-utilities\") pod \"redhat-operators-2zq8l\" (UID: \"ebe695cb-96c7-4c92-8fb9-62e84ccf225d\") " pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.662294 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-catalog-content\") pod \"redhat-operators-2zq8l\" (UID: \"ebe695cb-96c7-4c92-8fb9-62e84ccf225d\") " pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 
18:32:13.663421 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-catalog-content\") pod \"redhat-operators-2zq8l\" (UID: \"ebe695cb-96c7-4c92-8fb9-62e84ccf225d\") " pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.686987 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bd7b\" (UniqueName: \"kubernetes.io/projected/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-kube-api-access-7bd7b\") pod \"redhat-operators-2zq8l\" (UID: \"ebe695cb-96c7-4c92-8fb9-62e84ccf225d\") " pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.829758 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.844558 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.872101 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.876289 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:13 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:13 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:13 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.876337 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.944783 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.970186 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e63bf777-ed22-4aae-942e-74b8613ca4ce-secret-volume\") pod \"e63bf777-ed22-4aae-942e-74b8613ca4ce\" (UID: \"e63bf777-ed22-4aae-942e-74b8613ca4ce\") " Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.970310 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e63bf777-ed22-4aae-942e-74b8613ca4ce-config-volume\") pod \"e63bf777-ed22-4aae-942e-74b8613ca4ce\" (UID: \"e63bf777-ed22-4aae-942e-74b8613ca4ce\") " Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.970332 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkwfc\" (UniqueName: \"kubernetes.io/projected/e63bf777-ed22-4aae-942e-74b8613ca4ce-kube-api-access-nkwfc\") pod \"e63bf777-ed22-4aae-942e-74b8613ca4ce\" (UID: \"e63bf777-ed22-4aae-942e-74b8613ca4ce\") " Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.971185 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e63bf777-ed22-4aae-942e-74b8613ca4ce-config-volume" (OuterVolumeSpecName: "config-volume") pod "e63bf777-ed22-4aae-942e-74b8613ca4ce" (UID: "e63bf777-ed22-4aae-942e-74b8613ca4ce"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.974658 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e63bf777-ed22-4aae-942e-74b8613ca4ce-kube-api-access-nkwfc" (OuterVolumeSpecName: "kube-api-access-nkwfc") pod "e63bf777-ed22-4aae-942e-74b8613ca4ce" (UID: "e63bf777-ed22-4aae-942e-74b8613ca4ce"). 
InnerVolumeSpecName "kube-api-access-nkwfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:32:13 crc kubenswrapper[4813]: I0219 18:32:13.977474 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e63bf777-ed22-4aae-942e-74b8613ca4ce-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e63bf777-ed22-4aae-942e-74b8613ca4ce" (UID: "e63bf777-ed22-4aae-942e-74b8613ca4ce"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.002628 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8tl9"] Feb 19 18:32:14 crc kubenswrapper[4813]: W0219 18:32:14.028497 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d418b66_03a6_4b99_975a_1c980c62e680.slice/crio-b3d07546f3ca1ba21b299b10dc4d7211482b0050b1d02339db8336761491e1e1 WatchSource:0}: Error finding container b3d07546f3ca1ba21b299b10dc4d7211482b0050b1d02339db8336761491e1e1: Status 404 returned error can't find the container with id b3d07546f3ca1ba21b299b10dc4d7211482b0050b1d02339db8336761491e1e1 Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.071783 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc7c86cd-f30f-42e5-aa8c-55f65bea7db8-kube-api-access\") pod \"fc7c86cd-f30f-42e5-aa8c-55f65bea7db8\" (UID: \"fc7c86cd-f30f-42e5-aa8c-55f65bea7db8\") " Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.072393 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc7c86cd-f30f-42e5-aa8c-55f65bea7db8-kubelet-dir\") pod \"fc7c86cd-f30f-42e5-aa8c-55f65bea7db8\" (UID: \"fc7c86cd-f30f-42e5-aa8c-55f65bea7db8\") " Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 
18:32:14.072452 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7c86cd-f30f-42e5-aa8c-55f65bea7db8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fc7c86cd-f30f-42e5-aa8c-55f65bea7db8" (UID: "fc7c86cd-f30f-42e5-aa8c-55f65bea7db8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.072928 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e63bf777-ed22-4aae-942e-74b8613ca4ce-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.072947 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc7c86cd-f30f-42e5-aa8c-55f65bea7db8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.072970 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e63bf777-ed22-4aae-942e-74b8613ca4ce-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.072980 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkwfc\" (UniqueName: \"kubernetes.io/projected/e63bf777-ed22-4aae-942e-74b8613ca4ce-kube-api-access-nkwfc\") on node \"crc\" DevicePath \"\"" Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.075756 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc7c86cd-f30f-42e5-aa8c-55f65bea7db8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fc7c86cd-f30f-42e5-aa8c-55f65bea7db8" (UID: "fc7c86cd-f30f-42e5-aa8c-55f65bea7db8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.079557 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.226695 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc7c86cd-f30f-42e5-aa8c-55f65bea7db8-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.262742 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zq8l"] Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.410877 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fc7c86cd-f30f-42e5-aa8c-55f65bea7db8","Type":"ContainerDied","Data":"1d475b80b47f8142c8d7c3f01f3fab46a2c815d9e22be5d6a39ed12ce4185428"} Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.410946 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d475b80b47f8142c8d7c3f01f3fab46a2c815d9e22be5d6a39ed12ce4185428" Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.411282 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.414517 4813 generic.go:334] "Generic (PLEG): container finished" podID="b2673311-de9e-4dd0-88be-eb4887afc965" containerID="72eccfa43df2fecb2afc749640fa29a1ea1713b2d995c7b042589e6e06e014a3" exitCode=0 Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.414783 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b2673311-de9e-4dd0-88be-eb4887afc965","Type":"ContainerDied","Data":"72eccfa43df2fecb2afc749640fa29a1ea1713b2d995c7b042589e6e06e014a3"} Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.426013 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zq8l" event={"ID":"ebe695cb-96c7-4c92-8fb9-62e84ccf225d","Type":"ContainerStarted","Data":"1ad909c04a53f5fb07eaf47ac3ce8fcd9463ff7b052a8b92adc83cfcbc9ae843"} Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.433804 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" event={"ID":"e63bf777-ed22-4aae-942e-74b8613ca4ce","Type":"ContainerDied","Data":"b5052808856778291d25d784120abaf014e0633cb7e2e18c21565efb51d077a6"} Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.433841 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5052808856778291d25d784120abaf014e0633cb7e2e18c21565efb51d077a6" Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.433930 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh" Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.443864 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8tl9" event={"ID":"2d418b66-03a6-4b99-975a-1c980c62e680","Type":"ContainerStarted","Data":"aa60cc4cf1a2e6a3831cdf31a5b353aa0cacc1f6cee131a9b5f065ae18914de8"} Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.443900 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8tl9" event={"ID":"2d418b66-03a6-4b99-975a-1c980c62e680","Type":"ContainerStarted","Data":"b3d07546f3ca1ba21b299b10dc4d7211482b0050b1d02339db8336761491e1e1"} Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.878749 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:14 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:14 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:14 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:14 crc kubenswrapper[4813]: I0219 18:32:14.879111 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:15 crc kubenswrapper[4813]: I0219 18:32:15.459991 4813 generic.go:334] "Generic (PLEG): container finished" podID="ebe695cb-96c7-4c92-8fb9-62e84ccf225d" containerID="a40f1a1060c11977cd6bc672000cbcfc3a7655c08a72e80846bb3ba6f72d715d" exitCode=0 Feb 19 18:32:15 crc kubenswrapper[4813]: I0219 18:32:15.460143 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zq8l" 
event={"ID":"ebe695cb-96c7-4c92-8fb9-62e84ccf225d","Type":"ContainerDied","Data":"a40f1a1060c11977cd6bc672000cbcfc3a7655c08a72e80846bb3ba6f72d715d"} Feb 19 18:32:15 crc kubenswrapper[4813]: I0219 18:32:15.464464 4813 generic.go:334] "Generic (PLEG): container finished" podID="2d418b66-03a6-4b99-975a-1c980c62e680" containerID="aa60cc4cf1a2e6a3831cdf31a5b353aa0cacc1f6cee131a9b5f065ae18914de8" exitCode=0 Feb 19 18:32:15 crc kubenswrapper[4813]: I0219 18:32:15.464540 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8tl9" event={"ID":"2d418b66-03a6-4b99-975a-1c980c62e680","Type":"ContainerDied","Data":"aa60cc4cf1a2e6a3831cdf31a5b353aa0cacc1f6cee131a9b5f065ae18914de8"} Feb 19 18:32:15 crc kubenswrapper[4813]: I0219 18:32:15.735744 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 18:32:15 crc kubenswrapper[4813]: I0219 18:32:15.855807 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2673311-de9e-4dd0-88be-eb4887afc965-kubelet-dir\") pod \"b2673311-de9e-4dd0-88be-eb4887afc965\" (UID: \"b2673311-de9e-4dd0-88be-eb4887afc965\") " Feb 19 18:32:15 crc kubenswrapper[4813]: I0219 18:32:15.855858 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2673311-de9e-4dd0-88be-eb4887afc965-kube-api-access\") pod \"b2673311-de9e-4dd0-88be-eb4887afc965\" (UID: \"b2673311-de9e-4dd0-88be-eb4887afc965\") " Feb 19 18:32:15 crc kubenswrapper[4813]: I0219 18:32:15.856100 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2673311-de9e-4dd0-88be-eb4887afc965-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b2673311-de9e-4dd0-88be-eb4887afc965" (UID: "b2673311-de9e-4dd0-88be-eb4887afc965"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:32:15 crc kubenswrapper[4813]: I0219 18:32:15.860624 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2673311-de9e-4dd0-88be-eb4887afc965-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b2673311-de9e-4dd0-88be-eb4887afc965" (UID: "b2673311-de9e-4dd0-88be-eb4887afc965"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:32:15 crc kubenswrapper[4813]: I0219 18:32:15.874988 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:15 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:15 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:15 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:15 crc kubenswrapper[4813]: I0219 18:32:15.875037 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:15 crc kubenswrapper[4813]: I0219 18:32:15.957392 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2673311-de9e-4dd0-88be-eb4887afc965-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 18:32:15 crc kubenswrapper[4813]: I0219 18:32:15.957425 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2673311-de9e-4dd0-88be-eb4887afc965-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 18:32:16 crc kubenswrapper[4813]: I0219 18:32:16.033887 4813 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-kmwrq" Feb 19 18:32:16 crc kubenswrapper[4813]: I0219 18:32:16.483726 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b2673311-de9e-4dd0-88be-eb4887afc965","Type":"ContainerDied","Data":"d8afbe0fda6f640b311423bd7de290f882fd894ee5ea07df2845a2949ec669df"} Feb 19 18:32:16 crc kubenswrapper[4813]: I0219 18:32:16.483786 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8afbe0fda6f640b311423bd7de290f882fd894ee5ea07df2845a2949ec669df" Feb 19 18:32:16 crc kubenswrapper[4813]: I0219 18:32:16.483842 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 18:32:16 crc kubenswrapper[4813]: I0219 18:32:16.841993 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:32:16 crc kubenswrapper[4813]: I0219 18:32:16.875202 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:16 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:16 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:16 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:16 crc kubenswrapper[4813]: I0219 18:32:16.875253 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:17 crc kubenswrapper[4813]: I0219 18:32:17.874080 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:17 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:17 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:17 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:17 crc kubenswrapper[4813]: I0219 18:32:17.874308 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:18 crc kubenswrapper[4813]: I0219 18:32:18.874445 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:18 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:18 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:18 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:18 crc kubenswrapper[4813]: I0219 18:32:18.874506 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:19 crc kubenswrapper[4813]: I0219 18:32:19.875621 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:19 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:19 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:19 crc 
kubenswrapper[4813]: healthz check failed Feb 19 18:32:19 crc kubenswrapper[4813]: I0219 18:32:19.875800 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:20 crc kubenswrapper[4813]: I0219 18:32:20.875247 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:20 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:20 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:20 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:20 crc kubenswrapper[4813]: I0219 18:32:20.876104 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:21 crc kubenswrapper[4813]: I0219 18:32:21.873916 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:21 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:21 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:21 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:21 crc kubenswrapper[4813]: I0219 18:32:21.874002 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 19 18:32:22 crc kubenswrapper[4813]: I0219 18:32:22.875593 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:22 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 19 18:32:22 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:22 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:22 crc kubenswrapper[4813]: I0219 18:32:22.875650 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:23 crc kubenswrapper[4813]: I0219 18:32:23.141356 4813 patch_prober.go:28] interesting pod/console-f9d7485db-dqj4z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 19 18:32:23 crc kubenswrapper[4813]: I0219 18:32:23.141419 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dqj4z" podUID="9c2745dc-3e5e-4571-ba36-aa14c87c6336" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 19 18:32:23 crc kubenswrapper[4813]: I0219 18:32:23.228272 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5gzd8" Feb 19 18:32:23 crc kubenswrapper[4813]: I0219 18:32:23.877216 4813 patch_prober.go:28] interesting pod/router-default-5444994796-9j26p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 19 18:32:23 crc kubenswrapper[4813]: [+]has-synced ok Feb 19 18:32:23 crc kubenswrapper[4813]: [+]process-running ok Feb 19 18:32:23 crc kubenswrapper[4813]: healthz check failed Feb 19 18:32:23 crc kubenswrapper[4813]: I0219 18:32:23.877613 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9j26p" podUID="4d98d1e8-0f5b-4f71-9809-b9577f15e21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 18:32:24 crc kubenswrapper[4813]: I0219 18:32:24.409756 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs\") pod \"network-metrics-daemon-l5vng\" (UID: \"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\") " pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:32:24 crc kubenswrapper[4813]: I0219 18:32:24.415206 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc21e0b-f723-4c9c-9ced-1683cc02fa00-metrics-certs\") pod \"network-metrics-daemon-l5vng\" (UID: \"6fc21e0b-f723-4c9c-9ced-1683cc02fa00\") " pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:32:24 crc kubenswrapper[4813]: I0219 18:32:24.515870 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5vng" Feb 19 18:32:24 crc kubenswrapper[4813]: I0219 18:32:24.875298 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:24 crc kubenswrapper[4813]: I0219 18:32:24.879727 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-9j26p" Feb 19 18:32:30 crc kubenswrapper[4813]: I0219 18:32:30.152661 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:32:30 crc kubenswrapper[4813]: I0219 18:32:30.330180 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:32:30 crc kubenswrapper[4813]: I0219 18:32:30.330254 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:32:33 crc kubenswrapper[4813]: I0219 18:32:33.145149 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:33 crc kubenswrapper[4813]: I0219 18:32:33.150887 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:32:43 crc kubenswrapper[4813]: I0219 18:32:43.695362 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-chl4q" 
Feb 19 18:32:43 crc kubenswrapper[4813]: E0219 18:32:43.835858 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 18:32:43 crc kubenswrapper[4813]: E0219 18:32:43.836458 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5jt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-z8tl9_openshift-marketplace(2d418b66-03a6-4b99-975a-1c980c62e680): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 18:32:43 crc kubenswrapper[4813]: E0219 18:32:43.837702 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-z8tl9" podUID="2d418b66-03a6-4b99-975a-1c980c62e680" Feb 19 18:32:46 crc kubenswrapper[4813]: E0219 18:32:46.121163 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-z8tl9" podUID="2d418b66-03a6-4b99-975a-1c980c62e680" Feb 19 18:32:46 crc kubenswrapper[4813]: E0219 18:32:46.178681 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 18:32:46 crc kubenswrapper[4813]: E0219 18:32:46.179096 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdc4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xg84q_openshift-marketplace(91d14b93-63e4-4673-9a84-61fba91ac9ec): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 18:32:46 crc kubenswrapper[4813]: E0219 18:32:46.181014 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xg84q" podUID="91d14b93-63e4-4673-9a84-61fba91ac9ec" Feb 19 18:32:47 crc 
kubenswrapper[4813]: E0219 18:32:47.389436 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xg84q" podUID="91d14b93-63e4-4673-9a84-61fba91ac9ec" Feb 19 18:32:47 crc kubenswrapper[4813]: E0219 18:32:47.457915 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 18:32:47 crc kubenswrapper[4813]: E0219 18:32:47.458208 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wx22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-p2xbk_openshift-marketplace(a5ded39f-d187-46dd-a014-517d0b291e9d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 18:32:47 crc kubenswrapper[4813]: E0219 18:32:47.459418 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-p2xbk" podUID="a5ded39f-d187-46dd-a014-517d0b291e9d" Feb 19 18:32:47 crc 
kubenswrapper[4813]: E0219 18:32:47.525745 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 18:32:47 crc kubenswrapper[4813]: E0219 18:32:47.526124 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksppk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-xvhdg_openshift-marketplace(0d0adcf5-5570-499d-b626-fdf0bb785e79): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 18:32:47 crc kubenswrapper[4813]: E0219 18:32:47.527554 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-xvhdg" podUID="0d0adcf5-5570-499d-b626-fdf0bb785e79" Feb 19 18:32:47 crc kubenswrapper[4813]: E0219 18:32:47.554830 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 18:32:47 crc kubenswrapper[4813]: E0219 18:32:47.555003 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4v2nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-x7z6q_openshift-marketplace(9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 18:32:47 crc kubenswrapper[4813]: E0219 18:32:47.556226 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-x7z6q" podUID="9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" Feb 19 18:32:47 crc 
kubenswrapper[4813]: E0219 18:32:47.574465 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 18:32:47 crc kubenswrapper[4813]: E0219 18:32:47.574675 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zl76p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-5q7bs_openshift-marketplace(064d4c4c-ff7e-4368-817e-0234266467b5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 18:32:47 crc kubenswrapper[4813]: E0219 18:32:47.575810 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5q7bs" podUID="064d4c4c-ff7e-4368-817e-0234266467b5" Feb 19 18:32:47 crc kubenswrapper[4813]: I0219 18:32:47.763252 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-l5vng"] Feb 19 18:32:47 crc kubenswrapper[4813]: W0219 18:32:47.773341 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fc21e0b_f723_4c9c_9ced_1683cc02fa00.slice/crio-2e7fb3222735935c92eb1983f78cb1f0c3b33cbb6f38e2a38a73a6fbbf75304a WatchSource:0}: Error finding container 2e7fb3222735935c92eb1983f78cb1f0c3b33cbb6f38e2a38a73a6fbbf75304a: Status 404 returned error can't find the container with id 2e7fb3222735935c92eb1983f78cb1f0c3b33cbb6f38e2a38a73a6fbbf75304a Feb 19 18:32:47 crc kubenswrapper[4813]: I0219 18:32:47.776821 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m8sg9" event={"ID":"7c467423-e5c4-4329-9a0c-c68058d30c91","Type":"ContainerStarted","Data":"9e9ea69d0d55dd41a68f8cb558f4a87e3c28096dd513fc46840fa22be3613bf8"} Feb 19 18:32:47 crc kubenswrapper[4813]: I0219 18:32:47.781752 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zq8l" event={"ID":"ebe695cb-96c7-4c92-8fb9-62e84ccf225d","Type":"ContainerStarted","Data":"00f78eada0ce573269856e6351cb327cfc5540c601c2069f254f93bc88b79c4b"} 
Feb 19 18:32:47 crc kubenswrapper[4813]: E0219 18:32:47.788641 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-x7z6q" podUID="9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" Feb 19 18:32:47 crc kubenswrapper[4813]: E0219 18:32:47.788920 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xvhdg" podUID="0d0adcf5-5570-499d-b626-fdf0bb785e79" Feb 19 18:32:47 crc kubenswrapper[4813]: E0219 18:32:47.788913 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p2xbk" podUID="a5ded39f-d187-46dd-a014-517d0b291e9d" Feb 19 18:32:47 crc kubenswrapper[4813]: E0219 18:32:47.791192 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5q7bs" podUID="064d4c4c-ff7e-4368-817e-0234266467b5" Feb 19 18:32:48 crc kubenswrapper[4813]: I0219 18:32:48.791650 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l5vng" event={"ID":"6fc21e0b-f723-4c9c-9ced-1683cc02fa00","Type":"ContainerStarted","Data":"99f0eb1412051b4a215af9173d2a59a69465c59c1906f9be39ea1ae2f74eaf80"} Feb 19 18:32:48 crc kubenswrapper[4813]: I0219 18:32:48.792255 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/network-metrics-daemon-l5vng" event={"ID":"6fc21e0b-f723-4c9c-9ced-1683cc02fa00","Type":"ContainerStarted","Data":"97c9ebb5adf857029aa044bbe124eba9148411d9ee5da9e99667c826fe01e51b"} Feb 19 18:32:48 crc kubenswrapper[4813]: I0219 18:32:48.792276 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l5vng" event={"ID":"6fc21e0b-f723-4c9c-9ced-1683cc02fa00","Type":"ContainerStarted","Data":"2e7fb3222735935c92eb1983f78cb1f0c3b33cbb6f38e2a38a73a6fbbf75304a"} Feb 19 18:32:48 crc kubenswrapper[4813]: I0219 18:32:48.797548 4813 generic.go:334] "Generic (PLEG): container finished" podID="ebe695cb-96c7-4c92-8fb9-62e84ccf225d" containerID="00f78eada0ce573269856e6351cb327cfc5540c601c2069f254f93bc88b79c4b" exitCode=0 Feb 19 18:32:48 crc kubenswrapper[4813]: I0219 18:32:48.797665 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zq8l" event={"ID":"ebe695cb-96c7-4c92-8fb9-62e84ccf225d","Type":"ContainerDied","Data":"00f78eada0ce573269856e6351cb327cfc5540c601c2069f254f93bc88b79c4b"} Feb 19 18:32:48 crc kubenswrapper[4813]: I0219 18:32:48.801892 4813 generic.go:334] "Generic (PLEG): container finished" podID="7c467423-e5c4-4329-9a0c-c68058d30c91" containerID="9e9ea69d0d55dd41a68f8cb558f4a87e3c28096dd513fc46840fa22be3613bf8" exitCode=0 Feb 19 18:32:48 crc kubenswrapper[4813]: I0219 18:32:48.801939 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m8sg9" event={"ID":"7c467423-e5c4-4329-9a0c-c68058d30c91","Type":"ContainerDied","Data":"9e9ea69d0d55dd41a68f8cb558f4a87e3c28096dd513fc46840fa22be3613bf8"} Feb 19 18:32:48 crc kubenswrapper[4813]: I0219 18:32:48.819793 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-l5vng" podStartSLOduration=166.819771239 podStartE2EDuration="2m46.819771239s" podCreationTimestamp="2026-02-19 18:30:02 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:48.811295482 +0000 UTC m=+188.036736053" watchObservedRunningTime="2026-02-19 18:32:48.819771239 +0000 UTC m=+188.045211820" Feb 19 18:32:49 crc kubenswrapper[4813]: I0219 18:32:49.804366 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 18:32:49 crc kubenswrapper[4813]: I0219 18:32:49.808685 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zq8l" event={"ID":"ebe695cb-96c7-4c92-8fb9-62e84ccf225d","Type":"ContainerStarted","Data":"5b895de499e5c8fd81ed3857f09fcf4cd5524cb38c573cfa41639fa118b14231"} Feb 19 18:32:49 crc kubenswrapper[4813]: I0219 18:32:49.811158 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m8sg9" event={"ID":"7c467423-e5c4-4329-9a0c-c68058d30c91","Type":"ContainerStarted","Data":"0df61c256ebcd29dbd6e552f3ce2f7415e2aee9307b9f7ded245701cded01fb6"} Feb 19 18:32:49 crc kubenswrapper[4813]: I0219 18:32:49.846712 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m8sg9" podStartSLOduration=2.641764722 podStartE2EDuration="39.846694605s" podCreationTimestamp="2026-02-19 18:32:10 +0000 UTC" firstStartedPulling="2026-02-19 18:32:12.273162676 +0000 UTC m=+151.498603217" lastFinishedPulling="2026-02-19 18:32:49.478092549 +0000 UTC m=+188.703533100" observedRunningTime="2026-02-19 18:32:49.846238279 +0000 UTC m=+189.071678820" watchObservedRunningTime="2026-02-19 18:32:49.846694605 +0000 UTC m=+189.072135146" Feb 19 18:32:49 crc kubenswrapper[4813]: I0219 18:32:49.863010 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2zq8l" podStartSLOduration=3.110005036 
podStartE2EDuration="36.862941115s" podCreationTimestamp="2026-02-19 18:32:13 +0000 UTC" firstStartedPulling="2026-02-19 18:32:15.461666584 +0000 UTC m=+154.687107125" lastFinishedPulling="2026-02-19 18:32:49.214602663 +0000 UTC m=+188.440043204" observedRunningTime="2026-02-19 18:32:49.859725896 +0000 UTC m=+189.085166437" watchObservedRunningTime="2026-02-19 18:32:49.862941115 +0000 UTC m=+189.088381686" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.631684 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.631971 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.676474 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 18:32:50 crc kubenswrapper[4813]: E0219 18:32:50.676715 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63bf777-ed22-4aae-942e-74b8613ca4ce" containerName="collect-profiles" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.676730 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63bf777-ed22-4aae-942e-74b8613ca4ce" containerName="collect-profiles" Feb 19 18:32:50 crc kubenswrapper[4813]: E0219 18:32:50.676747 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc7c86cd-f30f-42e5-aa8c-55f65bea7db8" containerName="pruner" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.676756 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7c86cd-f30f-42e5-aa8c-55f65bea7db8" containerName="pruner" Feb 19 18:32:50 crc kubenswrapper[4813]: E0219 18:32:50.676775 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2673311-de9e-4dd0-88be-eb4887afc965" containerName="pruner" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.676784 4813 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b2673311-de9e-4dd0-88be-eb4887afc965" containerName="pruner" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.676907 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e63bf777-ed22-4aae-942e-74b8613ca4ce" containerName="collect-profiles" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.676925 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc7c86cd-f30f-42e5-aa8c-55f65bea7db8" containerName="pruner" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.676938 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2673311-de9e-4dd0-88be-eb4887afc965" containerName="pruner" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.677399 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.680015 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.680350 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.690368 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.764172 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb4c6efc-208a-4b77-ae49-71d2e7b26fb5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cb4c6efc-208a-4b77-ae49-71d2e7b26fb5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.764287 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb4c6efc-208a-4b77-ae49-71d2e7b26fb5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cb4c6efc-208a-4b77-ae49-71d2e7b26fb5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.866008 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb4c6efc-208a-4b77-ae49-71d2e7b26fb5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cb4c6efc-208a-4b77-ae49-71d2e7b26fb5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.866068 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb4c6efc-208a-4b77-ae49-71d2e7b26fb5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cb4c6efc-208a-4b77-ae49-71d2e7b26fb5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.866124 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb4c6efc-208a-4b77-ae49-71d2e7b26fb5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cb4c6efc-208a-4b77-ae49-71d2e7b26fb5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.894170 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb4c6efc-208a-4b77-ae49-71d2e7b26fb5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cb4c6efc-208a-4b77-ae49-71d2e7b26fb5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 18:32:50 crc kubenswrapper[4813]: I0219 18:32:50.994158 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 18:32:51 crc kubenswrapper[4813]: I0219 18:32:51.431127 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 18:32:51 crc kubenswrapper[4813]: W0219 18:32:51.438496 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcb4c6efc_208a_4b77_ae49_71d2e7b26fb5.slice/crio-5280eb1924e2e67948a8e5bb4843011066adb2bd0a7516481da83b420756e89d WatchSource:0}: Error finding container 5280eb1924e2e67948a8e5bb4843011066adb2bd0a7516481da83b420756e89d: Status 404 returned error can't find the container with id 5280eb1924e2e67948a8e5bb4843011066adb2bd0a7516481da83b420756e89d Feb 19 18:32:51 crc kubenswrapper[4813]: I0219 18:32:51.767188 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-m8sg9" podUID="7c467423-e5c4-4329-9a0c-c68058d30c91" containerName="registry-server" probeResult="failure" output=< Feb 19 18:32:51 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Feb 19 18:32:51 crc kubenswrapper[4813]: > Feb 19 18:32:51 crc kubenswrapper[4813]: I0219 18:32:51.828808 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cb4c6efc-208a-4b77-ae49-71d2e7b26fb5","Type":"ContainerStarted","Data":"2697cc83d1042360c4be0c131047092c36de6f456e5f563b88d71bf4c727cad8"} Feb 19 18:32:51 crc kubenswrapper[4813]: I0219 18:32:51.828907 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cb4c6efc-208a-4b77-ae49-71d2e7b26fb5","Type":"ContainerStarted","Data":"5280eb1924e2e67948a8e5bb4843011066adb2bd0a7516481da83b420756e89d"} Feb 19 18:32:51 crc kubenswrapper[4813]: I0219 18:32:51.851142 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.8511192410000001 podStartE2EDuration="1.851119241s" podCreationTimestamp="2026-02-19 18:32:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:51.850244431 +0000 UTC m=+191.075685002" watchObservedRunningTime="2026-02-19 18:32:51.851119241 +0000 UTC m=+191.076559822" Feb 19 18:32:52 crc kubenswrapper[4813]: I0219 18:32:52.678662 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x6n5r"] Feb 19 18:32:52 crc kubenswrapper[4813]: I0219 18:32:52.833809 4813 generic.go:334] "Generic (PLEG): container finished" podID="cb4c6efc-208a-4b77-ae49-71d2e7b26fb5" containerID="2697cc83d1042360c4be0c131047092c36de6f456e5f563b88d71bf4c727cad8" exitCode=0 Feb 19 18:32:52 crc kubenswrapper[4813]: I0219 18:32:52.833852 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cb4c6efc-208a-4b77-ae49-71d2e7b26fb5","Type":"ContainerDied","Data":"2697cc83d1042360c4be0c131047092c36de6f456e5f563b88d71bf4c727cad8"} Feb 19 18:32:53 crc kubenswrapper[4813]: I0219 18:32:53.830459 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:32:53 crc kubenswrapper[4813]: I0219 18:32:53.830711 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:32:54 crc kubenswrapper[4813]: I0219 18:32:54.091866 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 18:32:54 crc kubenswrapper[4813]: I0219 18:32:54.208287 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb4c6efc-208a-4b77-ae49-71d2e7b26fb5-kube-api-access\") pod \"cb4c6efc-208a-4b77-ae49-71d2e7b26fb5\" (UID: \"cb4c6efc-208a-4b77-ae49-71d2e7b26fb5\") " Feb 19 18:32:54 crc kubenswrapper[4813]: I0219 18:32:54.208347 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb4c6efc-208a-4b77-ae49-71d2e7b26fb5-kubelet-dir\") pod \"cb4c6efc-208a-4b77-ae49-71d2e7b26fb5\" (UID: \"cb4c6efc-208a-4b77-ae49-71d2e7b26fb5\") " Feb 19 18:32:54 crc kubenswrapper[4813]: I0219 18:32:54.208678 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb4c6efc-208a-4b77-ae49-71d2e7b26fb5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cb4c6efc-208a-4b77-ae49-71d2e7b26fb5" (UID: "cb4c6efc-208a-4b77-ae49-71d2e7b26fb5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:32:54 crc kubenswrapper[4813]: I0219 18:32:54.216140 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb4c6efc-208a-4b77-ae49-71d2e7b26fb5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cb4c6efc-208a-4b77-ae49-71d2e7b26fb5" (UID: "cb4c6efc-208a-4b77-ae49-71d2e7b26fb5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:32:54 crc kubenswrapper[4813]: I0219 18:32:54.309425 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb4c6efc-208a-4b77-ae49-71d2e7b26fb5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 18:32:54 crc kubenswrapper[4813]: I0219 18:32:54.309458 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb4c6efc-208a-4b77-ae49-71d2e7b26fb5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 18:32:54 crc kubenswrapper[4813]: I0219 18:32:54.847920 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cb4c6efc-208a-4b77-ae49-71d2e7b26fb5","Type":"ContainerDied","Data":"5280eb1924e2e67948a8e5bb4843011066adb2bd0a7516481da83b420756e89d"} Feb 19 18:32:54 crc kubenswrapper[4813]: I0219 18:32:54.847973 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5280eb1924e2e67948a8e5bb4843011066adb2bd0a7516481da83b420756e89d" Feb 19 18:32:54 crc kubenswrapper[4813]: I0219 18:32:54.848030 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 18:32:54 crc kubenswrapper[4813]: I0219 18:32:54.875304 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2zq8l" podUID="ebe695cb-96c7-4c92-8fb9-62e84ccf225d" containerName="registry-server" probeResult="failure" output=< Feb 19 18:32:54 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Feb 19 18:32:54 crc kubenswrapper[4813]: > Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.475834 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 18:32:56 crc kubenswrapper[4813]: E0219 18:32:56.476423 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb4c6efc-208a-4b77-ae49-71d2e7b26fb5" containerName="pruner" Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.476441 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb4c6efc-208a-4b77-ae49-71d2e7b26fb5" containerName="pruner" Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.476598 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb4c6efc-208a-4b77-ae49-71d2e7b26fb5" containerName="pruner" Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.477066 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.480067 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.480331 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.487015 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.644583 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1877644-5177-4aa6-969b-7af1fb9cede5-kube-api-access\") pod \"installer-9-crc\" (UID: \"f1877644-5177-4aa6-969b-7af1fb9cede5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.644855 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f1877644-5177-4aa6-969b-7af1fb9cede5-var-lock\") pod \"installer-9-crc\" (UID: \"f1877644-5177-4aa6-969b-7af1fb9cede5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.644974 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1877644-5177-4aa6-969b-7af1fb9cede5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f1877644-5177-4aa6-969b-7af1fb9cede5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.746738 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f1877644-5177-4aa6-969b-7af1fb9cede5-kube-api-access\") pod \"installer-9-crc\" (UID: \"f1877644-5177-4aa6-969b-7af1fb9cede5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.746820 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f1877644-5177-4aa6-969b-7af1fb9cede5-var-lock\") pod \"installer-9-crc\" (UID: \"f1877644-5177-4aa6-969b-7af1fb9cede5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.746846 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1877644-5177-4aa6-969b-7af1fb9cede5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f1877644-5177-4aa6-969b-7af1fb9cede5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.746912 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1877644-5177-4aa6-969b-7af1fb9cede5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f1877644-5177-4aa6-969b-7af1fb9cede5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.746943 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f1877644-5177-4aa6-969b-7af1fb9cede5-var-lock\") pod \"installer-9-crc\" (UID: \"f1877644-5177-4aa6-969b-7af1fb9cede5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.781400 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1877644-5177-4aa6-969b-7af1fb9cede5-kube-api-access\") pod \"installer-9-crc\" (UID: \"f1877644-5177-4aa6-969b-7af1fb9cede5\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 19 18:32:56 crc kubenswrapper[4813]: I0219 18:32:56.798184 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 18:32:57 crc kubenswrapper[4813]: I0219 18:32:57.225089 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 18:32:57 crc kubenswrapper[4813]: I0219 18:32:57.865868 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f1877644-5177-4aa6-969b-7af1fb9cede5","Type":"ContainerStarted","Data":"49ddd4d7aabda736c45c5fff2116a3ecda60c45956fd1ceb26ac47f59d2d2c48"} Feb 19 18:32:57 crc kubenswrapper[4813]: I0219 18:32:57.866241 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f1877644-5177-4aa6-969b-7af1fb9cede5","Type":"ContainerStarted","Data":"8533725aa9bdc42188459a02deb4b3ecde2c497c3cdf1da9264797a6bba6f824"} Feb 19 18:32:57 crc kubenswrapper[4813]: I0219 18:32:57.884990 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.884967057 podStartE2EDuration="1.884967057s" podCreationTimestamp="2026-02-19 18:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:32:57.878525339 +0000 UTC m=+197.103965910" watchObservedRunningTime="2026-02-19 18:32:57.884967057 +0000 UTC m=+197.110407598" Feb 19 18:33:00 crc kubenswrapper[4813]: I0219 18:33:00.330610 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:33:00 crc kubenswrapper[4813]: 
I0219 18:33:00.331026 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:33:00 crc kubenswrapper[4813]: I0219 18:33:00.694650 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:33:00 crc kubenswrapper[4813]: I0219 18:33:00.741697 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:33:01 crc kubenswrapper[4813]: I0219 18:33:01.914111 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m8sg9"] Feb 19 18:33:01 crc kubenswrapper[4813]: I0219 18:33:01.914306 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m8sg9" podUID="7c467423-e5c4-4329-9a0c-c68058d30c91" containerName="registry-server" containerID="cri-o://0df61c256ebcd29dbd6e552f3ce2f7415e2aee9307b9f7ded245701cded01fb6" gracePeriod=2 Feb 19 18:33:02 crc kubenswrapper[4813]: I0219 18:33:02.891319 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7z6q" event={"ID":"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81","Type":"ContainerStarted","Data":"72262af3ebc34ad83fc3773c9412987689428b53f3c4715036e1e0e78b153092"} Feb 19 18:33:02 crc kubenswrapper[4813]: I0219 18:33:02.894845 4813 generic.go:334] "Generic (PLEG): container finished" podID="7c467423-e5c4-4329-9a0c-c68058d30c91" containerID="0df61c256ebcd29dbd6e552f3ce2f7415e2aee9307b9f7ded245701cded01fb6" exitCode=0 Feb 19 18:33:02 crc kubenswrapper[4813]: I0219 18:33:02.894911 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-m8sg9" event={"ID":"7c467423-e5c4-4329-9a0c-c68058d30c91","Type":"ContainerDied","Data":"0df61c256ebcd29dbd6e552f3ce2f7415e2aee9307b9f7ded245701cded01fb6"} Feb 19 18:33:02 crc kubenswrapper[4813]: I0219 18:33:02.898129 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7bs" event={"ID":"064d4c4c-ff7e-4368-817e-0234266467b5","Type":"ContainerStarted","Data":"d0ce1077bd3b84e3cb988b85f895da0833d00de4aee7032a9dce185e352d40f0"} Feb 19 18:33:03 crc kubenswrapper[4813]: E0219 18:33:03.155381 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a3d0dbc_cf99_4d9f_8ab2_a7f480f65d81.slice/crio-conmon-72262af3ebc34ad83fc3773c9412987689428b53f3c4715036e1e0e78b153092.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod064d4c4c_ff7e_4368_817e_0234266467b5.slice/crio-d0ce1077bd3b84e3cb988b85f895da0833d00de4aee7032a9dce185e352d40f0.scope\": RecentStats: unable to find data in memory cache]" Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.247511 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.339078 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c467423-e5c4-4329-9a0c-c68058d30c91-utilities\") pod \"7c467423-e5c4-4329-9a0c-c68058d30c91\" (UID: \"7c467423-e5c4-4329-9a0c-c68058d30c91\") " Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.339208 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5pdf\" (UniqueName: \"kubernetes.io/projected/7c467423-e5c4-4329-9a0c-c68058d30c91-kube-api-access-v5pdf\") pod \"7c467423-e5c4-4329-9a0c-c68058d30c91\" (UID: \"7c467423-e5c4-4329-9a0c-c68058d30c91\") " Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.339234 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c467423-e5c4-4329-9a0c-c68058d30c91-catalog-content\") pod \"7c467423-e5c4-4329-9a0c-c68058d30c91\" (UID: \"7c467423-e5c4-4329-9a0c-c68058d30c91\") " Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.341494 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c467423-e5c4-4329-9a0c-c68058d30c91-utilities" (OuterVolumeSpecName: "utilities") pod "7c467423-e5c4-4329-9a0c-c68058d30c91" (UID: "7c467423-e5c4-4329-9a0c-c68058d30c91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.351230 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c467423-e5c4-4329-9a0c-c68058d30c91-kube-api-access-v5pdf" (OuterVolumeSpecName: "kube-api-access-v5pdf") pod "7c467423-e5c4-4329-9a0c-c68058d30c91" (UID: "7c467423-e5c4-4329-9a0c-c68058d30c91"). InnerVolumeSpecName "kube-api-access-v5pdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.390415 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c467423-e5c4-4329-9a0c-c68058d30c91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c467423-e5c4-4329-9a0c-c68058d30c91" (UID: "7c467423-e5c4-4329-9a0c-c68058d30c91"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.440138 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c467423-e5c4-4329-9a0c-c68058d30c91-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.440427 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c467423-e5c4-4329-9a0c-c68058d30c91-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.440457 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5pdf\" (UniqueName: \"kubernetes.io/projected/7c467423-e5c4-4329-9a0c-c68058d30c91-kube-api-access-v5pdf\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.870683 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.906910 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8tl9" event={"ID":"2d418b66-03a6-4b99-975a-1c980c62e680","Type":"ContainerStarted","Data":"e43c00145dd37abb23364e3e6fe73e0c155b46ea2fe969aafdf7756250a3a579"} Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.910392 4813 generic.go:334] "Generic (PLEG): container finished" podID="91d14b93-63e4-4673-9a84-61fba91ac9ec" 
containerID="6ae6d90849c1631fdc504ceb80dcaa7fbd8ad14bc9e444eca0147b15a63d2ff3" exitCode=0 Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.910464 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xg84q" event={"ID":"91d14b93-63e4-4673-9a84-61fba91ac9ec","Type":"ContainerDied","Data":"6ae6d90849c1631fdc504ceb80dcaa7fbd8ad14bc9e444eca0147b15a63d2ff3"} Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.914790 4813 generic.go:334] "Generic (PLEG): container finished" podID="9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" containerID="72262af3ebc34ad83fc3773c9412987689428b53f3c4715036e1e0e78b153092" exitCode=0 Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.914871 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7z6q" event={"ID":"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81","Type":"ContainerDied","Data":"72262af3ebc34ad83fc3773c9412987689428b53f3c4715036e1e0e78b153092"} Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.918292 4813 generic.go:334] "Generic (PLEG): container finished" podID="a5ded39f-d187-46dd-a014-517d0b291e9d" containerID="98a9f7fa20c03ee49f7fca8233e18606b9c75fe34d2abf1a88ec5304731b4707" exitCode=0 Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.918357 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2xbk" event={"ID":"a5ded39f-d187-46dd-a014-517d0b291e9d","Type":"ContainerDied","Data":"98a9f7fa20c03ee49f7fca8233e18606b9c75fe34d2abf1a88ec5304731b4707"} Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.928871 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.930367 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m8sg9" 
event={"ID":"7c467423-e5c4-4329-9a0c-c68058d30c91","Type":"ContainerDied","Data":"e980d3f47dc6812d641d375a6296f11934bef1a26afa2b4a7b67ba48a3a10a06"} Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.930427 4813 scope.go:117] "RemoveContainer" containerID="0df61c256ebcd29dbd6e552f3ce2f7415e2aee9307b9f7ded245701cded01fb6" Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.931442 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m8sg9" Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.939156 4813 generic.go:334] "Generic (PLEG): container finished" podID="064d4c4c-ff7e-4368-817e-0234266467b5" containerID="d0ce1077bd3b84e3cb988b85f895da0833d00de4aee7032a9dce185e352d40f0" exitCode=0 Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.947304 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7bs" event={"ID":"064d4c4c-ff7e-4368-817e-0234266467b5","Type":"ContainerDied","Data":"d0ce1077bd3b84e3cb988b85f895da0833d00de4aee7032a9dce185e352d40f0"} Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.965921 4813 scope.go:117] "RemoveContainer" containerID="9e9ea69d0d55dd41a68f8cb558f4a87e3c28096dd513fc46840fa22be3613bf8" Feb 19 18:33:03 crc kubenswrapper[4813]: I0219 18:33:03.980089 4813 scope.go:117] "RemoveContainer" containerID="9b35b7ddc08fe74f6377317a31e9a8e1fce56e1c014f9cb3bbc682cbd54bb01e" Feb 19 18:33:04 crc kubenswrapper[4813]: I0219 18:33:04.012679 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m8sg9"] Feb 19 18:33:04 crc kubenswrapper[4813]: I0219 18:33:04.023535 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m8sg9"] Feb 19 18:33:04 crc kubenswrapper[4813]: I0219 18:33:04.946294 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7z6q" 
event={"ID":"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81","Type":"ContainerStarted","Data":"15987d60f76059f64dbb5ccb5fba2d5cfae61708eaa0e9c564075962739cffde"} Feb 19 18:33:04 crc kubenswrapper[4813]: I0219 18:33:04.949054 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2xbk" event={"ID":"a5ded39f-d187-46dd-a014-517d0b291e9d","Type":"ContainerStarted","Data":"3b8d1eccdb5a6fe99f2c656e137ebc9fa993f1937a236f9fbe3a5d17eac6c7a1"} Feb 19 18:33:04 crc kubenswrapper[4813]: I0219 18:33:04.952881 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7bs" event={"ID":"064d4c4c-ff7e-4368-817e-0234266467b5","Type":"ContainerStarted","Data":"34ddbe81b5edd583772a8a71fad4fe05ff92fb3a80d5fcf98a466e0b56162d6a"} Feb 19 18:33:04 crc kubenswrapper[4813]: I0219 18:33:04.954638 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xg84q" event={"ID":"91d14b93-63e4-4673-9a84-61fba91ac9ec","Type":"ContainerStarted","Data":"76ab519c7d58cc3596bd06f97e7e3b8405beb29a7c322b1044f66375aa70dc43"} Feb 19 18:33:04 crc kubenswrapper[4813]: I0219 18:33:04.956160 4813 generic.go:334] "Generic (PLEG): container finished" podID="0d0adcf5-5570-499d-b626-fdf0bb785e79" containerID="8672cb34c2c402810a971fdcdba87c6bc6fc84d516ab4c45d3e41d4945b893d9" exitCode=0 Feb 19 18:33:04 crc kubenswrapper[4813]: I0219 18:33:04.956209 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvhdg" event={"ID":"0d0adcf5-5570-499d-b626-fdf0bb785e79","Type":"ContainerDied","Data":"8672cb34c2c402810a971fdcdba87c6bc6fc84d516ab4c45d3e41d4945b893d9"} Feb 19 18:33:04 crc kubenswrapper[4813]: I0219 18:33:04.959129 4813 generic.go:334] "Generic (PLEG): container finished" podID="2d418b66-03a6-4b99-975a-1c980c62e680" containerID="e43c00145dd37abb23364e3e6fe73e0c155b46ea2fe969aafdf7756250a3a579" exitCode=0 Feb 19 18:33:04 crc kubenswrapper[4813]: 
I0219 18:33:04.959180 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8tl9" event={"ID":"2d418b66-03a6-4b99-975a-1c980c62e680","Type":"ContainerDied","Data":"e43c00145dd37abb23364e3e6fe73e0c155b46ea2fe969aafdf7756250a3a579"} Feb 19 18:33:04 crc kubenswrapper[4813]: I0219 18:33:04.970935 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x7z6q" podStartSLOduration=3.886853527 podStartE2EDuration="55.970913515s" podCreationTimestamp="2026-02-19 18:32:09 +0000 UTC" firstStartedPulling="2026-02-19 18:32:12.269809052 +0000 UTC m=+151.495249593" lastFinishedPulling="2026-02-19 18:33:04.35386904 +0000 UTC m=+203.579309581" observedRunningTime="2026-02-19 18:33:04.968873593 +0000 UTC m=+204.194314164" watchObservedRunningTime="2026-02-19 18:33:04.970913515 +0000 UTC m=+204.196354086" Feb 19 18:33:04 crc kubenswrapper[4813]: I0219 18:33:04.983133 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xg84q" podStartSLOduration=2.918770259 podStartE2EDuration="54.983113787s" podCreationTimestamp="2026-02-19 18:32:10 +0000 UTC" firstStartedPulling="2026-02-19 18:32:12.258754783 +0000 UTC m=+151.484195324" lastFinishedPulling="2026-02-19 18:33:04.323098311 +0000 UTC m=+203.548538852" observedRunningTime="2026-02-19 18:33:04.981722648 +0000 UTC m=+204.207163189" watchObservedRunningTime="2026-02-19 18:33:04.983113787 +0000 UTC m=+204.208554328" Feb 19 18:33:05 crc kubenswrapper[4813]: I0219 18:33:05.000057 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5q7bs" podStartSLOduration=3.791707927 podStartE2EDuration="56.000037536s" podCreationTimestamp="2026-02-19 18:32:09 +0000 UTC" firstStartedPulling="2026-02-19 18:32:12.252299249 +0000 UTC m=+151.477739800" lastFinishedPulling="2026-02-19 18:33:04.460628868 +0000 UTC 
m=+203.686069409" observedRunningTime="2026-02-19 18:33:04.998350926 +0000 UTC m=+204.223791487" watchObservedRunningTime="2026-02-19 18:33:05.000037536 +0000 UTC m=+204.225478087" Feb 19 18:33:05 crc kubenswrapper[4813]: I0219 18:33:05.032160 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p2xbk" podStartSLOduration=2.957309845 podStartE2EDuration="54.032140112s" podCreationTimestamp="2026-02-19 18:32:11 +0000 UTC" firstStartedPulling="2026-02-19 18:32:13.310430403 +0000 UTC m=+152.535870944" lastFinishedPulling="2026-02-19 18:33:04.38526067 +0000 UTC m=+203.610701211" observedRunningTime="2026-02-19 18:33:05.031271232 +0000 UTC m=+204.256711783" watchObservedRunningTime="2026-02-19 18:33:05.032140112 +0000 UTC m=+204.257580653" Feb 19 18:33:05 crc kubenswrapper[4813]: I0219 18:33:05.478523 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c467423-e5c4-4329-9a0c-c68058d30c91" path="/var/lib/kubelet/pods/7c467423-e5c4-4329-9a0c-c68058d30c91/volumes" Feb 19 18:33:05 crc kubenswrapper[4813]: I0219 18:33:05.966757 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvhdg" event={"ID":"0d0adcf5-5570-499d-b626-fdf0bb785e79","Type":"ContainerStarted","Data":"b7bd8718acc0e3e32d62040d1b5006fbec8a9dd17f6a256f629cf09c606e0f51"} Feb 19 18:33:05 crc kubenswrapper[4813]: I0219 18:33:05.969024 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8tl9" event={"ID":"2d418b66-03a6-4b99-975a-1c980c62e680","Type":"ContainerStarted","Data":"9cc6bfe1fe23351ff5968e699a82906813bdf48332f4843f79a0f146beb3b2d2"} Feb 19 18:33:05 crc kubenswrapper[4813]: I0219 18:33:05.987704 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xvhdg" podStartSLOduration=2.036102406 podStartE2EDuration="53.987688237s" podCreationTimestamp="2026-02-19 
18:32:12 +0000 UTC" firstStartedPulling="2026-02-19 18:32:13.397071282 +0000 UTC m=+152.622511823" lastFinishedPulling="2026-02-19 18:33:05.348657113 +0000 UTC m=+204.574097654" observedRunningTime="2026-02-19 18:33:05.984843177 +0000 UTC m=+205.210283718" watchObservedRunningTime="2026-02-19 18:33:05.987688237 +0000 UTC m=+205.213128778" Feb 19 18:33:06 crc kubenswrapper[4813]: I0219 18:33:06.011401 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z8tl9" podStartSLOduration=2.042656766 podStartE2EDuration="53.011377616s" podCreationTimestamp="2026-02-19 18:32:13 +0000 UTC" firstStartedPulling="2026-02-19 18:32:14.446328603 +0000 UTC m=+153.671769144" lastFinishedPulling="2026-02-19 18:33:05.415049453 +0000 UTC m=+204.640489994" observedRunningTime="2026-02-19 18:33:06.009130306 +0000 UTC m=+205.234570867" watchObservedRunningTime="2026-02-19 18:33:06.011377616 +0000 UTC m=+205.236818157" Feb 19 18:33:07 crc kubenswrapper[4813]: I0219 18:33:07.713785 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2zq8l"] Feb 19 18:33:07 crc kubenswrapper[4813]: I0219 18:33:07.714024 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2zq8l" podUID="ebe695cb-96c7-4c92-8fb9-62e84ccf225d" containerName="registry-server" containerID="cri-o://5b895de499e5c8fd81ed3857f09fcf4cd5524cb38c573cfa41639fa118b14231" gracePeriod=2 Feb 19 18:33:08 crc kubenswrapper[4813]: I0219 18:33:08.988552 4813 generic.go:334] "Generic (PLEG): container finished" podID="ebe695cb-96c7-4c92-8fb9-62e84ccf225d" containerID="5b895de499e5c8fd81ed3857f09fcf4cd5524cb38c573cfa41639fa118b14231" exitCode=0 Feb 19 18:33:08 crc kubenswrapper[4813]: I0219 18:33:08.988622 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zq8l" 
event={"ID":"ebe695cb-96c7-4c92-8fb9-62e84ccf225d","Type":"ContainerDied","Data":"5b895de499e5c8fd81ed3857f09fcf4cd5524cb38c573cfa41639fa118b14231"} Feb 19 18:33:09 crc kubenswrapper[4813]: I0219 18:33:09.942516 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:33:09 crc kubenswrapper[4813]: I0219 18:33:09.977710 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x7z6q" Feb 19 18:33:09 crc kubenswrapper[4813]: I0219 18:33:09.978744 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x7z6q" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.025571 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bd7b\" (UniqueName: \"kubernetes.io/projected/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-kube-api-access-7bd7b\") pod \"ebe695cb-96c7-4c92-8fb9-62e84ccf225d\" (UID: \"ebe695cb-96c7-4c92-8fb9-62e84ccf225d\") " Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.025653 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-utilities\") pod \"ebe695cb-96c7-4c92-8fb9-62e84ccf225d\" (UID: \"ebe695cb-96c7-4c92-8fb9-62e84ccf225d\") " Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.025769 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-catalog-content\") pod \"ebe695cb-96c7-4c92-8fb9-62e84ccf225d\" (UID: \"ebe695cb-96c7-4c92-8fb9-62e84ccf225d\") " Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.034205 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-kube-api-access-7bd7b" (OuterVolumeSpecName: "kube-api-access-7bd7b") pod "ebe695cb-96c7-4c92-8fb9-62e84ccf225d" (UID: "ebe695cb-96c7-4c92-8fb9-62e84ccf225d"). InnerVolumeSpecName "kube-api-access-7bd7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.047990 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-utilities" (OuterVolumeSpecName: "utilities") pod "ebe695cb-96c7-4c92-8fb9-62e84ccf225d" (UID: "ebe695cb-96c7-4c92-8fb9-62e84ccf225d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.055780 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2zq8l" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.056302 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zq8l" event={"ID":"ebe695cb-96c7-4c92-8fb9-62e84ccf225d","Type":"ContainerDied","Data":"1ad909c04a53f5fb07eaf47ac3ce8fcd9463ff7b052a8b92adc83cfcbc9ae843"} Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.056338 4813 scope.go:117] "RemoveContainer" containerID="5b895de499e5c8fd81ed3857f09fcf4cd5524cb38c573cfa41639fa118b14231" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.067934 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x7z6q" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.088179 4813 scope.go:117] "RemoveContainer" containerID="00f78eada0ce573269856e6351cb327cfc5540c601c2069f254f93bc88b79c4b" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.126536 4813 scope.go:117] "RemoveContainer" containerID="a40f1a1060c11977cd6bc672000cbcfc3a7655c08a72e80846bb3ba6f72d715d" Feb 19 
18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.131310 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bd7b\" (UniqueName: \"kubernetes.io/projected/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-kube-api-access-7bd7b\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.131338 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.187444 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.187488 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.201599 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebe695cb-96c7-4c92-8fb9-62e84ccf225d" (UID: "ebe695cb-96c7-4c92-8fb9-62e84ccf225d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.227310 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.236667 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebe695cb-96c7-4c92-8fb9-62e84ccf225d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.403238 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2zq8l"] Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.405927 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2zq8l"] Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.530350 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.530409 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:33:10 crc kubenswrapper[4813]: I0219 18:33:10.577370 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:33:11 crc kubenswrapper[4813]: I0219 18:33:11.120171 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:33:11 crc kubenswrapper[4813]: I0219 18:33:11.124615 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:33:11 crc kubenswrapper[4813]: I0219 18:33:11.128742 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x7z6q" 
Feb 19 18:33:11 crc kubenswrapper[4813]: I0219 18:33:11.484189 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebe695cb-96c7-4c92-8fb9-62e84ccf225d" path="/var/lib/kubelet/pods/ebe695cb-96c7-4c92-8fb9-62e84ccf225d/volumes" Feb 19 18:33:12 crc kubenswrapper[4813]: I0219 18:33:12.033839 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:33:12 crc kubenswrapper[4813]: I0219 18:33:12.033934 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:33:12 crc kubenswrapper[4813]: I0219 18:33:12.102351 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:33:12 crc kubenswrapper[4813]: I0219 18:33:12.151556 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:33:12 crc kubenswrapper[4813]: I0219 18:33:12.315946 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xg84q"] Feb 19 18:33:12 crc kubenswrapper[4813]: I0219 18:33:12.401260 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:33:12 crc kubenswrapper[4813]: I0219 18:33:12.401357 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:33:12 crc kubenswrapper[4813]: I0219 18:33:12.467808 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:33:13 crc kubenswrapper[4813]: I0219 18:33:13.083000 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xg84q" podUID="91d14b93-63e4-4673-9a84-61fba91ac9ec" 
containerName="registry-server" containerID="cri-o://76ab519c7d58cc3596bd06f97e7e3b8405beb29a7c322b1044f66375aa70dc43" gracePeriod=2 Feb 19 18:33:13 crc kubenswrapper[4813]: I0219 18:33:13.126949 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:33:13 crc kubenswrapper[4813]: I0219 18:33:13.447516 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:33:13 crc kubenswrapper[4813]: I0219 18:33:13.447872 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:33:13 crc kubenswrapper[4813]: I0219 18:33:13.513184 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:33:14 crc kubenswrapper[4813]: I0219 18:33:14.133566 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.006074 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.100146 4813 generic.go:334] "Generic (PLEG): container finished" podID="91d14b93-63e4-4673-9a84-61fba91ac9ec" containerID="76ab519c7d58cc3596bd06f97e7e3b8405beb29a7c322b1044f66375aa70dc43" exitCode=0 Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.100193 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xg84q" event={"ID":"91d14b93-63e4-4673-9a84-61fba91ac9ec","Type":"ContainerDied","Data":"76ab519c7d58cc3596bd06f97e7e3b8405beb29a7c322b1044f66375aa70dc43"} Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.100224 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xg84q" event={"ID":"91d14b93-63e4-4673-9a84-61fba91ac9ec","Type":"ContainerDied","Data":"7e028ba200e25f05cd8ee1bc41b98c9c03d6e8ac0e45d3c7b57fa4bf0a458876"} Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.100233 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xg84q" Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.100247 4813 scope.go:117] "RemoveContainer" containerID="76ab519c7d58cc3596bd06f97e7e3b8405beb29a7c322b1044f66375aa70dc43" Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.115239 4813 scope.go:117] "RemoveContainer" containerID="6ae6d90849c1631fdc504ceb80dcaa7fbd8ad14bc9e444eca0147b15a63d2ff3" Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.115324 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91d14b93-63e4-4673-9a84-61fba91ac9ec-utilities\") pod \"91d14b93-63e4-4673-9a84-61fba91ac9ec\" (UID: \"91d14b93-63e4-4673-9a84-61fba91ac9ec\") " Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.115475 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdc4d\" (UniqueName: \"kubernetes.io/projected/91d14b93-63e4-4673-9a84-61fba91ac9ec-kube-api-access-rdc4d\") pod \"91d14b93-63e4-4673-9a84-61fba91ac9ec\" (UID: \"91d14b93-63e4-4673-9a84-61fba91ac9ec\") " Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.115580 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91d14b93-63e4-4673-9a84-61fba91ac9ec-catalog-content\") pod \"91d14b93-63e4-4673-9a84-61fba91ac9ec\" (UID: \"91d14b93-63e4-4673-9a84-61fba91ac9ec\") " Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.116100 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvhdg"] Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.116308 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xvhdg" podUID="0d0adcf5-5570-499d-b626-fdf0bb785e79" containerName="registry-server" 
containerID="cri-o://b7bd8718acc0e3e32d62040d1b5006fbec8a9dd17f6a256f629cf09c606e0f51" gracePeriod=2 Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.116841 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91d14b93-63e4-4673-9a84-61fba91ac9ec-utilities" (OuterVolumeSpecName: "utilities") pod "91d14b93-63e4-4673-9a84-61fba91ac9ec" (UID: "91d14b93-63e4-4673-9a84-61fba91ac9ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.130541 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d14b93-63e4-4673-9a84-61fba91ac9ec-kube-api-access-rdc4d" (OuterVolumeSpecName: "kube-api-access-rdc4d") pod "91d14b93-63e4-4673-9a84-61fba91ac9ec" (UID: "91d14b93-63e4-4673-9a84-61fba91ac9ec"). InnerVolumeSpecName "kube-api-access-rdc4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.157409 4813 scope.go:117] "RemoveContainer" containerID="9447edc86f14a7f0eb161fefebf0efef2bc6879754609714423729edff0778e9" Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.172862 4813 scope.go:117] "RemoveContainer" containerID="76ab519c7d58cc3596bd06f97e7e3b8405beb29a7c322b1044f66375aa70dc43" Feb 19 18:33:16 crc kubenswrapper[4813]: E0219 18:33:16.173297 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76ab519c7d58cc3596bd06f97e7e3b8405beb29a7c322b1044f66375aa70dc43\": container with ID starting with 76ab519c7d58cc3596bd06f97e7e3b8405beb29a7c322b1044f66375aa70dc43 not found: ID does not exist" containerID="76ab519c7d58cc3596bd06f97e7e3b8405beb29a7c322b1044f66375aa70dc43" Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.173345 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"76ab519c7d58cc3596bd06f97e7e3b8405beb29a7c322b1044f66375aa70dc43"} err="failed to get container status \"76ab519c7d58cc3596bd06f97e7e3b8405beb29a7c322b1044f66375aa70dc43\": rpc error: code = NotFound desc = could not find container \"76ab519c7d58cc3596bd06f97e7e3b8405beb29a7c322b1044f66375aa70dc43\": container with ID starting with 76ab519c7d58cc3596bd06f97e7e3b8405beb29a7c322b1044f66375aa70dc43 not found: ID does not exist" Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.173398 4813 scope.go:117] "RemoveContainer" containerID="6ae6d90849c1631fdc504ceb80dcaa7fbd8ad14bc9e444eca0147b15a63d2ff3" Feb 19 18:33:16 crc kubenswrapper[4813]: E0219 18:33:16.173813 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ae6d90849c1631fdc504ceb80dcaa7fbd8ad14bc9e444eca0147b15a63d2ff3\": container with ID starting with 6ae6d90849c1631fdc504ceb80dcaa7fbd8ad14bc9e444eca0147b15a63d2ff3 not found: ID does not exist" containerID="6ae6d90849c1631fdc504ceb80dcaa7fbd8ad14bc9e444eca0147b15a63d2ff3" Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.173859 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ae6d90849c1631fdc504ceb80dcaa7fbd8ad14bc9e444eca0147b15a63d2ff3"} err="failed to get container status \"6ae6d90849c1631fdc504ceb80dcaa7fbd8ad14bc9e444eca0147b15a63d2ff3\": rpc error: code = NotFound desc = could not find container \"6ae6d90849c1631fdc504ceb80dcaa7fbd8ad14bc9e444eca0147b15a63d2ff3\": container with ID starting with 6ae6d90849c1631fdc504ceb80dcaa7fbd8ad14bc9e444eca0147b15a63d2ff3 not found: ID does not exist" Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.173890 4813 scope.go:117] "RemoveContainer" containerID="9447edc86f14a7f0eb161fefebf0efef2bc6879754609714423729edff0778e9" Feb 19 18:33:16 crc kubenswrapper[4813]: E0219 18:33:16.174168 4813 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9447edc86f14a7f0eb161fefebf0efef2bc6879754609714423729edff0778e9\": container with ID starting with 9447edc86f14a7f0eb161fefebf0efef2bc6879754609714423729edff0778e9 not found: ID does not exist" containerID="9447edc86f14a7f0eb161fefebf0efef2bc6879754609714423729edff0778e9" Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.174201 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9447edc86f14a7f0eb161fefebf0efef2bc6879754609714423729edff0778e9"} err="failed to get container status \"9447edc86f14a7f0eb161fefebf0efef2bc6879754609714423729edff0778e9\": rpc error: code = NotFound desc = could not find container \"9447edc86f14a7f0eb161fefebf0efef2bc6879754609714423729edff0778e9\": container with ID starting with 9447edc86f14a7f0eb161fefebf0efef2bc6879754609714423729edff0778e9 not found: ID does not exist" Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.218366 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91d14b93-63e4-4673-9a84-61fba91ac9ec-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:16 crc kubenswrapper[4813]: I0219 18:33:16.218449 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdc4d\" (UniqueName: \"kubernetes.io/projected/91d14b93-63e4-4673-9a84-61fba91ac9ec-kube-api-access-rdc4d\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:17 crc kubenswrapper[4813]: I0219 18:33:17.703431 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" podUID="f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" containerName="oauth-openshift" containerID="cri-o://b8b8dec88c60522687dfc8b48def3021bcfcd3fc3ee0beabf8ea755783a11e9f" gracePeriod=15 Feb 19 18:33:19 crc kubenswrapper[4813]: I0219 18:33:19.828829 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/91d14b93-63e4-4673-9a84-61fba91ac9ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91d14b93-63e4-4673-9a84-61fba91ac9ec" (UID: "91d14b93-63e4-4673-9a84-61fba91ac9ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:33:19 crc kubenswrapper[4813]: I0219 18:33:19.868347 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91d14b93-63e4-4673-9a84-61fba91ac9ec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:20 crc kubenswrapper[4813]: I0219 18:33:20.043337 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xg84q"] Feb 19 18:33:20 crc kubenswrapper[4813]: I0219 18:33:20.050827 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xg84q"] Feb 19 18:33:20 crc kubenswrapper[4813]: I0219 18:33:20.128515 4813 generic.go:334] "Generic (PLEG): container finished" podID="f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" containerID="b8b8dec88c60522687dfc8b48def3021bcfcd3fc3ee0beabf8ea755783a11e9f" exitCode=0 Feb 19 18:33:20 crc kubenswrapper[4813]: I0219 18:33:20.128621 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" event={"ID":"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb","Type":"ContainerDied","Data":"b8b8dec88c60522687dfc8b48def3021bcfcd3fc3ee0beabf8ea755783a11e9f"} Feb 19 18:33:20 crc kubenswrapper[4813]: I0219 18:33:20.131404 4813 generic.go:334] "Generic (PLEG): container finished" podID="0d0adcf5-5570-499d-b626-fdf0bb785e79" containerID="b7bd8718acc0e3e32d62040d1b5006fbec8a9dd17f6a256f629cf09c606e0f51" exitCode=0 Feb 19 18:33:20 crc kubenswrapper[4813]: I0219 18:33:20.131456 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvhdg" 
event={"ID":"0d0adcf5-5570-499d-b626-fdf0bb785e79","Type":"ContainerDied","Data":"b7bd8718acc0e3e32d62040d1b5006fbec8a9dd17f6a256f629cf09c606e0f51"} Feb 19 18:33:20 crc kubenswrapper[4813]: I0219 18:33:20.913151 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:33:20 crc kubenswrapper[4813]: I0219 18:33:20.985255 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0adcf5-5570-499d-b626-fdf0bb785e79-utilities\") pod \"0d0adcf5-5570-499d-b626-fdf0bb785e79\" (UID: \"0d0adcf5-5570-499d-b626-fdf0bb785e79\") " Feb 19 18:33:20 crc kubenswrapper[4813]: I0219 18:33:20.985371 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0adcf5-5570-499d-b626-fdf0bb785e79-catalog-content\") pod \"0d0adcf5-5570-499d-b626-fdf0bb785e79\" (UID: \"0d0adcf5-5570-499d-b626-fdf0bb785e79\") " Feb 19 18:33:20 crc kubenswrapper[4813]: I0219 18:33:20.985482 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksppk\" (UniqueName: \"kubernetes.io/projected/0d0adcf5-5570-499d-b626-fdf0bb785e79-kube-api-access-ksppk\") pod \"0d0adcf5-5570-499d-b626-fdf0bb785e79\" (UID: \"0d0adcf5-5570-499d-b626-fdf0bb785e79\") " Feb 19 18:33:20 crc kubenswrapper[4813]: I0219 18:33:20.987315 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d0adcf5-5570-499d-b626-fdf0bb785e79-utilities" (OuterVolumeSpecName: "utilities") pod "0d0adcf5-5570-499d-b626-fdf0bb785e79" (UID: "0d0adcf5-5570-499d-b626-fdf0bb785e79"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:33:20 crc kubenswrapper[4813]: I0219 18:33:20.993332 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d0adcf5-5570-499d-b626-fdf0bb785e79-kube-api-access-ksppk" (OuterVolumeSpecName: "kube-api-access-ksppk") pod "0d0adcf5-5570-499d-b626-fdf0bb785e79" (UID: "0d0adcf5-5570-499d-b626-fdf0bb785e79"). InnerVolumeSpecName "kube-api-access-ksppk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.027566 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d0adcf5-5570-499d-b626-fdf0bb785e79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d0adcf5-5570-499d-b626-fdf0bb785e79" (UID: "0d0adcf5-5570-499d-b626-fdf0bb785e79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.087110 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksppk\" (UniqueName: \"kubernetes.io/projected/0d0adcf5-5570-499d-b626-fdf0bb785e79-kube-api-access-ksppk\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.087154 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0adcf5-5570-499d-b626-fdf0bb785e79-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.087167 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0adcf5-5570-499d-b626-fdf0bb785e79-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.139022 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xvhdg" 
event={"ID":"0d0adcf5-5570-499d-b626-fdf0bb785e79","Type":"ContainerDied","Data":"6b4487251b9523df77a8626288ad273389735f38b246ae8cf33241d39f6f1ebe"} Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.139077 4813 scope.go:117] "RemoveContainer" containerID="b7bd8718acc0e3e32d62040d1b5006fbec8a9dd17f6a256f629cf09c606e0f51" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.139096 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xvhdg" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.158766 4813 scope.go:117] "RemoveContainer" containerID="8672cb34c2c402810a971fdcdba87c6bc6fc84d516ab4c45d3e41d4945b893d9" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.171051 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvhdg"] Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.172674 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xvhdg"] Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.183140 4813 scope.go:117] "RemoveContainer" containerID="a1c8f480da418c53149cc193c21153f497f898b03c88182178c9d2299949ffba" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.234044 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.289006 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-serving-cert\") pod \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.289052 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-session\") pod \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.289099 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-login\") pod \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.289128 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-idp-0-file-data\") pod \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.289159 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-provider-selection\") pod \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\" (UID: 
\"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.289189 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-audit-policies\") pod \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.289210 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-cliconfig\") pod \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.289238 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-trusted-ca-bundle\") pod \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.289280 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-service-ca\") pod \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.289308 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwbdz\" (UniqueName: \"kubernetes.io/projected/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-kube-api-access-kwbdz\") pod \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.289334 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-audit-dir\") pod \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.289357 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-error\") pod \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.289404 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-router-certs\") pod \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.289438 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-ocp-branding-template\") pod \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\" (UID: \"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb\") " Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.291498 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" (UID: "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.291517 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" (UID: "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.291578 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" (UID: "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.292884 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" (UID: "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.293257 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" (UID: "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.294656 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" (UID: "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.294915 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" (UID: "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.295208 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" (UID: "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.295507 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" (UID: "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.295793 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" (UID: "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.296517 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" (UID: "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.297222 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" (UID: "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.297511 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" (UID: "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.298342 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-kube-api-access-kwbdz" (OuterVolumeSpecName: "kube-api-access-kwbdz") pod "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" (UID: "f230f42e-0e70-43d5-abbb-a2bbbe3eddcb"). InnerVolumeSpecName "kube-api-access-kwbdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.390710 4813 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.390767 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.390791 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.390811 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.390832 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwbdz\" (UniqueName: \"kubernetes.io/projected/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-kube-api-access-kwbdz\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc 
kubenswrapper[4813]: I0219 18:33:21.390851 4813 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.390868 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.390887 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.390906 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.390925 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.390944 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.390996 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.391014 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.391034 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.482911 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d0adcf5-5570-499d-b626-fdf0bb785e79" path="/var/lib/kubelet/pods/0d0adcf5-5570-499d-b626-fdf0bb785e79/volumes" Feb 19 18:33:21 crc kubenswrapper[4813]: I0219 18:33:21.484133 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d14b93-63e4-4673-9a84-61fba91ac9ec" path="/var/lib/kubelet/pods/91d14b93-63e4-4673-9a84-61fba91ac9ec/volumes" Feb 19 18:33:22 crc kubenswrapper[4813]: I0219 18:33:22.148617 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" event={"ID":"f230f42e-0e70-43d5-abbb-a2bbbe3eddcb","Type":"ContainerDied","Data":"f8fb6e82c85375df1f2e8053a9d2eaa149e13c1bcb2ad617c0eaae3f83361aaa"} Feb 19 18:33:22 crc kubenswrapper[4813]: I0219 18:33:22.148704 4813 scope.go:117] "RemoveContainer" containerID="b8b8dec88c60522687dfc8b48def3021bcfcd3fc3ee0beabf8ea755783a11e9f" Feb 19 18:33:22 crc kubenswrapper[4813]: I0219 18:33:22.148763 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x6n5r" Feb 19 18:33:22 crc kubenswrapper[4813]: I0219 18:33:22.180513 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x6n5r"] Feb 19 18:33:22 crc kubenswrapper[4813]: I0219 18:33:22.187049 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x6n5r"] Feb 19 18:33:23 crc kubenswrapper[4813]: I0219 18:33:23.478353 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" path="/var/lib/kubelet/pods/f230f42e-0e70-43d5-abbb-a2bbbe3eddcb/volumes" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.302033 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-c4f645f59-8qhsk"] Feb 19 18:33:28 crc kubenswrapper[4813]: E0219 18:33:28.302619 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d14b93-63e4-4673-9a84-61fba91ac9ec" containerName="extract-utilities" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.302635 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d14b93-63e4-4673-9a84-61fba91ac9ec" containerName="extract-utilities" Feb 19 18:33:28 crc kubenswrapper[4813]: E0219 18:33:28.302644 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0adcf5-5570-499d-b626-fdf0bb785e79" containerName="extract-utilities" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.302653 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0adcf5-5570-499d-b626-fdf0bb785e79" containerName="extract-utilities" Feb 19 18:33:28 crc kubenswrapper[4813]: E0219 18:33:28.302667 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebe695cb-96c7-4c92-8fb9-62e84ccf225d" containerName="registry-server" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.302675 4813 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ebe695cb-96c7-4c92-8fb9-62e84ccf225d" containerName="registry-server" Feb 19 18:33:28 crc kubenswrapper[4813]: E0219 18:33:28.302690 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c467423-e5c4-4329-9a0c-c68058d30c91" containerName="extract-content" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.302698 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c467423-e5c4-4329-9a0c-c68058d30c91" containerName="extract-content" Feb 19 18:33:28 crc kubenswrapper[4813]: E0219 18:33:28.302707 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebe695cb-96c7-4c92-8fb9-62e84ccf225d" containerName="extract-utilities" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.302715 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebe695cb-96c7-4c92-8fb9-62e84ccf225d" containerName="extract-utilities" Feb 19 18:33:28 crc kubenswrapper[4813]: E0219 18:33:28.302727 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d14b93-63e4-4673-9a84-61fba91ac9ec" containerName="extract-content" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.302734 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d14b93-63e4-4673-9a84-61fba91ac9ec" containerName="extract-content" Feb 19 18:33:28 crc kubenswrapper[4813]: E0219 18:33:28.302747 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebe695cb-96c7-4c92-8fb9-62e84ccf225d" containerName="extract-content" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.302754 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebe695cb-96c7-4c92-8fb9-62e84ccf225d" containerName="extract-content" Feb 19 18:33:28 crc kubenswrapper[4813]: E0219 18:33:28.302765 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0adcf5-5570-499d-b626-fdf0bb785e79" containerName="extract-content" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.302772 4813 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0d0adcf5-5570-499d-b626-fdf0bb785e79" containerName="extract-content" Feb 19 18:33:28 crc kubenswrapper[4813]: E0219 18:33:28.302784 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c467423-e5c4-4329-9a0c-c68058d30c91" containerName="extract-utilities" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.302793 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c467423-e5c4-4329-9a0c-c68058d30c91" containerName="extract-utilities" Feb 19 18:33:28 crc kubenswrapper[4813]: E0219 18:33:28.302804 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0adcf5-5570-499d-b626-fdf0bb785e79" containerName="registry-server" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.302812 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0adcf5-5570-499d-b626-fdf0bb785e79" containerName="registry-server" Feb 19 18:33:28 crc kubenswrapper[4813]: E0219 18:33:28.302822 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c467423-e5c4-4329-9a0c-c68058d30c91" containerName="registry-server" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.302829 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c467423-e5c4-4329-9a0c-c68058d30c91" containerName="registry-server" Feb 19 18:33:28 crc kubenswrapper[4813]: E0219 18:33:28.302843 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d14b93-63e4-4673-9a84-61fba91ac9ec" containerName="registry-server" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.302850 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d14b93-63e4-4673-9a84-61fba91ac9ec" containerName="registry-server" Feb 19 18:33:28 crc kubenswrapper[4813]: E0219 18:33:28.302859 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" containerName="oauth-openshift" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.302868 4813 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" containerName="oauth-openshift" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.302992 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f230f42e-0e70-43d5-abbb-a2bbbe3eddcb" containerName="oauth-openshift" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.303008 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d14b93-63e4-4673-9a84-61fba91ac9ec" containerName="registry-server" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.303019 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d0adcf5-5570-499d-b626-fdf0bb785e79" containerName="registry-server" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.303029 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebe695cb-96c7-4c92-8fb9-62e84ccf225d" containerName="registry-server" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.303039 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c467423-e5c4-4329-9a0c-c68058d30c91" containerName="registry-server" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.303443 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.309139 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.309667 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.310172 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.312550 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.312841 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.312981 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.313146 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.314284 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.314353 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.314293 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 
18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.314523 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.315112 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.324387 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c4f645f59-8qhsk"] Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.329666 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.335389 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.341681 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.392988 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.393082 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") 
" pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.393142 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-user-template-error\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.393203 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-session\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.393240 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.393379 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26dbddb3-8ba8-47e4-8819-518d35691467-audit-dir\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.393439 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-router-certs\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.393488 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-service-ca\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.393511 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-user-template-login\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.393612 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.393663 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.393726 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.393760 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26dbddb3-8ba8-47e4-8819-518d35691467-audit-policies\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.393795 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwjmx\" (UniqueName: \"kubernetes.io/projected/26dbddb3-8ba8-47e4-8819-518d35691467-kube-api-access-nwjmx\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.494697 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-user-template-login\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " 
pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.494746 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.494764 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.494786 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.494804 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26dbddb3-8ba8-47e4-8819-518d35691467-audit-policies\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.494820 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwjmx\" (UniqueName: 
\"kubernetes.io/projected/26dbddb3-8ba8-47e4-8819-518d35691467-kube-api-access-nwjmx\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.494848 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.494878 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.494908 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-user-template-error\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.494933 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-session\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " 
pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.494978 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.495000 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26dbddb3-8ba8-47e4-8819-518d35691467-audit-dir\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.495019 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-router-certs\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.495041 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-service-ca\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.495717 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/26dbddb3-8ba8-47e4-8819-518d35691467-audit-dir\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.495789 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-service-ca\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.495782 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26dbddb3-8ba8-47e4-8819-518d35691467-audit-policies\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.496383 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.496601 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 
18:33:28.500609 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-session\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.500645 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.500648 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-user-template-login\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.501086 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-user-template-error\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.502085 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.502392 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-router-certs\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.507268 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.512103 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26dbddb3-8ba8-47e4-8819-518d35691467-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.514370 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwjmx\" (UniqueName: \"kubernetes.io/projected/26dbddb3-8ba8-47e4-8819-518d35691467-kube-api-access-nwjmx\") pod \"oauth-openshift-c4f645f59-8qhsk\" (UID: \"26dbddb3-8ba8-47e4-8819-518d35691467\") " pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.619907 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:28 crc kubenswrapper[4813]: I0219 18:33:28.812165 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c4f645f59-8qhsk"] Feb 19 18:33:28 crc kubenswrapper[4813]: W0219 18:33:28.819507 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26dbddb3_8ba8_47e4_8819_518d35691467.slice/crio-5d0db7430cafda4064a616866b65e14dc8bf72e64c3c4e4372dea96a30f4532c WatchSource:0}: Error finding container 5d0db7430cafda4064a616866b65e14dc8bf72e64c3c4e4372dea96a30f4532c: Status 404 returned error can't find the container with id 5d0db7430cafda4064a616866b65e14dc8bf72e64c3c4e4372dea96a30f4532c Feb 19 18:33:29 crc kubenswrapper[4813]: I0219 18:33:29.194640 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" event={"ID":"26dbddb3-8ba8-47e4-8819-518d35691467","Type":"ContainerStarted","Data":"5d0db7430cafda4064a616866b65e14dc8bf72e64c3c4e4372dea96a30f4532c"} Feb 19 18:33:30 crc kubenswrapper[4813]: I0219 18:33:30.210179 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" event={"ID":"26dbddb3-8ba8-47e4-8819-518d35691467","Type":"ContainerStarted","Data":"4e2eb6eca084a16a4406a36ea3fbc7e742a3e1ce5bedf9d41d170fece8a77fda"} Feb 19 18:33:30 crc kubenswrapper[4813]: I0219 18:33:30.211367 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:30 crc kubenswrapper[4813]: I0219 18:33:30.221367 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" Feb 19 18:33:30 crc kubenswrapper[4813]: I0219 18:33:30.241201 4813 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-authentication/oauth-openshift-c4f645f59-8qhsk" podStartSLOduration=38.241179372 podStartE2EDuration="38.241179372s" podCreationTimestamp="2026-02-19 18:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:33:30.238710495 +0000 UTC m=+229.464151056" watchObservedRunningTime="2026-02-19 18:33:30.241179372 +0000 UTC m=+229.466619913" Feb 19 18:33:30 crc kubenswrapper[4813]: I0219 18:33:30.329485 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:33:30 crc kubenswrapper[4813]: I0219 18:33:30.329552 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:33:30 crc kubenswrapper[4813]: I0219 18:33:30.329641 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:33:30 crc kubenswrapper[4813]: I0219 18:33:30.330312 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 18:33:30 crc kubenswrapper[4813]: I0219 18:33:30.330376 4813 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b" gracePeriod=600 Feb 19 18:33:31 crc kubenswrapper[4813]: I0219 18:33:31.219452 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b" exitCode=0 Feb 19 18:33:31 crc kubenswrapper[4813]: I0219 18:33:31.219674 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b"} Feb 19 18:33:31 crc kubenswrapper[4813]: I0219 18:33:31.219909 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"0a45006b1055daf3f49c258aa89bf05f49fe31d2a06d4d9e9317c1dd0bdf77b7"} Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.262655 4813 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.263541 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b" gracePeriod=15 Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.263587 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" 
containerID="cri-o://7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f" gracePeriod=15 Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.263608 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b" gracePeriod=15 Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.263558 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a" gracePeriod=15 Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.263607 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320" gracePeriod=15 Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.264804 4813 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 18:33:35 crc kubenswrapper[4813]: E0219 18:33:35.265011 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.265023 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 18:33:35 crc kubenswrapper[4813]: E0219 18:33:35.265032 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.265038 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 18:33:35 crc kubenswrapper[4813]: E0219 18:33:35.265054 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.265060 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 18:33:35 crc kubenswrapper[4813]: E0219 18:33:35.265066 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.265071 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 18:33:35 crc kubenswrapper[4813]: E0219 18:33:35.265078 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.265084 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 18:33:35 crc kubenswrapper[4813]: E0219 18:33:35.265093 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.265100 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 18:33:35 crc kubenswrapper[4813]: E0219 18:33:35.265108 4813 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.265113 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.265193 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.265203 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.265209 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.265218 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.265229 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.265238 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.266544 4813 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.266946 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.271369 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.306843 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.408672 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.408983 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.409003 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.409038 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.409078 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.409145 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.409226 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.409302 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.510928 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.511022 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.511051 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.511060 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.511123 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.511138 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.511079 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.511195 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.511203 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.511241 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.511218 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.511298 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.511322 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.511352 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.511437 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.511528 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: I0219 18:33:35.608507 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 18:33:35 crc kubenswrapper[4813]: E0219 18:33:35.627733 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895b98677d15910 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 18:33:35.62712296 +0000 UTC m=+234.852563501,LastTimestamp:2026-02-19 18:33:35.62712296 +0000 UTC m=+234.852563501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 18:33:36 crc kubenswrapper[4813]: I0219 18:33:36.254229 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 18:33:36 crc kubenswrapper[4813]: I0219 18:33:36.256335 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 18:33:36 crc kubenswrapper[4813]: I0219 
18:33:36.257418 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b" exitCode=0 Feb 19 18:33:36 crc kubenswrapper[4813]: I0219 18:33:36.257461 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f" exitCode=0 Feb 19 18:33:36 crc kubenswrapper[4813]: I0219 18:33:36.257481 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a" exitCode=0 Feb 19 18:33:36 crc kubenswrapper[4813]: I0219 18:33:36.257498 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320" exitCode=2 Feb 19 18:33:36 crc kubenswrapper[4813]: I0219 18:33:36.257542 4813 scope.go:117] "RemoveContainer" containerID="f62c4c18d632f0210d0676c04bf42a7b28c8669c123d29ef70fd4129f0051a2b" Feb 19 18:33:36 crc kubenswrapper[4813]: I0219 18:33:36.262280 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c2b535699535300630c4cacb900df9ad44e34842a5b284537e368ace29c99f43"} Feb 19 18:33:36 crc kubenswrapper[4813]: I0219 18:33:36.262340 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a7a5ba718fbd3a6d7e9db5fc88abe992500df9a79e8717d337c34818cea35c55"} Feb 19 18:33:36 crc kubenswrapper[4813]: I0219 18:33:36.263989 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:36 crc kubenswrapper[4813]: I0219 18:33:36.266449 4813 generic.go:334] "Generic (PLEG): container finished" podID="f1877644-5177-4aa6-969b-7af1fb9cede5" containerID="49ddd4d7aabda736c45c5fff2116a3ecda60c45956fd1ceb26ac47f59d2d2c48" exitCode=0 Feb 19 18:33:36 crc kubenswrapper[4813]: I0219 18:33:36.266500 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f1877644-5177-4aa6-969b-7af1fb9cede5","Type":"ContainerDied","Data":"49ddd4d7aabda736c45c5fff2116a3ecda60c45956fd1ceb26ac47f59d2d2c48"} Feb 19 18:33:36 crc kubenswrapper[4813]: I0219 18:33:36.267408 4813 status_manager.go:851] "Failed to get status for pod" podUID="f1877644-5177-4aa6-969b-7af1fb9cede5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:36 crc kubenswrapper[4813]: I0219 18:33:36.267919 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:36 crc kubenswrapper[4813]: E0219 18:33:36.545157 4813 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" volumeName="registry-storage" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.275417 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.627378 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.628541 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.628901 4813 status_manager.go:851] "Failed to get status for pod" podUID="f1877644-5177-4aa6-969b-7af1fb9cede5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.632471 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.633055 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.633636 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.634093 4813 status_manager.go:851] "Failed to get status for pod" podUID="f1877644-5177-4aa6-969b-7af1fb9cede5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.634381 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.740827 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.740979 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.740995 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1877644-5177-4aa6-969b-7af1fb9cede5-kubelet-dir\") pod \"f1877644-5177-4aa6-969b-7af1fb9cede5\" (UID: \"f1877644-5177-4aa6-969b-7af1fb9cede5\") " Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.741044 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1877644-5177-4aa6-969b-7af1fb9cede5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f1877644-5177-4aa6-969b-7af1fb9cede5" (UID: "f1877644-5177-4aa6-969b-7af1fb9cede5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.741096 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f1877644-5177-4aa6-969b-7af1fb9cede5-var-lock\") pod \"f1877644-5177-4aa6-969b-7af1fb9cede5\" (UID: \"f1877644-5177-4aa6-969b-7af1fb9cede5\") " Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.741122 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.741184 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1877644-5177-4aa6-969b-7af1fb9cede5-var-lock" (OuterVolumeSpecName: "var-lock") pod "f1877644-5177-4aa6-969b-7af1fb9cede5" (UID: "f1877644-5177-4aa6-969b-7af1fb9cede5"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.741197 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1877644-5177-4aa6-969b-7af1fb9cede5-kube-api-access\") pod \"f1877644-5177-4aa6-969b-7af1fb9cede5\" (UID: \"f1877644-5177-4aa6-969b-7af1fb9cede5\") " Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.741227 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.741278 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.741360 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.741636 4813 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.741659 4813 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.741668 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1877644-5177-4aa6-969b-7af1fb9cede5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.741678 4813 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f1877644-5177-4aa6-969b-7af1fb9cede5-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.741686 4813 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.748092 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1877644-5177-4aa6-969b-7af1fb9cede5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f1877644-5177-4aa6-969b-7af1fb9cede5" (UID: "f1877644-5177-4aa6-969b-7af1fb9cede5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:33:37 crc kubenswrapper[4813]: I0219 18:33:37.843087 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1877644-5177-4aa6-969b-7af1fb9cede5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.286472 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.287233 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b" exitCode=0 Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.287331 4813 scope.go:117] "RemoveContainer" containerID="6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.287494 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.289027 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f1877644-5177-4aa6-969b-7af1fb9cede5","Type":"ContainerDied","Data":"8533725aa9bdc42188459a02deb4b3ecde2c497c3cdf1da9264797a6bba6f824"} Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.289059 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8533725aa9bdc42188459a02deb4b3ecde2c497c3cdf1da9264797a6bba6f824" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.289108 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.302461 4813 status_manager.go:851] "Failed to get status for pod" podUID="f1877644-5177-4aa6-969b-7af1fb9cede5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.302901 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.303243 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.305435 4813 scope.go:117] "RemoveContainer" containerID="7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.310648 4813 status_manager.go:851] "Failed to get status for pod" podUID="f1877644-5177-4aa6-969b-7af1fb9cede5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.311499 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.312056 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.320756 4813 scope.go:117] "RemoveContainer" containerID="72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.335664 4813 scope.go:117] "RemoveContainer" containerID="c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.349762 4813 scope.go:117] "RemoveContainer" containerID="3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.368516 4813 scope.go:117] "RemoveContainer" containerID="2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.387969 4813 scope.go:117] "RemoveContainer" containerID="6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b" Feb 19 18:33:38 crc kubenswrapper[4813]: E0219 18:33:38.389916 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\": container with ID starting with 6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b not found: ID does not exist" containerID="6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b" 
Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.390071 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b"} err="failed to get container status \"6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\": rpc error: code = NotFound desc = could not find container \"6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b\": container with ID starting with 6f319cf58bd58c099ace281ef84d183ac951986506fa62207a45ec01f7398e7b not found: ID does not exist" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.390380 4813 scope.go:117] "RemoveContainer" containerID="7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f" Feb 19 18:33:38 crc kubenswrapper[4813]: E0219 18:33:38.391498 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\": container with ID starting with 7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f not found: ID does not exist" containerID="7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.391529 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f"} err="failed to get container status \"7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\": rpc error: code = NotFound desc = could not find container \"7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f\": container with ID starting with 7b414bdca98a18315a2ebb69c39baa4748407aa769d6bc8efa4115efb47c9f4f not found: ID does not exist" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.391552 4813 scope.go:117] "RemoveContainer" 
containerID="72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a" Feb 19 18:33:38 crc kubenswrapper[4813]: E0219 18:33:38.391910 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\": container with ID starting with 72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a not found: ID does not exist" containerID="72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.391941 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a"} err="failed to get container status \"72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\": rpc error: code = NotFound desc = could not find container \"72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a\": container with ID starting with 72ae02b48db247486c8af5ba910b32e186c15460f9195868b319d1ddc111fa0a not found: ID does not exist" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.391977 4813 scope.go:117] "RemoveContainer" containerID="c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320" Feb 19 18:33:38 crc kubenswrapper[4813]: E0219 18:33:38.392720 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\": container with ID starting with c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320 not found: ID does not exist" containerID="c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.392749 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320"} err="failed to get container status \"c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\": rpc error: code = NotFound desc = could not find container \"c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320\": container with ID starting with c8e90168ea6a75495be5812591cd0c929407c9adcb7c088d4bcbe3658a430320 not found: ID does not exist" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.392763 4813 scope.go:117] "RemoveContainer" containerID="3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b" Feb 19 18:33:38 crc kubenswrapper[4813]: E0219 18:33:38.393131 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\": container with ID starting with 3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b not found: ID does not exist" containerID="3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.393234 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b"} err="failed to get container status \"3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\": rpc error: code = NotFound desc = could not find container \"3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b\": container with ID starting with 3a5217c8c5d8bdd49d4930ceb46695c6404c9cb98f22fa570d519f6fd9e90a6b not found: ID does not exist" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.393336 4813 scope.go:117] "RemoveContainer" containerID="2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864" Feb 19 18:33:38 crc kubenswrapper[4813]: E0219 18:33:38.393787 4813 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\": container with ID starting with 2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864 not found: ID does not exist" containerID="2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864" Feb 19 18:33:38 crc kubenswrapper[4813]: I0219 18:33:38.393808 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864"} err="failed to get container status \"2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\": rpc error: code = NotFound desc = could not find container \"2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864\": container with ID starting with 2ac2aac89ccdd77dc9e8df12f2d923403360a0f6c1684e847844f9a90fba6864 not found: ID does not exist" Feb 19 18:33:39 crc kubenswrapper[4813]: I0219 18:33:39.481571 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 19 18:33:41 crc kubenswrapper[4813]: I0219 18:33:41.477127 4813 status_manager.go:851] "Failed to get status for pod" podUID="f1877644-5177-4aa6-969b-7af1fb9cede5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:41 crc kubenswrapper[4813]: I0219 18:33:41.482688 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:41 
crc kubenswrapper[4813]: E0219 18:33:41.640012 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:41 crc kubenswrapper[4813]: E0219 18:33:41.640216 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:41 crc kubenswrapper[4813]: E0219 18:33:41.640484 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:41 crc kubenswrapper[4813]: E0219 18:33:41.640971 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:41 crc kubenswrapper[4813]: E0219 18:33:41.641239 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:41 crc kubenswrapper[4813]: I0219 18:33:41.641334 4813 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 18:33:41 crc kubenswrapper[4813]: E0219 18:33:41.641610 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="200ms" Feb 19 18:33:41 
crc kubenswrapper[4813]: E0219 18:33:41.774846 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895b98677d15910 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 18:33:35.62712296 +0000 UTC m=+234.852563501,LastTimestamp:2026-02-19 18:33:35.62712296 +0000 UTC m=+234.852563501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 18:33:41 crc kubenswrapper[4813]: E0219 18:33:41.842257 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="400ms" Feb 19 18:33:42 crc kubenswrapper[4813]: E0219 18:33:42.242992 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="800ms" Feb 19 18:33:43 crc kubenswrapper[4813]: E0219 18:33:43.044348 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="1.6s" Feb 19 18:33:44 crc kubenswrapper[4813]: E0219 18:33:44.644834 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="3.2s" Feb 19 18:33:47 crc kubenswrapper[4813]: I0219 18:33:47.471204 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:47 crc kubenswrapper[4813]: I0219 18:33:47.475154 4813 status_manager.go:851] "Failed to get status for pod" podUID="f1877644-5177-4aa6-969b-7af1fb9cede5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:47 crc kubenswrapper[4813]: I0219 18:33:47.476255 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:47 crc kubenswrapper[4813]: I0219 18:33:47.505121 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d7e9f575-0f71-4cf9-bbca-161279ecc067" Feb 19 18:33:47 crc kubenswrapper[4813]: I0219 18:33:47.505174 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d7e9f575-0f71-4cf9-bbca-161279ecc067" Feb 19 18:33:47 crc kubenswrapper[4813]: E0219 18:33:47.505765 4813 
mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:47 crc kubenswrapper[4813]: I0219 18:33:47.506478 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:47 crc kubenswrapper[4813]: W0219 18:33:47.528248 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-97ae682819e73a7cb459a0549d690ea21e8558d6e93fd047a07629c5977b1bad WatchSource:0}: Error finding container 97ae682819e73a7cb459a0549d690ea21e8558d6e93fd047a07629c5977b1bad: Status 404 returned error can't find the container with id 97ae682819e73a7cb459a0549d690ea21e8558d6e93fd047a07629c5977b1bad Feb 19 18:33:47 crc kubenswrapper[4813]: E0219 18:33:47.847359 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="6.4s" Feb 19 18:33:48 crc kubenswrapper[4813]: I0219 18:33:48.354481 4813 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f09604c04307b1baaf56ef14641f6975348fba6d3014b380870b36496d5c3a3e" exitCode=0 Feb 19 18:33:48 crc kubenswrapper[4813]: I0219 18:33:48.354594 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f09604c04307b1baaf56ef14641f6975348fba6d3014b380870b36496d5c3a3e"} Feb 19 18:33:48 crc kubenswrapper[4813]: I0219 18:33:48.354673 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"97ae682819e73a7cb459a0549d690ea21e8558d6e93fd047a07629c5977b1bad"} Feb 19 18:33:48 crc kubenswrapper[4813]: I0219 18:33:48.355333 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d7e9f575-0f71-4cf9-bbca-161279ecc067" Feb 19 18:33:48 crc kubenswrapper[4813]: I0219 18:33:48.355379 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d7e9f575-0f71-4cf9-bbca-161279ecc067" Feb 19 18:33:48 crc kubenswrapper[4813]: E0219 18:33:48.356322 4813 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:48 crc kubenswrapper[4813]: I0219 18:33:48.356329 4813 status_manager.go:851] "Failed to get status for pod" podUID="f1877644-5177-4aa6-969b-7af1fb9cede5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:48 crc kubenswrapper[4813]: I0219 18:33:48.357229 4813 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Feb 19 18:33:49 crc kubenswrapper[4813]: I0219 18:33:49.364235 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"de33c7b510872237d08eb705d95d91af2bd74059579f3c85ad0241eb7814e29e"} Feb 19 18:33:49 crc kubenswrapper[4813]: I0219 18:33:49.364513 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5e208cb9fe86960ecae8b8b6c94859a256b94ea0923607827b00c33de5b3d17e"} Feb 19 18:33:50 crc kubenswrapper[4813]: I0219 18:33:50.372666 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 18:33:50 crc kubenswrapper[4813]: I0219 18:33:50.373133 4813 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe" exitCode=1 Feb 19 18:33:50 crc kubenswrapper[4813]: I0219 18:33:50.373210 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe"} Feb 19 18:33:50 crc kubenswrapper[4813]: I0219 18:33:50.373731 4813 scope.go:117] "RemoveContainer" containerID="72a262261f9ec388a9b2c38c6c30d35645d6e42d482ed7468891523b7bc731fe" Feb 19 18:33:50 crc kubenswrapper[4813]: I0219 18:33:50.379859 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"02a98188b2d82ec20feecd0102a4d88128464553727d415eabce3c91132481e0"} Feb 19 18:33:50 crc kubenswrapper[4813]: I0219 18:33:50.379896 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c256e06ba20026d9a186602cdab09f69fb2b164098688561db730819abb6c5e1"} Feb 19 18:33:50 crc kubenswrapper[4813]: I0219 18:33:50.379905 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c4d43e5a96ec966f5f43e6affdff208acdc29a1f06af269381863df87b697af1"} Feb 19 18:33:50 crc kubenswrapper[4813]: I0219 18:33:50.380079 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:50 crc kubenswrapper[4813]: I0219 18:33:50.380205 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d7e9f575-0f71-4cf9-bbca-161279ecc067" Feb 19 18:33:50 crc kubenswrapper[4813]: I0219 18:33:50.380232 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d7e9f575-0f71-4cf9-bbca-161279ecc067" Feb 19 18:33:51 crc kubenswrapper[4813]: I0219 18:33:51.387393 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 18:33:51 crc kubenswrapper[4813]: I0219 18:33:51.387447 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a268629a9b590385d3d927f868d70a463fefc89b5d2a73454af096f8ec61f18e"} Feb 19 18:33:52 crc kubenswrapper[4813]: I0219 18:33:52.507398 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:52 crc kubenswrapper[4813]: I0219 18:33:52.507768 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:52 crc kubenswrapper[4813]: I0219 18:33:52.517809 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:55 crc kubenswrapper[4813]: I0219 18:33:55.395618 4813 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:55 crc kubenswrapper[4813]: I0219 18:33:55.504113 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="646ced9f-1c6e-4af5-9a20-774b4bb79962" Feb 19 18:33:56 crc kubenswrapper[4813]: I0219 18:33:56.421211 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d7e9f575-0f71-4cf9-bbca-161279ecc067" Feb 19 18:33:56 crc kubenswrapper[4813]: I0219 18:33:56.421709 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d7e9f575-0f71-4cf9-bbca-161279ecc067" Feb 19 18:33:56 crc kubenswrapper[4813]: I0219 18:33:56.423177 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="646ced9f-1c6e-4af5-9a20-774b4bb79962" Feb 19 18:33:56 crc kubenswrapper[4813]: I0219 18:33:56.425213 4813 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://5e208cb9fe86960ecae8b8b6c94859a256b94ea0923607827b00c33de5b3d17e" Feb 19 18:33:56 crc kubenswrapper[4813]: I0219 18:33:56.425245 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 18:33:57 crc kubenswrapper[4813]: I0219 18:33:57.428024 4813 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d7e9f575-0f71-4cf9-bbca-161279ecc067" Feb 19 18:33:57 crc kubenswrapper[4813]: I0219 18:33:57.428066 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d7e9f575-0f71-4cf9-bbca-161279ecc067" Feb 19 18:33:57 crc kubenswrapper[4813]: I0219 18:33:57.432049 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="646ced9f-1c6e-4af5-9a20-774b4bb79962" Feb 19 18:33:57 crc kubenswrapper[4813]: I0219 18:33:57.808174 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:33:59 crc kubenswrapper[4813]: I0219 18:33:59.861825 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 18:33:59 crc kubenswrapper[4813]: I0219 18:33:59.862346 4813 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 18:33:59 crc kubenswrapper[4813]: I0219 18:33:59.862421 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 18:34:05 crc kubenswrapper[4813]: I0219 18:34:05.430914 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 18:34:06 crc 
kubenswrapper[4813]: I0219 18:34:06.095844 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 18:34:06 crc kubenswrapper[4813]: I0219 18:34:06.215942 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 18:34:06 crc kubenswrapper[4813]: I0219 18:34:06.306136 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 18:34:06 crc kubenswrapper[4813]: I0219 18:34:06.791408 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 18:34:07 crc kubenswrapper[4813]: I0219 18:34:07.201848 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 18:34:07 crc kubenswrapper[4813]: I0219 18:34:07.273357 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 18:34:07 crc kubenswrapper[4813]: I0219 18:34:07.614049 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 18:34:07 crc kubenswrapper[4813]: I0219 18:34:07.622256 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 18:34:07 crc kubenswrapper[4813]: I0219 18:34:07.657072 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 18:34:07 crc kubenswrapper[4813]: I0219 18:34:07.816837 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 18:34:07 crc kubenswrapper[4813]: I0219 18:34:07.872516 4813 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 18:34:07 crc kubenswrapper[4813]: I0219 18:34:07.986723 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 18:34:08 crc kubenswrapper[4813]: I0219 18:34:08.262069 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 18:34:08 crc kubenswrapper[4813]: I0219 18:34:08.475653 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 18:34:08 crc kubenswrapper[4813]: I0219 18:34:08.566315 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 18:34:08 crc kubenswrapper[4813]: I0219 18:34:08.605443 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 18:34:08 crc kubenswrapper[4813]: I0219 18:34:08.653156 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 18:34:08 crc kubenswrapper[4813]: I0219 18:34:08.685718 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 18:34:08 crc kubenswrapper[4813]: I0219 18:34:08.723406 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 18:34:08 crc kubenswrapper[4813]: I0219 18:34:08.767726 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 18:34:08 crc kubenswrapper[4813]: I0219 18:34:08.854356 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 18:34:08 crc kubenswrapper[4813]: I0219 
18:34:08.920285 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 18:34:08 crc kubenswrapper[4813]: I0219 18:34:08.952046 4813 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 18:34:08 crc kubenswrapper[4813]: I0219 18:34:08.970065 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 18:34:09 crc kubenswrapper[4813]: I0219 18:34:09.046380 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 18:34:09 crc kubenswrapper[4813]: I0219 18:34:09.067121 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 18:34:09 crc kubenswrapper[4813]: I0219 18:34:09.112660 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 18:34:09 crc kubenswrapper[4813]: I0219 18:34:09.120350 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 18:34:09 crc kubenswrapper[4813]: I0219 18:34:09.321693 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 18:34:09 crc kubenswrapper[4813]: I0219 18:34:09.408229 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 18:34:09 crc kubenswrapper[4813]: I0219 18:34:09.460036 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 18:34:09 crc kubenswrapper[4813]: I0219 18:34:09.749438 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 18:34:09 crc kubenswrapper[4813]: I0219 18:34:09.791837 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 18:34:09 crc kubenswrapper[4813]: I0219 18:34:09.815939 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 18:34:09 crc kubenswrapper[4813]: I0219 18:34:09.826214 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 18:34:09 crc kubenswrapper[4813]: I0219 18:34:09.862455 4813 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 18:34:09 crc kubenswrapper[4813]: I0219 18:34:09.862529 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.000152 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.038517 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.111451 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.155121 
4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.187794 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.219873 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.231883 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.273554 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.292719 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.297648 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.352682 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.574305 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.682847 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.723010 4813 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.841058 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 18:34:10 crc kubenswrapper[4813]: I0219 18:34:10.946481 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 18:34:11 crc kubenswrapper[4813]: I0219 18:34:11.045786 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 18:34:11 crc kubenswrapper[4813]: I0219 18:34:11.262807 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 18:34:11 crc kubenswrapper[4813]: I0219 18:34:11.393808 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 18:34:11 crc kubenswrapper[4813]: I0219 18:34:11.442247 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 18:34:11 crc kubenswrapper[4813]: I0219 18:34:11.463297 4813 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 18:34:11 crc kubenswrapper[4813]: I0219 18:34:11.466374 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 18:34:11 crc kubenswrapper[4813]: I0219 18:34:11.546758 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 18:34:11 crc kubenswrapper[4813]: I0219 18:34:11.644236 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 18:34:11 crc kubenswrapper[4813]: 
I0219 18:34:11.659661 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 18:34:11 crc kubenswrapper[4813]: I0219 18:34:11.733529 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 18:34:11 crc kubenswrapper[4813]: I0219 18:34:11.740381 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 18:34:11 crc kubenswrapper[4813]: I0219 18:34:11.751384 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 18:34:11 crc kubenswrapper[4813]: I0219 18:34:11.771031 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 18:34:11 crc kubenswrapper[4813]: I0219 18:34:11.804107 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 18:34:11 crc kubenswrapper[4813]: I0219 18:34:11.956000 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 18:34:12 crc kubenswrapper[4813]: I0219 18:34:12.015425 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 18:34:12 crc kubenswrapper[4813]: I0219 18:34:12.026643 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 18:34:12 crc kubenswrapper[4813]: I0219 18:34:12.084430 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 18:34:12 crc kubenswrapper[4813]: I0219 18:34:12.110740 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 18:34:12 crc 
kubenswrapper[4813]: I0219 18:34:12.257483 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 19 18:34:12 crc kubenswrapper[4813]: I0219 18:34:12.292696 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 19 18:34:12 crc kubenswrapper[4813]: I0219 18:34:12.342730 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 19 18:34:12 crc kubenswrapper[4813]: I0219 18:34:12.473938 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 19 18:34:12 crc kubenswrapper[4813]: I0219 18:34:12.483534 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 19 18:34:12 crc kubenswrapper[4813]: I0219 18:34:12.488674 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 19 18:34:12 crc kubenswrapper[4813]: I0219 18:34:12.602661 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 19 18:34:12 crc kubenswrapper[4813]: I0219 18:34:12.668663 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 19 18:34:12 crc kubenswrapper[4813]: I0219 18:34:12.668768 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 19 18:34:12 crc kubenswrapper[4813]: I0219 18:34:12.806268 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.046409 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.071288 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.086551 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.185196 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.258745 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.288361 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.339351 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.343218 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.356287 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.367675 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.401784 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.466154 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.484573 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.608743 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.616664 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.623508 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.669115 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.669838 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.688590 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.748365 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.818871 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.846865 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.873545 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.880473 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.888515 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.935018 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.942108 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.958146 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.963627 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 19 18:34:13 crc kubenswrapper[4813]: I0219 18:34:13.988051 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.067991 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.132280 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.209309 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.213649 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.224392 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.263057 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.323003 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.326800 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.348933 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.424068 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.474175 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.558071 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.645478 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.673837 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.717942 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.723269 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.747549 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.765575 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.767890 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.873394 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.897678 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 19 18:34:14 crc kubenswrapper[4813]: I0219 18:34:14.946774 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.050657 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.080308 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.159914 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.219665 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.239054 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.296173 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.333372 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.379403 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.423078 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.515863 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.555543 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.562857 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.657096 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.727235 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.746703 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.782120 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.813116 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.815777 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.854459 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.885823 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.953193 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 19 18:34:15 crc kubenswrapper[4813]: I0219 18:34:15.979437 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.069225 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.137552 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.142945 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.147237 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.162068 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.208004 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.324128 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.450814 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.495226 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.526292 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.588546 4813 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.656286 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.678418 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.906353 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.926321 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.944463 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.944470 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.946630 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.965158 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.972256 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 19 18:34:16 crc kubenswrapper[4813]: I0219 18:34:16.986923 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 19 18:34:17 crc kubenswrapper[4813]: I0219 18:34:17.097592 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 19 18:34:17 crc kubenswrapper[4813]: I0219 18:34:17.107493 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 19 18:34:17 crc kubenswrapper[4813]: I0219 18:34:17.134604 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 19 18:34:17 crc kubenswrapper[4813]: I0219 18:34:17.151120 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 19 18:34:17 crc kubenswrapper[4813]: I0219 18:34:17.159760 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 19 18:34:17 crc kubenswrapper[4813]: I0219 18:34:17.287430 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 19 18:34:17 crc kubenswrapper[4813]: I0219 18:34:17.287547 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 19 18:34:17 crc kubenswrapper[4813]: I0219 18:34:17.325385 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 19 18:34:17 crc kubenswrapper[4813]: I0219 18:34:17.482692 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 19 18:34:17 crc kubenswrapper[4813]: I0219 18:34:17.606453 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 19 18:34:17 crc kubenswrapper[4813]: I0219 18:34:17.653817 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 19 18:34:17 crc kubenswrapper[4813]: I0219 18:34:17.722626 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 19 18:34:17 crc kubenswrapper[4813]: I0219 18:34:17.820796 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 19 18:34:17 crc kubenswrapper[4813]: I0219 18:34:17.947557 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 19 18:34:17 crc kubenswrapper[4813]: I0219 18:34:17.980495 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.012524 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.015773 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.065095 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.065344 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.099949 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.126808 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.132002 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.161031 4813 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.174237 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.248707 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.330396 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.414460 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.431278 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.481463 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.556586 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.598174 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.613439 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.619818 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.715028 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.727633 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.785787 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.838914 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.867890 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.920664 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.940395 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.941387 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 19 18:34:18 crc kubenswrapper[4813]: I0219 18:34:18.992850 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 19 18:34:19 crc kubenswrapper[4813]: I0219 18:34:19.195405 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 19 18:34:19 crc kubenswrapper[4813]: I0219 18:34:19.224068 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 19 18:34:19 crc kubenswrapper[4813]: I0219 18:34:19.501691 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 19 18:34:19 crc kubenswrapper[4813]: I0219 18:34:19.553767 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 19 18:34:19 crc kubenswrapper[4813]: I0219 18:34:19.604774 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 19 18:34:19 crc kubenswrapper[4813]: I0219 18:34:19.803932 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 19 18:34:19 crc kubenswrapper[4813]: I0219 18:34:19.866624 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 18:34:19 crc kubenswrapper[4813]: I0219 18:34:19.874773 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 18:34:19 crc kubenswrapper[4813]: I0219 18:34:19.906086 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 19 18:34:19 crc kubenswrapper[4813]: I0219 18:34:19.937796 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 19 18:34:19 crc kubenswrapper[4813]: I0219 18:34:19.980424 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 19 18:34:20 crc kubenswrapper[4813]: I0219 18:34:20.048242 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 19 18:34:20 crc kubenswrapper[4813]: I0219 18:34:20.091225 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 19 18:34:20 crc kubenswrapper[4813]: I0219 18:34:20.206044 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 19 18:34:20 crc kubenswrapper[4813]: I0219 18:34:20.238397 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 19 18:34:20 crc kubenswrapper[4813]: I0219 18:34:20.300447 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 19 18:34:20 crc kubenswrapper[4813]: I0219 18:34:20.318384 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 19 18:34:20 crc kubenswrapper[4813]: I0219 18:34:20.352464 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 19 18:34:20 crc kubenswrapper[4813]: I0219 18:34:20.396213 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 19 18:34:20 crc kubenswrapper[4813]: I0219 18:34:20.404693 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 19 18:34:20 crc kubenswrapper[4813]: I0219 18:34:20.530553 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 19 18:34:20 crc kubenswrapper[4813]: I0219 18:34:20.655154 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 19 18:34:20 crc kubenswrapper[4813]: I0219 18:34:20.905233 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 19 18:34:21 crc kubenswrapper[4813]: I0219 18:34:21.015866 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 19 18:34:21 crc kubenswrapper[4813]: I0219 18:34:21.062186 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 19 18:34:21 crc kubenswrapper[4813]: I0219 18:34:21.289161 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 19 18:34:21 crc kubenswrapper[4813]: I0219 18:34:21.370496 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 19 18:34:21 crc kubenswrapper[4813]: I0219 18:34:21.928337 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 19 18:34:21 crc kubenswrapper[4813]: I0219 18:34:21.940782 4813 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 19 18:34:22 crc kubenswrapper[4813]: I0219 18:34:22.097876 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 19 18:34:22 crc kubenswrapper[4813]: I0219 18:34:22.177096 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 19 18:34:22 crc kubenswrapper[4813]: I0219 18:34:22.202931 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 19 18:34:22 crc kubenswrapper[4813]: I0219 18:34:22.816855 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.256815 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.352007 4813 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.357672 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=48.357648524 podStartE2EDuration="48.357648524s" podCreationTimestamp="2026-02-19 18:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:33:55.475107694 +0000 UTC m=+254.700548255" watchObservedRunningTime="2026-02-19 18:34:23.357648524 +0000 UTC m=+282.583089105"
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.361060 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.361120 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.368261 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.398057 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=28.398035272 podStartE2EDuration="28.398035272s" podCreationTimestamp="2026-02-19 18:33:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:34:23.394018145 +0000 UTC m=+282.619458726" watchObservedRunningTime="2026-02-19 18:34:23.398035272 +0000 UTC m=+282.623475843"
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.535048 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5q7bs"]
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.535298 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5q7bs" podUID="064d4c4c-ff7e-4368-817e-0234266467b5" containerName="registry-server" containerID="cri-o://34ddbe81b5edd583772a8a71fad4fe05ff92fb3a80d5fcf98a466e0b56162d6a" gracePeriod=30
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.555028 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x7z6q"]
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.555875 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x7z6q" podUID="9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" containerName="registry-server" containerID="cri-o://15987d60f76059f64dbb5ccb5fba2d5cfae61708eaa0e9c564075962739cffde" gracePeriod=30
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.573403 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h5w7j"]
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.573732 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" podUID="60220df2-7a87-42d6-8ff6-f4d2ddec5160" containerName="marketplace-operator" containerID="cri-o://f858c2d1694d80b9f81743707727ddef00e94eba84d6371de2bd96b282ab0915" gracePeriod=30
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.582831 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2xbk"]
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.583319 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p2xbk" podUID="a5ded39f-d187-46dd-a014-517d0b291e9d" containerName="registry-server" containerID="cri-o://3b8d1eccdb5a6fe99f2c656e137ebc9fa993f1937a236f9fbe3a5d17eac6c7a1" gracePeriod=30
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.589700 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z8tl9"]
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.590016 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z8tl9" podUID="2d418b66-03a6-4b99-975a-1c980c62e680" containerName="registry-server" containerID="cri-o://9cc6bfe1fe23351ff5968e699a82906813bdf48332f4843f79a0f146beb3b2d2" gracePeriod=30
Feb 19 18:34:23 crc kubenswrapper[4813]: I0219 18:34:23.716754 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.023530 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5q7bs"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.028909 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7z6q"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.033804 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2xbk"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.041739 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.073078 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wkmt\" (UniqueName: \"kubernetes.io/projected/60220df2-7a87-42d6-8ff6-f4d2ddec5160-kube-api-access-6wkmt\") pod \"60220df2-7a87-42d6-8ff6-f4d2ddec5160\" (UID: \"60220df2-7a87-42d6-8ff6-f4d2ddec5160\") "
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.073123 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/60220df2-7a87-42d6-8ff6-f4d2ddec5160-marketplace-operator-metrics\") pod \"60220df2-7a87-42d6-8ff6-f4d2ddec5160\" (UID: \"60220df2-7a87-42d6-8ff6-f4d2ddec5160\") "
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.073174 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl76p\" (UniqueName: \"kubernetes.io/projected/064d4c4c-ff7e-4368-817e-0234266467b5-kube-api-access-zl76p\") pod \"064d4c4c-ff7e-4368-817e-0234266467b5\" (UID: \"064d4c4c-ff7e-4368-817e-0234266467b5\") "
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.073197 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/064d4c4c-ff7e-4368-817e-0234266467b5-catalog-content\") pod \"064d4c4c-ff7e-4368-817e-0234266467b5\" (UID: \"064d4c4c-ff7e-4368-817e-0234266467b5\") "
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.073242 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ded39f-d187-46dd-a014-517d0b291e9d-utilities\") pod \"a5ded39f-d187-46dd-a014-517d0b291e9d\" (UID: \"a5ded39f-d187-46dd-a014-517d0b291e9d\") "
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.073259 4813 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ded39f-d187-46dd-a014-517d0b291e9d-catalog-content\") pod \"a5ded39f-d187-46dd-a014-517d0b291e9d\" (UID: \"a5ded39f-d187-46dd-a014-517d0b291e9d\") " Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.073305 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-utilities\") pod \"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81\" (UID: \"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81\") " Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.073334 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wx22\" (UniqueName: \"kubernetes.io/projected/a5ded39f-d187-46dd-a014-517d0b291e9d-kube-api-access-9wx22\") pod \"a5ded39f-d187-46dd-a014-517d0b291e9d\" (UID: \"a5ded39f-d187-46dd-a014-517d0b291e9d\") " Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.073390 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/064d4c4c-ff7e-4368-817e-0234266467b5-utilities\") pod \"064d4c4c-ff7e-4368-817e-0234266467b5\" (UID: \"064d4c4c-ff7e-4368-817e-0234266467b5\") " Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.073422 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v2nb\" (UniqueName: \"kubernetes.io/projected/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-kube-api-access-4v2nb\") pod \"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81\" (UID: \"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81\") " Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.073459 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60220df2-7a87-42d6-8ff6-f4d2ddec5160-marketplace-trusted-ca\") pod 
\"60220df2-7a87-42d6-8ff6-f4d2ddec5160\" (UID: \"60220df2-7a87-42d6-8ff6-f4d2ddec5160\") " Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.073481 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-catalog-content\") pod \"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81\" (UID: \"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81\") " Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.080353 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60220df2-7a87-42d6-8ff6-f4d2ddec5160-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "60220df2-7a87-42d6-8ff6-f4d2ddec5160" (UID: "60220df2-7a87-42d6-8ff6-f4d2ddec5160"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.081307 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ded39f-d187-46dd-a014-517d0b291e9d-utilities" (OuterVolumeSpecName: "utilities") pod "a5ded39f-d187-46dd-a014-517d0b291e9d" (UID: "a5ded39f-d187-46dd-a014-517d0b291e9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.082455 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-utilities" (OuterVolumeSpecName: "utilities") pod "9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" (UID: "9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.083305 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/064d4c4c-ff7e-4368-817e-0234266467b5-utilities" (OuterVolumeSpecName: "utilities") pod "064d4c4c-ff7e-4368-817e-0234266467b5" (UID: "064d4c4c-ff7e-4368-817e-0234266467b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.111405 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ded39f-d187-46dd-a014-517d0b291e9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5ded39f-d187-46dd-a014-517d0b291e9d" (UID: "a5ded39f-d187-46dd-a014-517d0b291e9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.117438 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ded39f-d187-46dd-a014-517d0b291e9d-kube-api-access-9wx22" (OuterVolumeSpecName: "kube-api-access-9wx22") pod "a5ded39f-d187-46dd-a014-517d0b291e9d" (UID: "a5ded39f-d187-46dd-a014-517d0b291e9d"). InnerVolumeSpecName "kube-api-access-9wx22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.117542 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064d4c4c-ff7e-4368-817e-0234266467b5-kube-api-access-zl76p" (OuterVolumeSpecName: "kube-api-access-zl76p") pod "064d4c4c-ff7e-4368-817e-0234266467b5" (UID: "064d4c4c-ff7e-4368-817e-0234266467b5"). InnerVolumeSpecName "kube-api-access-zl76p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.117772 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60220df2-7a87-42d6-8ff6-f4d2ddec5160-kube-api-access-6wkmt" (OuterVolumeSpecName: "kube-api-access-6wkmt") pod "60220df2-7a87-42d6-8ff6-f4d2ddec5160" (UID: "60220df2-7a87-42d6-8ff6-f4d2ddec5160"). InnerVolumeSpecName "kube-api-access-6wkmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.117902 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60220df2-7a87-42d6-8ff6-f4d2ddec5160-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "60220df2-7a87-42d6-8ff6-f4d2ddec5160" (UID: "60220df2-7a87-42d6-8ff6-f4d2ddec5160"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.118831 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-kube-api-access-4v2nb" (OuterVolumeSpecName: "kube-api-access-4v2nb") pod "9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" (UID: "9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81"). InnerVolumeSpecName "kube-api-access-4v2nb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.121707 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.130327 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/064d4c4c-ff7e-4368-817e-0234266467b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "064d4c4c-ff7e-4368-817e-0234266467b5" (UID: "064d4c4c-ff7e-4368-817e-0234266467b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.162537 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" (UID: "9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.174437 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d418b66-03a6-4b99-975a-1c980c62e680-catalog-content\") pod \"2d418b66-03a6-4b99-975a-1c980c62e680\" (UID: \"2d418b66-03a6-4b99-975a-1c980c62e680\") " Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.174485 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d418b66-03a6-4b99-975a-1c980c62e680-utilities\") pod \"2d418b66-03a6-4b99-975a-1c980c62e680\" (UID: \"2d418b66-03a6-4b99-975a-1c980c62e680\") " Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.174587 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5jt5\" (UniqueName: \"kubernetes.io/projected/2d418b66-03a6-4b99-975a-1c980c62e680-kube-api-access-h5jt5\") pod \"2d418b66-03a6-4b99-975a-1c980c62e680\" (UID: 
\"2d418b66-03a6-4b99-975a-1c980c62e680\") " Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.174809 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wx22\" (UniqueName: \"kubernetes.io/projected/a5ded39f-d187-46dd-a014-517d0b291e9d-kube-api-access-9wx22\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.174825 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/064d4c4c-ff7e-4368-817e-0234266467b5-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.174834 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v2nb\" (UniqueName: \"kubernetes.io/projected/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-kube-api-access-4v2nb\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.174845 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60220df2-7a87-42d6-8ff6-f4d2ddec5160-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.174855 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.174863 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wkmt\" (UniqueName: \"kubernetes.io/projected/60220df2-7a87-42d6-8ff6-f4d2ddec5160-kube-api-access-6wkmt\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.174872 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/60220df2-7a87-42d6-8ff6-f4d2ddec5160-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 
19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.174881 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl76p\" (UniqueName: \"kubernetes.io/projected/064d4c4c-ff7e-4368-817e-0234266467b5-kube-api-access-zl76p\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.174889 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/064d4c4c-ff7e-4368-817e-0234266467b5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.174911 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ded39f-d187-46dd-a014-517d0b291e9d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.174919 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ded39f-d187-46dd-a014-517d0b291e9d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.174928 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.176010 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d418b66-03a6-4b99-975a-1c980c62e680-utilities" (OuterVolumeSpecName: "utilities") pod "2d418b66-03a6-4b99-975a-1c980c62e680" (UID: "2d418b66-03a6-4b99-975a-1c980c62e680"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.176998 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d418b66-03a6-4b99-975a-1c980c62e680-kube-api-access-h5jt5" (OuterVolumeSpecName: "kube-api-access-h5jt5") pod "2d418b66-03a6-4b99-975a-1c980c62e680" (UID: "2d418b66-03a6-4b99-975a-1c980c62e680"). InnerVolumeSpecName "kube-api-access-h5jt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.275474 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5jt5\" (UniqueName: \"kubernetes.io/projected/2d418b66-03a6-4b99-975a-1c980c62e680-kube-api-access-h5jt5\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.275519 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d418b66-03a6-4b99-975a-1c980c62e680-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.313680 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d418b66-03a6-4b99-975a-1c980c62e680-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d418b66-03a6-4b99-975a-1c980c62e680" (UID: "2d418b66-03a6-4b99-975a-1c980c62e680"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.377437 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d418b66-03a6-4b99-975a-1c980c62e680-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.616439 4813 generic.go:334] "Generic (PLEG): container finished" podID="2d418b66-03a6-4b99-975a-1c980c62e680" containerID="9cc6bfe1fe23351ff5968e699a82906813bdf48332f4843f79a0f146beb3b2d2" exitCode=0 Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.616544 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8tl9" event={"ID":"2d418b66-03a6-4b99-975a-1c980c62e680","Type":"ContainerDied","Data":"9cc6bfe1fe23351ff5968e699a82906813bdf48332f4843f79a0f146beb3b2d2"} Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.616559 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8tl9" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.616586 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8tl9" event={"ID":"2d418b66-03a6-4b99-975a-1c980c62e680","Type":"ContainerDied","Data":"b3d07546f3ca1ba21b299b10dc4d7211482b0050b1d02339db8336761491e1e1"} Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.616616 4813 scope.go:117] "RemoveContainer" containerID="9cc6bfe1fe23351ff5968e699a82906813bdf48332f4843f79a0f146beb3b2d2" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.621631 4813 generic.go:334] "Generic (PLEG): container finished" podID="9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" containerID="15987d60f76059f64dbb5ccb5fba2d5cfae61708eaa0e9c564075962739cffde" exitCode=0 Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.621808 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7z6q" event={"ID":"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81","Type":"ContainerDied","Data":"15987d60f76059f64dbb5ccb5fba2d5cfae61708eaa0e9c564075962739cffde"} Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.621986 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7z6q" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.622105 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7z6q" event={"ID":"9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81","Type":"ContainerDied","Data":"68f4511b0ced185d01c7cff6df9f7cc2a38eb7aba40c526d8f7352cee05fa3a9"} Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.626593 4813 generic.go:334] "Generic (PLEG): container finished" podID="a5ded39f-d187-46dd-a014-517d0b291e9d" containerID="3b8d1eccdb5a6fe99f2c656e137ebc9fa993f1937a236f9fbe3a5d17eac6c7a1" exitCode=0 Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.626686 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2xbk" event={"ID":"a5ded39f-d187-46dd-a014-517d0b291e9d","Type":"ContainerDied","Data":"3b8d1eccdb5a6fe99f2c656e137ebc9fa993f1937a236f9fbe3a5d17eac6c7a1"} Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.626735 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2xbk" event={"ID":"a5ded39f-d187-46dd-a014-517d0b291e9d","Type":"ContainerDied","Data":"de58d87e61e41d8e49d2a99e597c166812ef701cfbcd34569f4608f6b09157dd"} Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.626910 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2xbk" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.629917 4813 generic.go:334] "Generic (PLEG): container finished" podID="60220df2-7a87-42d6-8ff6-f4d2ddec5160" containerID="f858c2d1694d80b9f81743707727ddef00e94eba84d6371de2bd96b282ab0915" exitCode=0 Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.630300 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.630394 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" event={"ID":"60220df2-7a87-42d6-8ff6-f4d2ddec5160","Type":"ContainerDied","Data":"f858c2d1694d80b9f81743707727ddef00e94eba84d6371de2bd96b282ab0915"} Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.630769 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h5w7j" event={"ID":"60220df2-7a87-42d6-8ff6-f4d2ddec5160","Type":"ContainerDied","Data":"9d80ee95d049cc65b41aeb0934ed201c9b53d1fe220ae7edb4f92ca964932a59"} Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.638528 4813 generic.go:334] "Generic (PLEG): container finished" podID="064d4c4c-ff7e-4368-817e-0234266467b5" containerID="34ddbe81b5edd583772a8a71fad4fe05ff92fb3a80d5fcf98a466e0b56162d6a" exitCode=0 Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.638570 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7bs" event={"ID":"064d4c4c-ff7e-4368-817e-0234266467b5","Type":"ContainerDied","Data":"34ddbe81b5edd583772a8a71fad4fe05ff92fb3a80d5fcf98a466e0b56162d6a"} Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.638598 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7bs" event={"ID":"064d4c4c-ff7e-4368-817e-0234266467b5","Type":"ContainerDied","Data":"9df8da8d0415337cfc57deaf42bad55c65ca33182467fe323a4f4e76a128c744"} Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.638720 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5q7bs" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.646727 4813 scope.go:117] "RemoveContainer" containerID="e43c00145dd37abb23364e3e6fe73e0c155b46ea2fe969aafdf7756250a3a579" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.683578 4813 scope.go:117] "RemoveContainer" containerID="aa60cc4cf1a2e6a3831cdf31a5b353aa0cacc1f6cee131a9b5f065ae18914de8" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.693259 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x7z6q"] Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.699814 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x7z6q"] Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.703527 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z8tl9"] Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.707294 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z8tl9"] Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.734466 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5q7bs"] Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.739105 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5q7bs"] Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.742858 4813 scope.go:117] "RemoveContainer" containerID="9cc6bfe1fe23351ff5968e699a82906813bdf48332f4843f79a0f146beb3b2d2" Feb 19 18:34:24 crc kubenswrapper[4813]: E0219 18:34:24.743430 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cc6bfe1fe23351ff5968e699a82906813bdf48332f4843f79a0f146beb3b2d2\": container with ID starting with 
9cc6bfe1fe23351ff5968e699a82906813bdf48332f4843f79a0f146beb3b2d2 not found: ID does not exist" containerID="9cc6bfe1fe23351ff5968e699a82906813bdf48332f4843f79a0f146beb3b2d2" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.743478 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cc6bfe1fe23351ff5968e699a82906813bdf48332f4843f79a0f146beb3b2d2"} err="failed to get container status \"9cc6bfe1fe23351ff5968e699a82906813bdf48332f4843f79a0f146beb3b2d2\": rpc error: code = NotFound desc = could not find container \"9cc6bfe1fe23351ff5968e699a82906813bdf48332f4843f79a0f146beb3b2d2\": container with ID starting with 9cc6bfe1fe23351ff5968e699a82906813bdf48332f4843f79a0f146beb3b2d2 not found: ID does not exist" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.743502 4813 scope.go:117] "RemoveContainer" containerID="e43c00145dd37abb23364e3e6fe73e0c155b46ea2fe969aafdf7756250a3a579" Feb 19 18:34:24 crc kubenswrapper[4813]: E0219 18:34:24.743864 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e43c00145dd37abb23364e3e6fe73e0c155b46ea2fe969aafdf7756250a3a579\": container with ID starting with e43c00145dd37abb23364e3e6fe73e0c155b46ea2fe969aafdf7756250a3a579 not found: ID does not exist" containerID="e43c00145dd37abb23364e3e6fe73e0c155b46ea2fe969aafdf7756250a3a579" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.743906 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e43c00145dd37abb23364e3e6fe73e0c155b46ea2fe969aafdf7756250a3a579"} err="failed to get container status \"e43c00145dd37abb23364e3e6fe73e0c155b46ea2fe969aafdf7756250a3a579\": rpc error: code = NotFound desc = could not find container \"e43c00145dd37abb23364e3e6fe73e0c155b46ea2fe969aafdf7756250a3a579\": container with ID starting with e43c00145dd37abb23364e3e6fe73e0c155b46ea2fe969aafdf7756250a3a579 not found: ID does not 
exist" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.743935 4813 scope.go:117] "RemoveContainer" containerID="aa60cc4cf1a2e6a3831cdf31a5b353aa0cacc1f6cee131a9b5f065ae18914de8" Feb 19 18:34:24 crc kubenswrapper[4813]: E0219 18:34:24.744698 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa60cc4cf1a2e6a3831cdf31a5b353aa0cacc1f6cee131a9b5f065ae18914de8\": container with ID starting with aa60cc4cf1a2e6a3831cdf31a5b353aa0cacc1f6cee131a9b5f065ae18914de8 not found: ID does not exist" containerID="aa60cc4cf1a2e6a3831cdf31a5b353aa0cacc1f6cee131a9b5f065ae18914de8" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.744807 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa60cc4cf1a2e6a3831cdf31a5b353aa0cacc1f6cee131a9b5f065ae18914de8"} err="failed to get container status \"aa60cc4cf1a2e6a3831cdf31a5b353aa0cacc1f6cee131a9b5f065ae18914de8\": rpc error: code = NotFound desc = could not find container \"aa60cc4cf1a2e6a3831cdf31a5b353aa0cacc1f6cee131a9b5f065ae18914de8\": container with ID starting with aa60cc4cf1a2e6a3831cdf31a5b353aa0cacc1f6cee131a9b5f065ae18914de8 not found: ID does not exist" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.744850 4813 scope.go:117] "RemoveContainer" containerID="15987d60f76059f64dbb5ccb5fba2d5cfae61708eaa0e9c564075962739cffde" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.750409 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h5w7j"] Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.754727 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h5w7j"] Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.766824 4813 scope.go:117] "RemoveContainer" containerID="72262af3ebc34ad83fc3773c9412987689428b53f3c4715036e1e0e78b153092" Feb 19 18:34:24 crc 
kubenswrapper[4813]: I0219 18:34:24.773991 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2xbk"] Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.779460 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2xbk"] Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.784229 4813 scope.go:117] "RemoveContainer" containerID="28c3d542a861de2e3d3463277db89dd35e7d038e4463780cad48346698b1ae41" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.799188 4813 scope.go:117] "RemoveContainer" containerID="15987d60f76059f64dbb5ccb5fba2d5cfae61708eaa0e9c564075962739cffde" Feb 19 18:34:24 crc kubenswrapper[4813]: E0219 18:34:24.799634 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15987d60f76059f64dbb5ccb5fba2d5cfae61708eaa0e9c564075962739cffde\": container with ID starting with 15987d60f76059f64dbb5ccb5fba2d5cfae61708eaa0e9c564075962739cffde not found: ID does not exist" containerID="15987d60f76059f64dbb5ccb5fba2d5cfae61708eaa0e9c564075962739cffde" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.799790 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15987d60f76059f64dbb5ccb5fba2d5cfae61708eaa0e9c564075962739cffde"} err="failed to get container status \"15987d60f76059f64dbb5ccb5fba2d5cfae61708eaa0e9c564075962739cffde\": rpc error: code = NotFound desc = could not find container \"15987d60f76059f64dbb5ccb5fba2d5cfae61708eaa0e9c564075962739cffde\": container with ID starting with 15987d60f76059f64dbb5ccb5fba2d5cfae61708eaa0e9c564075962739cffde not found: ID does not exist" Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.799929 4813 scope.go:117] "RemoveContainer" containerID="72262af3ebc34ad83fc3773c9412987689428b53f3c4715036e1e0e78b153092" Feb 19 18:34:24 crc kubenswrapper[4813]: E0219 18:34:24.800385 4813 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72262af3ebc34ad83fc3773c9412987689428b53f3c4715036e1e0e78b153092\": container with ID starting with 72262af3ebc34ad83fc3773c9412987689428b53f3c4715036e1e0e78b153092 not found: ID does not exist" containerID="72262af3ebc34ad83fc3773c9412987689428b53f3c4715036e1e0e78b153092"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.800432 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72262af3ebc34ad83fc3773c9412987689428b53f3c4715036e1e0e78b153092"} err="failed to get container status \"72262af3ebc34ad83fc3773c9412987689428b53f3c4715036e1e0e78b153092\": rpc error: code = NotFound desc = could not find container \"72262af3ebc34ad83fc3773c9412987689428b53f3c4715036e1e0e78b153092\": container with ID starting with 72262af3ebc34ad83fc3773c9412987689428b53f3c4715036e1e0e78b153092 not found: ID does not exist"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.800465 4813 scope.go:117] "RemoveContainer" containerID="28c3d542a861de2e3d3463277db89dd35e7d038e4463780cad48346698b1ae41"
Feb 19 18:34:24 crc kubenswrapper[4813]: E0219 18:34:24.800798 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28c3d542a861de2e3d3463277db89dd35e7d038e4463780cad48346698b1ae41\": container with ID starting with 28c3d542a861de2e3d3463277db89dd35e7d038e4463780cad48346698b1ae41 not found: ID does not exist" containerID="28c3d542a861de2e3d3463277db89dd35e7d038e4463780cad48346698b1ae41"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.800828 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c3d542a861de2e3d3463277db89dd35e7d038e4463780cad48346698b1ae41"} err="failed to get container status \"28c3d542a861de2e3d3463277db89dd35e7d038e4463780cad48346698b1ae41\": rpc error: code = NotFound desc = could not find container \"28c3d542a861de2e3d3463277db89dd35e7d038e4463780cad48346698b1ae41\": container with ID starting with 28c3d542a861de2e3d3463277db89dd35e7d038e4463780cad48346698b1ae41 not found: ID does not exist"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.800851 4813 scope.go:117] "RemoveContainer" containerID="3b8d1eccdb5a6fe99f2c656e137ebc9fa993f1937a236f9fbe3a5d17eac6c7a1"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.816298 4813 scope.go:117] "RemoveContainer" containerID="98a9f7fa20c03ee49f7fca8233e18606b9c75fe34d2abf1a88ec5304731b4707"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.829297 4813 scope.go:117] "RemoveContainer" containerID="21d278fb4154e9b59bcb1c896de2efcaece071f7dc40e7f2496dd55dff7a2045"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.844612 4813 scope.go:117] "RemoveContainer" containerID="3b8d1eccdb5a6fe99f2c656e137ebc9fa993f1937a236f9fbe3a5d17eac6c7a1"
Feb 19 18:34:24 crc kubenswrapper[4813]: E0219 18:34:24.845060 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b8d1eccdb5a6fe99f2c656e137ebc9fa993f1937a236f9fbe3a5d17eac6c7a1\": container with ID starting with 3b8d1eccdb5a6fe99f2c656e137ebc9fa993f1937a236f9fbe3a5d17eac6c7a1 not found: ID does not exist" containerID="3b8d1eccdb5a6fe99f2c656e137ebc9fa993f1937a236f9fbe3a5d17eac6c7a1"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.845176 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b8d1eccdb5a6fe99f2c656e137ebc9fa993f1937a236f9fbe3a5d17eac6c7a1"} err="failed to get container status \"3b8d1eccdb5a6fe99f2c656e137ebc9fa993f1937a236f9fbe3a5d17eac6c7a1\": rpc error: code = NotFound desc = could not find container \"3b8d1eccdb5a6fe99f2c656e137ebc9fa993f1937a236f9fbe3a5d17eac6c7a1\": container with ID starting with 3b8d1eccdb5a6fe99f2c656e137ebc9fa993f1937a236f9fbe3a5d17eac6c7a1 not found: ID does not exist"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.845287 4813 scope.go:117] "RemoveContainer" containerID="98a9f7fa20c03ee49f7fca8233e18606b9c75fe34d2abf1a88ec5304731b4707"
Feb 19 18:34:24 crc kubenswrapper[4813]: E0219 18:34:24.845648 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98a9f7fa20c03ee49f7fca8233e18606b9c75fe34d2abf1a88ec5304731b4707\": container with ID starting with 98a9f7fa20c03ee49f7fca8233e18606b9c75fe34d2abf1a88ec5304731b4707 not found: ID does not exist" containerID="98a9f7fa20c03ee49f7fca8233e18606b9c75fe34d2abf1a88ec5304731b4707"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.845690 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98a9f7fa20c03ee49f7fca8233e18606b9c75fe34d2abf1a88ec5304731b4707"} err="failed to get container status \"98a9f7fa20c03ee49f7fca8233e18606b9c75fe34d2abf1a88ec5304731b4707\": rpc error: code = NotFound desc = could not find container \"98a9f7fa20c03ee49f7fca8233e18606b9c75fe34d2abf1a88ec5304731b4707\": container with ID starting with 98a9f7fa20c03ee49f7fca8233e18606b9c75fe34d2abf1a88ec5304731b4707 not found: ID does not exist"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.845720 4813 scope.go:117] "RemoveContainer" containerID="21d278fb4154e9b59bcb1c896de2efcaece071f7dc40e7f2496dd55dff7a2045"
Feb 19 18:34:24 crc kubenswrapper[4813]: E0219 18:34:24.846003 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21d278fb4154e9b59bcb1c896de2efcaece071f7dc40e7f2496dd55dff7a2045\": container with ID starting with 21d278fb4154e9b59bcb1c896de2efcaece071f7dc40e7f2496dd55dff7a2045 not found: ID does not exist" containerID="21d278fb4154e9b59bcb1c896de2efcaece071f7dc40e7f2496dd55dff7a2045"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.846124 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d278fb4154e9b59bcb1c896de2efcaece071f7dc40e7f2496dd55dff7a2045"} err="failed to get container status \"21d278fb4154e9b59bcb1c896de2efcaece071f7dc40e7f2496dd55dff7a2045\": rpc error: code = NotFound desc = could not find container \"21d278fb4154e9b59bcb1c896de2efcaece071f7dc40e7f2496dd55dff7a2045\": container with ID starting with 21d278fb4154e9b59bcb1c896de2efcaece071f7dc40e7f2496dd55dff7a2045 not found: ID does not exist"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.846213 4813 scope.go:117] "RemoveContainer" containerID="f858c2d1694d80b9f81743707727ddef00e94eba84d6371de2bd96b282ab0915"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.862187 4813 scope.go:117] "RemoveContainer" containerID="f858c2d1694d80b9f81743707727ddef00e94eba84d6371de2bd96b282ab0915"
Feb 19 18:34:24 crc kubenswrapper[4813]: E0219 18:34:24.862734 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f858c2d1694d80b9f81743707727ddef00e94eba84d6371de2bd96b282ab0915\": container with ID starting with f858c2d1694d80b9f81743707727ddef00e94eba84d6371de2bd96b282ab0915 not found: ID does not exist" containerID="f858c2d1694d80b9f81743707727ddef00e94eba84d6371de2bd96b282ab0915"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.862857 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f858c2d1694d80b9f81743707727ddef00e94eba84d6371de2bd96b282ab0915"} err="failed to get container status \"f858c2d1694d80b9f81743707727ddef00e94eba84d6371de2bd96b282ab0915\": rpc error: code = NotFound desc = could not find container \"f858c2d1694d80b9f81743707727ddef00e94eba84d6371de2bd96b282ab0915\": container with ID starting with f858c2d1694d80b9f81743707727ddef00e94eba84d6371de2bd96b282ab0915 not found: ID does not exist"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.862969 4813 scope.go:117] "RemoveContainer" containerID="34ddbe81b5edd583772a8a71fad4fe05ff92fb3a80d5fcf98a466e0b56162d6a"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.881128 4813 scope.go:117] "RemoveContainer" containerID="d0ce1077bd3b84e3cb988b85f895da0833d00de4aee7032a9dce185e352d40f0"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.896010 4813 scope.go:117] "RemoveContainer" containerID="c8d5e36b284e59d65ea3e9a53efca07e0b21d7aab02930e7f1422f727a42c476"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.913279 4813 scope.go:117] "RemoveContainer" containerID="34ddbe81b5edd583772a8a71fad4fe05ff92fb3a80d5fcf98a466e0b56162d6a"
Feb 19 18:34:24 crc kubenswrapper[4813]: E0219 18:34:24.913889 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ddbe81b5edd583772a8a71fad4fe05ff92fb3a80d5fcf98a466e0b56162d6a\": container with ID starting with 34ddbe81b5edd583772a8a71fad4fe05ff92fb3a80d5fcf98a466e0b56162d6a not found: ID does not exist" containerID="34ddbe81b5edd583772a8a71fad4fe05ff92fb3a80d5fcf98a466e0b56162d6a"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.913930 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ddbe81b5edd583772a8a71fad4fe05ff92fb3a80d5fcf98a466e0b56162d6a"} err="failed to get container status \"34ddbe81b5edd583772a8a71fad4fe05ff92fb3a80d5fcf98a466e0b56162d6a\": rpc error: code = NotFound desc = could not find container \"34ddbe81b5edd583772a8a71fad4fe05ff92fb3a80d5fcf98a466e0b56162d6a\": container with ID starting with 34ddbe81b5edd583772a8a71fad4fe05ff92fb3a80d5fcf98a466e0b56162d6a not found: ID does not exist"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.913984 4813 scope.go:117] "RemoveContainer" containerID="d0ce1077bd3b84e3cb988b85f895da0833d00de4aee7032a9dce185e352d40f0"
Feb 19 18:34:24 crc kubenswrapper[4813]: E0219 18:34:24.914274 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0ce1077bd3b84e3cb988b85f895da0833d00de4aee7032a9dce185e352d40f0\": container with ID starting with d0ce1077bd3b84e3cb988b85f895da0833d00de4aee7032a9dce185e352d40f0 not found: ID does not exist" containerID="d0ce1077bd3b84e3cb988b85f895da0833d00de4aee7032a9dce185e352d40f0"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.914384 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0ce1077bd3b84e3cb988b85f895da0833d00de4aee7032a9dce185e352d40f0"} err="failed to get container status \"d0ce1077bd3b84e3cb988b85f895da0833d00de4aee7032a9dce185e352d40f0\": rpc error: code = NotFound desc = could not find container \"d0ce1077bd3b84e3cb988b85f895da0833d00de4aee7032a9dce185e352d40f0\": container with ID starting with d0ce1077bd3b84e3cb988b85f895da0833d00de4aee7032a9dce185e352d40f0 not found: ID does not exist"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.914477 4813 scope.go:117] "RemoveContainer" containerID="c8d5e36b284e59d65ea3e9a53efca07e0b21d7aab02930e7f1422f727a42c476"
Feb 19 18:34:24 crc kubenswrapper[4813]: E0219 18:34:24.914975 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8d5e36b284e59d65ea3e9a53efca07e0b21d7aab02930e7f1422f727a42c476\": container with ID starting with c8d5e36b284e59d65ea3e9a53efca07e0b21d7aab02930e7f1422f727a42c476 not found: ID does not exist" containerID="c8d5e36b284e59d65ea3e9a53efca07e0b21d7aab02930e7f1422f727a42c476"
Feb 19 18:34:24 crc kubenswrapper[4813]: I0219 18:34:24.915080 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d5e36b284e59d65ea3e9a53efca07e0b21d7aab02930e7f1422f727a42c476"} err="failed to get container status \"c8d5e36b284e59d65ea3e9a53efca07e0b21d7aab02930e7f1422f727a42c476\": rpc error: code = NotFound desc = could not find container \"c8d5e36b284e59d65ea3e9a53efca07e0b21d7aab02930e7f1422f727a42c476\": container with ID starting with c8d5e36b284e59d65ea3e9a53efca07e0b21d7aab02930e7f1422f727a42c476 not found: ID does not exist"
Feb 19 18:34:25 crc kubenswrapper[4813]: I0219 18:34:25.480328 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="064d4c4c-ff7e-4368-817e-0234266467b5" path="/var/lib/kubelet/pods/064d4c4c-ff7e-4368-817e-0234266467b5/volumes"
Feb 19 18:34:25 crc kubenswrapper[4813]: I0219 18:34:25.481892 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d418b66-03a6-4b99-975a-1c980c62e680" path="/var/lib/kubelet/pods/2d418b66-03a6-4b99-975a-1c980c62e680/volumes"
Feb 19 18:34:25 crc kubenswrapper[4813]: I0219 18:34:25.483317 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60220df2-7a87-42d6-8ff6-f4d2ddec5160" path="/var/lib/kubelet/pods/60220df2-7a87-42d6-8ff6-f4d2ddec5160/volumes"
Feb 19 18:34:25 crc kubenswrapper[4813]: I0219 18:34:25.485186 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" path="/var/lib/kubelet/pods/9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81/volumes"
Feb 19 18:34:25 crc kubenswrapper[4813]: I0219 18:34:25.486440 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ded39f-d187-46dd-a014-517d0b291e9d" path="/var/lib/kubelet/pods/a5ded39f-d187-46dd-a014-517d0b291e9d/volumes"
Feb 19 18:34:29 crc kubenswrapper[4813]: I0219 18:34:29.380219 4813 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 18:34:29 crc kubenswrapper[4813]: I0219 18:34:29.380861 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c2b535699535300630c4cacb900df9ad44e34842a5b284537e368ace29c99f43" gracePeriod=5
Feb 19 18:34:34 crc kubenswrapper[4813]: I0219 18:34:34.700069 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 19 18:34:34 crc kubenswrapper[4813]: I0219 18:34:34.700796 4813 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c2b535699535300630c4cacb900df9ad44e34842a5b284537e368ace29c99f43" exitCode=137
Feb 19 18:34:34 crc kubenswrapper[4813]: I0219 18:34:34.986767 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 19 18:34:34 crc kubenswrapper[4813]: I0219 18:34:34.986870 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.014093 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.014172 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.014238 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.014276 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.014320 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.014355 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.014369 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.014410 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.014445 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.014676 4813 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.014706 4813 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.014723 4813 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.014739 4813 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.026114 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.116232 4813 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.484395 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.485319 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.499214 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.499276 4813 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="44c4e518-000a-486a-b5b7-7be46d720b35"
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.506249 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.506507 4813 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="44c4e518-000a-486a-b5b7-7be46d720b35"
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.713073 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.713251 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 18:34:35 crc kubenswrapper[4813]: I0219 18:34:35.713256 4813 scope.go:117] "RemoveContainer" containerID="c2b535699535300630c4cacb900df9ad44e34842a5b284537e368ace29c99f43"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.921515 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6r4vx"]
Feb 19 18:34:37 crc kubenswrapper[4813]: E0219 18:34:37.922052 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" containerName="registry-server"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922069 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" containerName="registry-server"
Feb 19 18:34:37 crc kubenswrapper[4813]: E0219 18:34:37.922083 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ded39f-d187-46dd-a014-517d0b291e9d" containerName="extract-utilities"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922092 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ded39f-d187-46dd-a014-517d0b291e9d" containerName="extract-utilities"
Feb 19 18:34:37 crc kubenswrapper[4813]: E0219 18:34:37.922102 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d418b66-03a6-4b99-975a-1c980c62e680" containerName="extract-content"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922111 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d418b66-03a6-4b99-975a-1c980c62e680" containerName="extract-content"
Feb 19 18:34:37 crc kubenswrapper[4813]: E0219 18:34:37.922119 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064d4c4c-ff7e-4368-817e-0234266467b5" containerName="registry-server"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922127 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="064d4c4c-ff7e-4368-817e-0234266467b5" containerName="registry-server"
Feb 19 18:34:37 crc kubenswrapper[4813]: E0219 18:34:37.922141 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" containerName="extract-content"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922148 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" containerName="extract-content"
Feb 19 18:34:37 crc kubenswrapper[4813]: E0219 18:34:37.922159 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ded39f-d187-46dd-a014-517d0b291e9d" containerName="extract-content"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922168 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ded39f-d187-46dd-a014-517d0b291e9d" containerName="extract-content"
Feb 19 18:34:37 crc kubenswrapper[4813]: E0219 18:34:37.922181 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1877644-5177-4aa6-969b-7af1fb9cede5" containerName="installer"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922188 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1877644-5177-4aa6-969b-7af1fb9cede5" containerName="installer"
Feb 19 18:34:37 crc kubenswrapper[4813]: E0219 18:34:37.922200 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064d4c4c-ff7e-4368-817e-0234266467b5" containerName="extract-content"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922209 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="064d4c4c-ff7e-4368-817e-0234266467b5" containerName="extract-content"
Feb 19 18:34:37 crc kubenswrapper[4813]: E0219 18:34:37.922220 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ded39f-d187-46dd-a014-517d0b291e9d" containerName="registry-server"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922228 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ded39f-d187-46dd-a014-517d0b291e9d" containerName="registry-server"
Feb 19 18:34:37 crc kubenswrapper[4813]: E0219 18:34:37.922241 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60220df2-7a87-42d6-8ff6-f4d2ddec5160" containerName="marketplace-operator"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922249 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="60220df2-7a87-42d6-8ff6-f4d2ddec5160" containerName="marketplace-operator"
Feb 19 18:34:37 crc kubenswrapper[4813]: E0219 18:34:37.922259 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922266 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 19 18:34:37 crc kubenswrapper[4813]: E0219 18:34:37.922277 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d418b66-03a6-4b99-975a-1c980c62e680" containerName="extract-utilities"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922284 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d418b66-03a6-4b99-975a-1c980c62e680" containerName="extract-utilities"
Feb 19 18:34:37 crc kubenswrapper[4813]: E0219 18:34:37.922296 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d418b66-03a6-4b99-975a-1c980c62e680" containerName="registry-server"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922303 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d418b66-03a6-4b99-975a-1c980c62e680" containerName="registry-server"
Feb 19 18:34:37 crc kubenswrapper[4813]: E0219 18:34:37.922313 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" containerName="extract-utilities"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922320 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" containerName="extract-utilities"
Feb 19 18:34:37 crc kubenswrapper[4813]: E0219 18:34:37.922330 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064d4c4c-ff7e-4368-817e-0234266467b5" containerName="extract-utilities"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922338 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="064d4c4c-ff7e-4368-817e-0234266467b5" containerName="extract-utilities"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922447 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d418b66-03a6-4b99-975a-1c980c62e680" containerName="registry-server"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922460 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ded39f-d187-46dd-a014-517d0b291e9d" containerName="registry-server"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922475 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a3d0dbc-cf99-4d9f-8ab2-a7f480f65d81" containerName="registry-server"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922488 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="60220df2-7a87-42d6-8ff6-f4d2ddec5160" containerName="marketplace-operator"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922499 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922510 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="064d4c4c-ff7e-4368-817e-0234266467b5" containerName="registry-server"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922522 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1877644-5177-4aa6-969b-7af1fb9cede5" containerName="installer"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.922936 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.926119 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6r4vx"]
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.928576 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.929839 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.930777 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.931098 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.946721 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.952722 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr66n\" (UniqueName: \"kubernetes.io/projected/9f6ab327-3242-4ed2-a472-21b7f1a0bcbf-kube-api-access-mr66n\") pod \"marketplace-operator-79b997595-6r4vx\" (UID: \"9f6ab327-3242-4ed2-a472-21b7f1a0bcbf\") " pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.952776 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9f6ab327-3242-4ed2-a472-21b7f1a0bcbf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6r4vx\" (UID: \"9f6ab327-3242-4ed2-a472-21b7f1a0bcbf\") " pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx"
Feb 19 18:34:37 crc kubenswrapper[4813]: I0219 18:34:37.952832 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f6ab327-3242-4ed2-a472-21b7f1a0bcbf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6r4vx\" (UID: \"9f6ab327-3242-4ed2-a472-21b7f1a0bcbf\") " pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx"
Feb 19 18:34:38 crc kubenswrapper[4813]: I0219 18:34:38.053685 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9f6ab327-3242-4ed2-a472-21b7f1a0bcbf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6r4vx\" (UID: \"9f6ab327-3242-4ed2-a472-21b7f1a0bcbf\") " pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx"
Feb 19 18:34:38 crc kubenswrapper[4813]: I0219 18:34:38.053781 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f6ab327-3242-4ed2-a472-21b7f1a0bcbf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6r4vx\" (UID: \"9f6ab327-3242-4ed2-a472-21b7f1a0bcbf\") " pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx"
Feb 19 18:34:38 crc kubenswrapper[4813]: I0219 18:34:38.053840 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr66n\" (UniqueName: \"kubernetes.io/projected/9f6ab327-3242-4ed2-a472-21b7f1a0bcbf-kube-api-access-mr66n\") pod \"marketplace-operator-79b997595-6r4vx\" (UID: \"9f6ab327-3242-4ed2-a472-21b7f1a0bcbf\") " pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx"
Feb 19 18:34:38 crc kubenswrapper[4813]: I0219 18:34:38.054898 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f6ab327-3242-4ed2-a472-21b7f1a0bcbf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6r4vx\" (UID: \"9f6ab327-3242-4ed2-a472-21b7f1a0bcbf\") " pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx"
Feb 19 18:34:38 crc kubenswrapper[4813]: I0219 18:34:38.059593 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9f6ab327-3242-4ed2-a472-21b7f1a0bcbf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6r4vx\" (UID: \"9f6ab327-3242-4ed2-a472-21b7f1a0bcbf\") " pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx"
Feb 19 18:34:38 crc kubenswrapper[4813]: I0219 18:34:38.074693 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr66n\" (UniqueName: \"kubernetes.io/projected/9f6ab327-3242-4ed2-a472-21b7f1a0bcbf-kube-api-access-mr66n\") pod \"marketplace-operator-79b997595-6r4vx\" (UID: \"9f6ab327-3242-4ed2-a472-21b7f1a0bcbf\") " pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx"
Feb 19 18:34:38 crc kubenswrapper[4813]: I0219 18:34:38.244772 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx"
Feb 19 18:34:38 crc kubenswrapper[4813]: I0219 18:34:38.411536 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6r4vx"]
Feb 19 18:34:38 crc kubenswrapper[4813]: I0219 18:34:38.730180 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx" event={"ID":"9f6ab327-3242-4ed2-a472-21b7f1a0bcbf","Type":"ContainerStarted","Data":"2b9de2713b1bb5ef7c594ea53ffb54a70d7b67374cdecf125a96fca128d349f4"}
Feb 19 18:34:39 crc kubenswrapper[4813]: I0219 18:34:39.743595 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx" event={"ID":"9f6ab327-3242-4ed2-a472-21b7f1a0bcbf","Type":"ContainerStarted","Data":"36dae21d00d686b86538726536411c04d2c0577921f1a808db6ef247734d9994"}
Feb 19 18:34:39 crc kubenswrapper[4813]: I0219 18:34:39.744010 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx"
Feb 19 18:34:39 crc kubenswrapper[4813]: I0219 18:34:39.750202 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx"
Feb 19 18:34:39 crc kubenswrapper[4813]: I0219 18:34:39.759702 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6r4vx" podStartSLOduration=2.759683549 podStartE2EDuration="2.759683549s" podCreationTimestamp="2026-02-19 18:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:34:39.756901836 +0000 UTC m=+298.982342377" watchObservedRunningTime="2026-02-19 18:34:39.759683549 +0000 UTC m=+298.985124120"
Feb 19 18:34:41 crc kubenswrapper[4813]: I0219 18:34:41.244145 4813
cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.433446 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mgsbg"] Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.434373 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" podUID="624d3261-b732-4c7e-b1d9-56827d44c94f" containerName="controller-manager" containerID="cri-o://de9a5d7bb578e96bdf446f3fd9373f30b317c359a855eb55f889a051a9609d96" gracePeriod=30 Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.530999 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c"] Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.531223 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" podUID="0ddfe429-ea67-4b0c-bab1-bc72117fddda" containerName="route-controller-manager" containerID="cri-o://f1faf2f7e9ab2ddaee48139255b08e48e159a8d3e8a9c35ce3ad6b5b08c169f4" gracePeriod=30 Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.792457 4813 generic.go:334] "Generic (PLEG): container finished" podID="624d3261-b732-4c7e-b1d9-56827d44c94f" containerID="de9a5d7bb578e96bdf446f3fd9373f30b317c359a855eb55f889a051a9609d96" exitCode=0 Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.792541 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" event={"ID":"624d3261-b732-4c7e-b1d9-56827d44c94f","Type":"ContainerDied","Data":"de9a5d7bb578e96bdf446f3fd9373f30b317c359a855eb55f889a051a9609d96"} Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.792853 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" event={"ID":"624d3261-b732-4c7e-b1d9-56827d44c94f","Type":"ContainerDied","Data":"ee0d57d2a9683f7247c500d27417dc8636d4b63013044d4e3f2c02ae1a16a930"} Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.792872 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee0d57d2a9683f7247c500d27417dc8636d4b63013044d4e3f2c02ae1a16a930" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.794282 4813 generic.go:334] "Generic (PLEG): container finished" podID="0ddfe429-ea67-4b0c-bab1-bc72117fddda" containerID="f1faf2f7e9ab2ddaee48139255b08e48e159a8d3e8a9c35ce3ad6b5b08c169f4" exitCode=0 Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.794324 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" event={"ID":"0ddfe429-ea67-4b0c-bab1-bc72117fddda","Type":"ContainerDied","Data":"f1faf2f7e9ab2ddaee48139255b08e48e159a8d3e8a9c35ce3ad6b5b08c169f4"} Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.842745 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.850637 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.944324 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7"] Feb 19 18:34:47 crc kubenswrapper[4813]: E0219 18:34:47.944551 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624d3261-b732-4c7e-b1d9-56827d44c94f" containerName="controller-manager" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.944566 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="624d3261-b732-4c7e-b1d9-56827d44c94f" containerName="controller-manager" Feb 19 18:34:47 crc kubenswrapper[4813]: E0219 18:34:47.944574 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ddfe429-ea67-4b0c-bab1-bc72117fddda" containerName="route-controller-manager" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.944582 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ddfe429-ea67-4b0c-bab1-bc72117fddda" containerName="route-controller-manager" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.944675 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="624d3261-b732-4c7e-b1d9-56827d44c94f" containerName="controller-manager" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.944692 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ddfe429-ea67-4b0c-bab1-bc72117fddda" containerName="route-controller-manager" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.945081 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.971986 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7"] Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.980597 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ddfe429-ea67-4b0c-bab1-bc72117fddda-config\") pod \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\" (UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.980643 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-client-ca\") pod \"624d3261-b732-4c7e-b1d9-56827d44c94f\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.980687 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57nds\" (UniqueName: \"kubernetes.io/projected/624d3261-b732-4c7e-b1d9-56827d44c94f-kube-api-access-57nds\") pod \"624d3261-b732-4c7e-b1d9-56827d44c94f\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.980930 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ddfe429-ea67-4b0c-bab1-bc72117fddda-serving-cert\") pod \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\" (UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.981004 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ddfe429-ea67-4b0c-bab1-bc72117fddda-client-ca\") pod \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\" 
(UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.981033 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/624d3261-b732-4c7e-b1d9-56827d44c94f-serving-cert\") pod \"624d3261-b732-4c7e-b1d9-56827d44c94f\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.981073 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-proxy-ca-bundles\") pod \"624d3261-b732-4c7e-b1d9-56827d44c94f\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.981133 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-config\") pod \"624d3261-b732-4c7e-b1d9-56827d44c94f\" (UID: \"624d3261-b732-4c7e-b1d9-56827d44c94f\") " Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.981162 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcl89\" (UniqueName: \"kubernetes.io/projected/0ddfe429-ea67-4b0c-bab1-bc72117fddda-kube-api-access-hcl89\") pod \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\" (UID: \"0ddfe429-ea67-4b0c-bab1-bc72117fddda\") " Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.981315 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-serving-cert\") pod \"route-controller-manager-cd59f947b-d2km7\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") " pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.981372 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-client-ca\") pod \"route-controller-manager-cd59f947b-d2km7\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") " pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.981473 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9ffr\" (UniqueName: \"kubernetes.io/projected/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-kube-api-access-w9ffr\") pod \"route-controller-manager-cd59f947b-d2km7\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") " pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.981517 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-config\") pod \"route-controller-manager-cd59f947b-d2km7\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") " pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.981319 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-client-ca" (OuterVolumeSpecName: "client-ca") pod "624d3261-b732-4c7e-b1d9-56827d44c94f" (UID: "624d3261-b732-4c7e-b1d9-56827d44c94f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.981713 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "624d3261-b732-4c7e-b1d9-56827d44c94f" (UID: "624d3261-b732-4c7e-b1d9-56827d44c94f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.981744 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ddfe429-ea67-4b0c-bab1-bc72117fddda-client-ca" (OuterVolumeSpecName: "client-ca") pod "0ddfe429-ea67-4b0c-bab1-bc72117fddda" (UID: "0ddfe429-ea67-4b0c-bab1-bc72117fddda"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.982032 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-config" (OuterVolumeSpecName: "config") pod "624d3261-b732-4c7e-b1d9-56827d44c94f" (UID: "624d3261-b732-4c7e-b1d9-56827d44c94f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.983127 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ddfe429-ea67-4b0c-bab1-bc72117fddda-config" (OuterVolumeSpecName: "config") pod "0ddfe429-ea67-4b0c-bab1-bc72117fddda" (UID: "0ddfe429-ea67-4b0c-bab1-bc72117fddda"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.986496 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/624d3261-b732-4c7e-b1d9-56827d44c94f-kube-api-access-57nds" (OuterVolumeSpecName: "kube-api-access-57nds") pod "624d3261-b732-4c7e-b1d9-56827d44c94f" (UID: "624d3261-b732-4c7e-b1d9-56827d44c94f"). InnerVolumeSpecName "kube-api-access-57nds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.986532 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ddfe429-ea67-4b0c-bab1-bc72117fddda-kube-api-access-hcl89" (OuterVolumeSpecName: "kube-api-access-hcl89") pod "0ddfe429-ea67-4b0c-bab1-bc72117fddda" (UID: "0ddfe429-ea67-4b0c-bab1-bc72117fddda"). InnerVolumeSpecName "kube-api-access-hcl89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.986583 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/624d3261-b732-4c7e-b1d9-56827d44c94f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "624d3261-b732-4c7e-b1d9-56827d44c94f" (UID: "624d3261-b732-4c7e-b1d9-56827d44c94f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:34:47 crc kubenswrapper[4813]: I0219 18:34:47.987116 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ddfe429-ea67-4b0c-bab1-bc72117fddda-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0ddfe429-ea67-4b0c-bab1-bc72117fddda" (UID: "0ddfe429-ea67-4b0c-bab1-bc72117fddda"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.082228 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9ffr\" (UniqueName: \"kubernetes.io/projected/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-kube-api-access-w9ffr\") pod \"route-controller-manager-cd59f947b-d2km7\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") " pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.082289 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-config\") pod \"route-controller-manager-cd59f947b-d2km7\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") " pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.082326 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-serving-cert\") pod \"route-controller-manager-cd59f947b-d2km7\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") " pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.082361 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-client-ca\") pod \"route-controller-manager-cd59f947b-d2km7\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") " pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.082418 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0ddfe429-ea67-4b0c-bab1-bc72117fddda-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.082431 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ddfe429-ea67-4b0c-bab1-bc72117fddda-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.082444 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/624d3261-b732-4c7e-b1d9-56827d44c94f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.082455 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.082470 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.082482 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcl89\" (UniqueName: \"kubernetes.io/projected/0ddfe429-ea67-4b0c-bab1-bc72117fddda-kube-api-access-hcl89\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.082494 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ddfe429-ea67-4b0c-bab1-bc72117fddda-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.082505 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/624d3261-b732-4c7e-b1d9-56827d44c94f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 
18:34:48.082516 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57nds\" (UniqueName: \"kubernetes.io/projected/624d3261-b732-4c7e-b1d9-56827d44c94f-kube-api-access-57nds\") on node \"crc\" DevicePath \"\"" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.083501 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-client-ca\") pod \"route-controller-manager-cd59f947b-d2km7\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") " pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.083562 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-config\") pod \"route-controller-manager-cd59f947b-d2km7\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") " pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.085312 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-serving-cert\") pod \"route-controller-manager-cd59f947b-d2km7\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") " pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.098551 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9ffr\" (UniqueName: \"kubernetes.io/projected/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-kube-api-access-w9ffr\") pod \"route-controller-manager-cd59f947b-d2km7\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") " pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.256579 
4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.731106 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7"] Feb 19 18:34:48 crc kubenswrapper[4813]: W0219 18:34:48.737604 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0de2fac_0b8c_4d60_8d9b_8e3c82391725.slice/crio-934bb3c4346bb6a3ae9cd36c2a33d0e12adaf564c0a95a3187746da45de7c8d6 WatchSource:0}: Error finding container 934bb3c4346bb6a3ae9cd36c2a33d0e12adaf564c0a95a3187746da45de7c8d6: Status 404 returned error can't find the container with id 934bb3c4346bb6a3ae9cd36c2a33d0e12adaf564c0a95a3187746da45de7c8d6 Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.799030 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" event={"ID":"e0de2fac-0b8c-4d60-8d9b-8e3c82391725","Type":"ContainerStarted","Data":"934bb3c4346bb6a3ae9cd36c2a33d0e12adaf564c0a95a3187746da45de7c8d6"} Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.800863 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mgsbg" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.800859 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" event={"ID":"0ddfe429-ea67-4b0c-bab1-bc72117fddda","Type":"ContainerDied","Data":"c5a623d90986ad3c5bae3c2215aa2f9caf17992ca8dd382ae9e923220023eae2"} Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.800931 4813 scope.go:117] "RemoveContainer" containerID="f1faf2f7e9ab2ddaee48139255b08e48e159a8d3e8a9c35ce3ad6b5b08c169f4" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.800930 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c" Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.836522 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mgsbg"] Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.843195 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mgsbg"] Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.850862 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c"] Feb 19 18:34:48 crc kubenswrapper[4813]: I0219 18:34:48.854839 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jfz6c"] Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.352272 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f55469458-8qg9p"] Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.352940 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p" Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.356533 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.356845 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.361521 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.361813 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.361821 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.362387 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.369347 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.400349 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-config\") pod \"controller-manager-5f55469458-8qg9p\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") " pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p" Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.400437 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-proxy-ca-bundles\") pod \"controller-manager-5f55469458-8qg9p\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") " pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p" Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.400476 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0c456e4-1337-438d-b1c5-2c8038944c80-serving-cert\") pod \"controller-manager-5f55469458-8qg9p\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") " pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p" Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.400502 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st7lp\" (UniqueName: \"kubernetes.io/projected/b0c456e4-1337-438d-b1c5-2c8038944c80-kube-api-access-st7lp\") pod \"controller-manager-5f55469458-8qg9p\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") " pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p" Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.400541 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-client-ca\") pod \"controller-manager-5f55469458-8qg9p\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") " pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p" Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.434732 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f55469458-8qg9p"] Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.483920 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ddfe429-ea67-4b0c-bab1-bc72117fddda" 
path="/var/lib/kubelet/pods/0ddfe429-ea67-4b0c-bab1-bc72117fddda/volumes"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.485214 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="624d3261-b732-4c7e-b1d9-56827d44c94f" path="/var/lib/kubelet/pods/624d3261-b732-4c7e-b1d9-56827d44c94f/volumes"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.501720 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-config\") pod \"controller-manager-5f55469458-8qg9p\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") " pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.501843 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-proxy-ca-bundles\") pod \"controller-manager-5f55469458-8qg9p\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") " pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.501909 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0c456e4-1337-438d-b1c5-2c8038944c80-serving-cert\") pod \"controller-manager-5f55469458-8qg9p\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") " pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.501950 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st7lp\" (UniqueName: \"kubernetes.io/projected/b0c456e4-1337-438d-b1c5-2c8038944c80-kube-api-access-st7lp\") pod \"controller-manager-5f55469458-8qg9p\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") " pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.502035 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-client-ca\") pod \"controller-manager-5f55469458-8qg9p\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") " pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.503592 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-proxy-ca-bundles\") pod \"controller-manager-5f55469458-8qg9p\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") " pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.503711 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-client-ca\") pod \"controller-manager-5f55469458-8qg9p\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") " pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.505167 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-config\") pod \"controller-manager-5f55469458-8qg9p\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") " pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.517601 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0c456e4-1337-438d-b1c5-2c8038944c80-serving-cert\") pod \"controller-manager-5f55469458-8qg9p\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") " pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.532972 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st7lp\" (UniqueName: \"kubernetes.io/projected/b0c456e4-1337-438d-b1c5-2c8038944c80-kube-api-access-st7lp\") pod \"controller-manager-5f55469458-8qg9p\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") " pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.683182 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.807821 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" event={"ID":"e0de2fac-0b8c-4d60-8d9b-8e3c82391725","Type":"ContainerStarted","Data":"a60f198d35de2bda2f84bb2c236deba98be32385706f1b37fa98c89f7fecaa6d"}
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.808326 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.822910 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.845752 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" podStartSLOduration=2.845728588 podStartE2EDuration="2.845728588s" podCreationTimestamp="2026-02-19 18:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:34:49.826026536 +0000 UTC m=+309.051467107" watchObservedRunningTime="2026-02-19 18:34:49.845728588 +0000 UTC m=+309.071169169"
Feb 19 18:34:49 crc kubenswrapper[4813]: I0219 18:34:49.922770 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f55469458-8qg9p"]
Feb 19 18:34:50 crc kubenswrapper[4813]: I0219 18:34:50.820280 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p" event={"ID":"b0c456e4-1337-438d-b1c5-2c8038944c80","Type":"ContainerStarted","Data":"1b9b09b2c22a6daeb28fd6c0d198d0c2e5eae65c20a159b52d89cd91f3e04771"}
Feb 19 18:34:50 crc kubenswrapper[4813]: I0219 18:34:50.820682 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p" event={"ID":"b0c456e4-1337-438d-b1c5-2c8038944c80","Type":"ContainerStarted","Data":"7d2dace1c06a0bd8e7a9e6fae7997fae9e3e2746d796436c713b5bb1310f5086"}
Feb 19 18:34:50 crc kubenswrapper[4813]: I0219 18:34:50.820714 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p"
Feb 19 18:34:50 crc kubenswrapper[4813]: I0219 18:34:50.829732 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p"
Feb 19 18:34:50 crc kubenswrapper[4813]: I0219 18:34:50.841124 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p" podStartSLOduration=3.841103922 podStartE2EDuration="3.841103922s" podCreationTimestamp="2026-02-19 18:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:34:50.84075733 +0000 UTC m=+310.066197901" watchObservedRunningTime="2026-02-19 18:34:50.841103922 +0000 UTC m=+310.066544473"
Feb 19 18:34:52 crc kubenswrapper[4813]: I0219 18:34:52.474828 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f55469458-8qg9p"]
Feb 19 18:34:53 crc kubenswrapper[4813]: I0219 18:34:53.841181 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p" podUID="b0c456e4-1337-438d-b1c5-2c8038944c80" containerName="controller-manager" containerID="cri-o://1b9b09b2c22a6daeb28fd6c0d198d0c2e5eae65c20a159b52d89cd91f3e04771" gracePeriod=30
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.444327 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.471844 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"]
Feb 19 18:34:54 crc kubenswrapper[4813]: E0219 18:34:54.472177 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c456e4-1337-438d-b1c5-2c8038944c80" containerName="controller-manager"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.472196 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c456e4-1337-438d-b1c5-2c8038944c80" containerName="controller-manager"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.472329 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c456e4-1337-438d-b1c5-2c8038944c80" containerName="controller-manager"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.472914 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.478899 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f7efcb9-898f-45a8-80e4-3eba9475185b-serving-cert\") pod \"controller-manager-55fb7cb878-zzmj6\" (UID: \"4f7efcb9-898f-45a8-80e4-3eba9475185b\") " pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.479128 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgqlq\" (UniqueName: \"kubernetes.io/projected/4f7efcb9-898f-45a8-80e4-3eba9475185b-kube-api-access-mgqlq\") pod \"controller-manager-55fb7cb878-zzmj6\" (UID: \"4f7efcb9-898f-45a8-80e4-3eba9475185b\") " pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.479244 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f7efcb9-898f-45a8-80e4-3eba9475185b-proxy-ca-bundles\") pod \"controller-manager-55fb7cb878-zzmj6\" (UID: \"4f7efcb9-898f-45a8-80e4-3eba9475185b\") " pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.479289 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f7efcb9-898f-45a8-80e4-3eba9475185b-config\") pod \"controller-manager-55fb7cb878-zzmj6\" (UID: \"4f7efcb9-898f-45a8-80e4-3eba9475185b\") " pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.479376 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f7efcb9-898f-45a8-80e4-3eba9475185b-client-ca\") pod \"controller-manager-55fb7cb878-zzmj6\" (UID: \"4f7efcb9-898f-45a8-80e4-3eba9475185b\") " pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.491121 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"]
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.580164 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0c456e4-1337-438d-b1c5-2c8038944c80-serving-cert\") pod \"b0c456e4-1337-438d-b1c5-2c8038944c80\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") "
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.580230 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-proxy-ca-bundles\") pod \"b0c456e4-1337-438d-b1c5-2c8038944c80\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") "
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.580252 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-client-ca\") pod \"b0c456e4-1337-438d-b1c5-2c8038944c80\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") "
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.580296 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st7lp\" (UniqueName: \"kubernetes.io/projected/b0c456e4-1337-438d-b1c5-2c8038944c80-kube-api-access-st7lp\") pod \"b0c456e4-1337-438d-b1c5-2c8038944c80\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") "
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.580329 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-config\") pod \"b0c456e4-1337-438d-b1c5-2c8038944c80\" (UID: \"b0c456e4-1337-438d-b1c5-2c8038944c80\") "
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.580422 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f7efcb9-898f-45a8-80e4-3eba9475185b-config\") pod \"controller-manager-55fb7cb878-zzmj6\" (UID: \"4f7efcb9-898f-45a8-80e4-3eba9475185b\") " pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.580452 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f7efcb9-898f-45a8-80e4-3eba9475185b-client-ca\") pod \"controller-manager-55fb7cb878-zzmj6\" (UID: \"4f7efcb9-898f-45a8-80e4-3eba9475185b\") " pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.580481 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f7efcb9-898f-45a8-80e4-3eba9475185b-serving-cert\") pod \"controller-manager-55fb7cb878-zzmj6\" (UID: \"4f7efcb9-898f-45a8-80e4-3eba9475185b\") " pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.580505 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgqlq\" (UniqueName: \"kubernetes.io/projected/4f7efcb9-898f-45a8-80e4-3eba9475185b-kube-api-access-mgqlq\") pod \"controller-manager-55fb7cb878-zzmj6\" (UID: \"4f7efcb9-898f-45a8-80e4-3eba9475185b\") " pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.580551 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f7efcb9-898f-45a8-80e4-3eba9475185b-proxy-ca-bundles\") pod \"controller-manager-55fb7cb878-zzmj6\" (UID: \"4f7efcb9-898f-45a8-80e4-3eba9475185b\") " pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.582255 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b0c456e4-1337-438d-b1c5-2c8038944c80" (UID: "b0c456e4-1337-438d-b1c5-2c8038944c80"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.582399 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-client-ca" (OuterVolumeSpecName: "client-ca") pod "b0c456e4-1337-438d-b1c5-2c8038944c80" (UID: "b0c456e4-1337-438d-b1c5-2c8038944c80"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.582720 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f7efcb9-898f-45a8-80e4-3eba9475185b-proxy-ca-bundles\") pod \"controller-manager-55fb7cb878-zzmj6\" (UID: \"4f7efcb9-898f-45a8-80e4-3eba9475185b\") " pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.583298 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f7efcb9-898f-45a8-80e4-3eba9475185b-client-ca\") pod \"controller-manager-55fb7cb878-zzmj6\" (UID: \"4f7efcb9-898f-45a8-80e4-3eba9475185b\") " pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.583361 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-config" (OuterVolumeSpecName: "config") pod "b0c456e4-1337-438d-b1c5-2c8038944c80" (UID: "b0c456e4-1337-438d-b1c5-2c8038944c80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.584999 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f7efcb9-898f-45a8-80e4-3eba9475185b-config\") pod \"controller-manager-55fb7cb878-zzmj6\" (UID: \"4f7efcb9-898f-45a8-80e4-3eba9475185b\") " pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.587112 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0c456e4-1337-438d-b1c5-2c8038944c80-kube-api-access-st7lp" (OuterVolumeSpecName: "kube-api-access-st7lp") pod "b0c456e4-1337-438d-b1c5-2c8038944c80" (UID: "b0c456e4-1337-438d-b1c5-2c8038944c80"). InnerVolumeSpecName "kube-api-access-st7lp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.590757 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f7efcb9-898f-45a8-80e4-3eba9475185b-serving-cert\") pod \"controller-manager-55fb7cb878-zzmj6\" (UID: \"4f7efcb9-898f-45a8-80e4-3eba9475185b\") " pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.592116 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c456e4-1337-438d-b1c5-2c8038944c80-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b0c456e4-1337-438d-b1c5-2c8038944c80" (UID: "b0c456e4-1337-438d-b1c5-2c8038944c80"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.598348 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgqlq\" (UniqueName: \"kubernetes.io/projected/4f7efcb9-898f-45a8-80e4-3eba9475185b-kube-api-access-mgqlq\") pod \"controller-manager-55fb7cb878-zzmj6\" (UID: \"4f7efcb9-898f-45a8-80e4-3eba9475185b\") " pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.681660 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0c456e4-1337-438d-b1c5-2c8038944c80-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.681698 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.681715 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.681727 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st7lp\" (UniqueName: \"kubernetes.io/projected/b0c456e4-1337-438d-b1c5-2c8038944c80-kube-api-access-st7lp\") on node \"crc\" DevicePath \"\""
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.681740 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c456e4-1337-438d-b1c5-2c8038944c80-config\") on node \"crc\" DevicePath \"\""
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.792526 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.850361 4813 generic.go:334] "Generic (PLEG): container finished" podID="b0c456e4-1337-438d-b1c5-2c8038944c80" containerID="1b9b09b2c22a6daeb28fd6c0d198d0c2e5eae65c20a159b52d89cd91f3e04771" exitCode=0
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.850424 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p" event={"ID":"b0c456e4-1337-438d-b1c5-2c8038944c80","Type":"ContainerDied","Data":"1b9b09b2c22a6daeb28fd6c0d198d0c2e5eae65c20a159b52d89cd91f3e04771"}
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.850482 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p" event={"ID":"b0c456e4-1337-438d-b1c5-2c8038944c80","Type":"ContainerDied","Data":"7d2dace1c06a0bd8e7a9e6fae7997fae9e3e2746d796436c713b5bb1310f5086"}
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.850523 4813 scope.go:117] "RemoveContainer" containerID="1b9b09b2c22a6daeb28fd6c0d198d0c2e5eae65c20a159b52d89cd91f3e04771"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.851243 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f55469458-8qg9p"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.887734 4813 scope.go:117] "RemoveContainer" containerID="1b9b09b2c22a6daeb28fd6c0d198d0c2e5eae65c20a159b52d89cd91f3e04771"
Feb 19 18:34:54 crc kubenswrapper[4813]: E0219 18:34:54.888825 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b9b09b2c22a6daeb28fd6c0d198d0c2e5eae65c20a159b52d89cd91f3e04771\": container with ID starting with 1b9b09b2c22a6daeb28fd6c0d198d0c2e5eae65c20a159b52d89cd91f3e04771 not found: ID does not exist" containerID="1b9b09b2c22a6daeb28fd6c0d198d0c2e5eae65c20a159b52d89cd91f3e04771"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.888904 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9b09b2c22a6daeb28fd6c0d198d0c2e5eae65c20a159b52d89cd91f3e04771"} err="failed to get container status \"1b9b09b2c22a6daeb28fd6c0d198d0c2e5eae65c20a159b52d89cd91f3e04771\": rpc error: code = NotFound desc = could not find container \"1b9b09b2c22a6daeb28fd6c0d198d0c2e5eae65c20a159b52d89cd91f3e04771\": container with ID starting with 1b9b09b2c22a6daeb28fd6c0d198d0c2e5eae65c20a159b52d89cd91f3e04771 not found: ID does not exist"
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.909498 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f55469458-8qg9p"]
Feb 19 18:34:54 crc kubenswrapper[4813]: I0219 18:34:54.915736 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f55469458-8qg9p"]
Feb 19 18:34:55 crc kubenswrapper[4813]: I0219 18:34:55.087226 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"]
Feb 19 18:34:55 crc kubenswrapper[4813]: W0219 18:34:55.095919 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7efcb9_898f_45a8_80e4_3eba9475185b.slice/crio-531fa50b3d197a10979464b60735a231cecb54d9f6c8000512ea93c15e12f021 WatchSource:0}: Error finding container 531fa50b3d197a10979464b60735a231cecb54d9f6c8000512ea93c15e12f021: Status 404 returned error can't find the container with id 531fa50b3d197a10979464b60735a231cecb54d9f6c8000512ea93c15e12f021
Feb 19 18:34:55 crc kubenswrapper[4813]: I0219 18:34:55.479719 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0c456e4-1337-438d-b1c5-2c8038944c80" path="/var/lib/kubelet/pods/b0c456e4-1337-438d-b1c5-2c8038944c80/volumes"
Feb 19 18:34:55 crc kubenswrapper[4813]: I0219 18:34:55.860192 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6" event={"ID":"4f7efcb9-898f-45a8-80e4-3eba9475185b","Type":"ContainerStarted","Data":"fe9bedecf6febc6bf48f8d5a2d6901f7571d3218658959745d42259fb5c8f63d"}
Feb 19 18:34:55 crc kubenswrapper[4813]: I0219 18:34:55.860575 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:55 crc kubenswrapper[4813]: I0219 18:34:55.860591 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6" event={"ID":"4f7efcb9-898f-45a8-80e4-3eba9475185b","Type":"ContainerStarted","Data":"531fa50b3d197a10979464b60735a231cecb54d9f6c8000512ea93c15e12f021"}
Feb 19 18:34:55 crc kubenswrapper[4813]: I0219 18:34:55.867331 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6"
Feb 19 18:34:55 crc kubenswrapper[4813]: I0219 18:34:55.885398 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55fb7cb878-zzmj6" podStartSLOduration=3.885373833 podStartE2EDuration="3.885373833s" podCreationTimestamp="2026-02-19 18:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:34:55.884660779 +0000 UTC m=+315.110101340" watchObservedRunningTime="2026-02-19 18:34:55.885373833 +0000 UTC m=+315.110814414"
Feb 19 18:35:27 crc kubenswrapper[4813]: I0219 18:35:27.431106 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7"]
Feb 19 18:35:27 crc kubenswrapper[4813]: I0219 18:35:27.431634 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" podUID="e0de2fac-0b8c-4d60-8d9b-8e3c82391725" containerName="route-controller-manager" containerID="cri-o://a60f198d35de2bda2f84bb2c236deba98be32385706f1b37fa98c89f7fecaa6d" gracePeriod=30
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.077142 4813 generic.go:334] "Generic (PLEG): container finished" podID="e0de2fac-0b8c-4d60-8d9b-8e3c82391725" containerID="a60f198d35de2bda2f84bb2c236deba98be32385706f1b37fa98c89f7fecaa6d" exitCode=0
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.077300 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" event={"ID":"e0de2fac-0b8c-4d60-8d9b-8e3c82391725","Type":"ContainerDied","Data":"a60f198d35de2bda2f84bb2c236deba98be32385706f1b37fa98c89f7fecaa6d"}
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.460539 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.506713 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"]
Feb 19 18:35:28 crc kubenswrapper[4813]: E0219 18:35:28.507081 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0de2fac-0b8c-4d60-8d9b-8e3c82391725" containerName="route-controller-manager"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.507105 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0de2fac-0b8c-4d60-8d9b-8e3c82391725" containerName="route-controller-manager"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.507717 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0de2fac-0b8c-4d60-8d9b-8e3c82391725" containerName="route-controller-manager"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.509102 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.514342 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"]
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.571703 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-serving-cert\") pod \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") "
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.572055 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-client-ca\") pod \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") "
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.572231 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-config\") pod \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") "
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.572301 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9ffr\" (UniqueName: \"kubernetes.io/projected/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-kube-api-access-w9ffr\") pod \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\" (UID: \"e0de2fac-0b8c-4d60-8d9b-8e3c82391725\") "
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.572675 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcbeb2d1-d0ff-466f-b959-c92077b55868-client-ca\") pod \"route-controller-manager-5cc88b647d-bt7tg\" (UID: \"bcbeb2d1-d0ff-466f-b959-c92077b55868\") " pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.573092 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwk6z\" (UniqueName: \"kubernetes.io/projected/bcbeb2d1-d0ff-466f-b959-c92077b55868-kube-api-access-mwk6z\") pod \"route-controller-manager-5cc88b647d-bt7tg\" (UID: \"bcbeb2d1-d0ff-466f-b959-c92077b55868\") " pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.572897 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-client-ca" (OuterVolumeSpecName: "client-ca") pod "e0de2fac-0b8c-4d60-8d9b-8e3c82391725" (UID: "e0de2fac-0b8c-4d60-8d9b-8e3c82391725"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.573175 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbeb2d1-d0ff-466f-b959-c92077b55868-config\") pod \"route-controller-manager-5cc88b647d-bt7tg\" (UID: \"bcbeb2d1-d0ff-466f-b959-c92077b55868\") " pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.572943 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-config" (OuterVolumeSpecName: "config") pod "e0de2fac-0b8c-4d60-8d9b-8e3c82391725" (UID: "e0de2fac-0b8c-4d60-8d9b-8e3c82391725"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.573318 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbeb2d1-d0ff-466f-b959-c92077b55868-serving-cert\") pod \"route-controller-manager-5cc88b647d-bt7tg\" (UID: \"bcbeb2d1-d0ff-466f-b959-c92077b55868\") " pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.573399 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.573422 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-config\") on node \"crc\" DevicePath \"\""
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.577819 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e0de2fac-0b8c-4d60-8d9b-8e3c82391725" (UID: "e0de2fac-0b8c-4d60-8d9b-8e3c82391725"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.579836 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-kube-api-access-w9ffr" (OuterVolumeSpecName: "kube-api-access-w9ffr") pod "e0de2fac-0b8c-4d60-8d9b-8e3c82391725" (UID: "e0de2fac-0b8c-4d60-8d9b-8e3c82391725"). InnerVolumeSpecName "kube-api-access-w9ffr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.674208 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbeb2d1-d0ff-466f-b959-c92077b55868-config\") pod \"route-controller-manager-5cc88b647d-bt7tg\" (UID: \"bcbeb2d1-d0ff-466f-b959-c92077b55868\") " pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.674329 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbeb2d1-d0ff-466f-b959-c92077b55868-serving-cert\") pod \"route-controller-manager-5cc88b647d-bt7tg\" (UID: \"bcbeb2d1-d0ff-466f-b959-c92077b55868\") " pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.674439 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcbeb2d1-d0ff-466f-b959-c92077b55868-client-ca\") pod \"route-controller-manager-5cc88b647d-bt7tg\" (UID: \"bcbeb2d1-d0ff-466f-b959-c92077b55868\") " pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.674496 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwk6z\" (UniqueName: \"kubernetes.io/projected/bcbeb2d1-d0ff-466f-b959-c92077b55868-kube-api-access-mwk6z\") pod \"route-controller-manager-5cc88b647d-bt7tg\" (UID: \"bcbeb2d1-d0ff-466f-b959-c92077b55868\") " pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.674558 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9ffr\" (UniqueName: \"kubernetes.io/projected/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-kube-api-access-w9ffr\") on node \"crc\" DevicePath \"\""
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.674583 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0de2fac-0b8c-4d60-8d9b-8e3c82391725-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.677086 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcbeb2d1-d0ff-466f-b959-c92077b55868-config\") pod \"route-controller-manager-5cc88b647d-bt7tg\" (UID: \"bcbeb2d1-d0ff-466f-b959-c92077b55868\") " pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.679152 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bcbeb2d1-d0ff-466f-b959-c92077b55868-client-ca\") pod \"route-controller-manager-5cc88b647d-bt7tg\" (UID: \"bcbeb2d1-d0ff-466f-b959-c92077b55868\") " pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.691112 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcbeb2d1-d0ff-466f-b959-c92077b55868-serving-cert\") pod \"route-controller-manager-5cc88b647d-bt7tg\" (UID: \"bcbeb2d1-d0ff-466f-b959-c92077b55868\") " pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.703905 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwk6z\" (UniqueName: \"kubernetes.io/projected/bcbeb2d1-d0ff-466f-b959-c92077b55868-kube-api-access-mwk6z\") pod \"route-controller-manager-5cc88b647d-bt7tg\" (UID: \"bcbeb2d1-d0ff-466f-b959-c92077b55868\") " pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"
Feb 19 18:35:28 crc kubenswrapper[4813]: I0219 18:35:28.842727 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"
Feb 19 18:35:29 crc kubenswrapper[4813]: I0219 18:35:29.086488 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" event={"ID":"e0de2fac-0b8c-4d60-8d9b-8e3c82391725","Type":"ContainerDied","Data":"934bb3c4346bb6a3ae9cd36c2a33d0e12adaf564c0a95a3187746da45de7c8d6"}
Feb 19 18:35:29 crc kubenswrapper[4813]: I0219 18:35:29.086542 4813 scope.go:117] "RemoveContainer" containerID="a60f198d35de2bda2f84bb2c236deba98be32385706f1b37fa98c89f7fecaa6d"
Feb 19 18:35:29 crc kubenswrapper[4813]: I0219 18:35:29.086677 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7"
Feb 19 18:35:29 crc kubenswrapper[4813]: I0219 18:35:29.127443 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7"]
Feb 19 18:35:29 crc kubenswrapper[4813]: I0219 18:35:29.132891 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7"]
Feb 19 18:35:29 crc kubenswrapper[4813]: I0219 18:35:29.258032 4813 patch_prober.go:28] interesting pod/route-controller-manager-cd59f947b-d2km7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 18:35:29 crc kubenswrapper[4813]: I0219 18:35:29.258385
4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-cd59f947b-d2km7" podUID="e0de2fac-0b8c-4d60-8d9b-8e3c82391725" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 18:35:29 crc kubenswrapper[4813]: I0219 18:35:29.329871 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg"] Feb 19 18:35:29 crc kubenswrapper[4813]: I0219 18:35:29.489697 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0de2fac-0b8c-4d60-8d9b-8e3c82391725" path="/var/lib/kubelet/pods/e0de2fac-0b8c-4d60-8d9b-8e3c82391725/volumes" Feb 19 18:35:30 crc kubenswrapper[4813]: I0219 18:35:30.095244 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg" event={"ID":"bcbeb2d1-d0ff-466f-b959-c92077b55868","Type":"ContainerStarted","Data":"20682b3a82b74e03723d411489583d0ca9618f9b4554f845409958d8974c9350"} Feb 19 18:35:30 crc kubenswrapper[4813]: I0219 18:35:30.095286 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg" event={"ID":"bcbeb2d1-d0ff-466f-b959-c92077b55868","Type":"ContainerStarted","Data":"75e1298eee0e3b5dc5b2cf70c42132187ab9132ab60b89d94a7ddff07812b72c"} Feb 19 18:35:30 crc kubenswrapper[4813]: I0219 18:35:30.096126 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg" Feb 19 18:35:30 crc kubenswrapper[4813]: I0219 18:35:30.117458 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg" 
podStartSLOduration=3.117422982 podStartE2EDuration="3.117422982s" podCreationTimestamp="2026-02-19 18:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:35:30.116751248 +0000 UTC m=+349.342191789" watchObservedRunningTime="2026-02-19 18:35:30.117422982 +0000 UTC m=+349.342863563" Feb 19 18:35:30 crc kubenswrapper[4813]: I0219 18:35:30.137370 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5cc88b647d-bt7tg" Feb 19 18:35:30 crc kubenswrapper[4813]: I0219 18:35:30.329857 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:35:30 crc kubenswrapper[4813]: I0219 18:35:30.330243 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.144066 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8nj58"] Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.145227 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nj58" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.147410 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.159929 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nj58"] Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.310537 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxj6r\" (UniqueName: \"kubernetes.io/projected/b4185c34-eea3-4fe8-99c8-2e54370f18af-kube-api-access-nxj6r\") pod \"redhat-marketplace-8nj58\" (UID: \"b4185c34-eea3-4fe8-99c8-2e54370f18af\") " pod="openshift-marketplace/redhat-marketplace-8nj58" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.310667 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4185c34-eea3-4fe8-99c8-2e54370f18af-utilities\") pod \"redhat-marketplace-8nj58\" (UID: \"b4185c34-eea3-4fe8-99c8-2e54370f18af\") " pod="openshift-marketplace/redhat-marketplace-8nj58" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.310822 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4185c34-eea3-4fe8-99c8-2e54370f18af-catalog-content\") pod \"redhat-marketplace-8nj58\" (UID: \"b4185c34-eea3-4fe8-99c8-2e54370f18af\") " pod="openshift-marketplace/redhat-marketplace-8nj58" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.339295 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-twvpc"] Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.340861 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twvpc" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.343481 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.365114 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twvpc"] Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.412361 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxj6r\" (UniqueName: \"kubernetes.io/projected/b4185c34-eea3-4fe8-99c8-2e54370f18af-kube-api-access-nxj6r\") pod \"redhat-marketplace-8nj58\" (UID: \"b4185c34-eea3-4fe8-99c8-2e54370f18af\") " pod="openshift-marketplace/redhat-marketplace-8nj58" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.412491 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4185c34-eea3-4fe8-99c8-2e54370f18af-utilities\") pod \"redhat-marketplace-8nj58\" (UID: \"b4185c34-eea3-4fe8-99c8-2e54370f18af\") " pod="openshift-marketplace/redhat-marketplace-8nj58" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.412597 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4185c34-eea3-4fe8-99c8-2e54370f18af-catalog-content\") pod \"redhat-marketplace-8nj58\" (UID: \"b4185c34-eea3-4fe8-99c8-2e54370f18af\") " pod="openshift-marketplace/redhat-marketplace-8nj58" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.413560 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4185c34-eea3-4fe8-99c8-2e54370f18af-catalog-content\") pod \"redhat-marketplace-8nj58\" (UID: \"b4185c34-eea3-4fe8-99c8-2e54370f18af\") " 
pod="openshift-marketplace/redhat-marketplace-8nj58" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.413734 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4185c34-eea3-4fe8-99c8-2e54370f18af-utilities\") pod \"redhat-marketplace-8nj58\" (UID: \"b4185c34-eea3-4fe8-99c8-2e54370f18af\") " pod="openshift-marketplace/redhat-marketplace-8nj58" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.434790 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxj6r\" (UniqueName: \"kubernetes.io/projected/b4185c34-eea3-4fe8-99c8-2e54370f18af-kube-api-access-nxj6r\") pod \"redhat-marketplace-8nj58\" (UID: \"b4185c34-eea3-4fe8-99c8-2e54370f18af\") " pod="openshift-marketplace/redhat-marketplace-8nj58" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.469499 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nj58" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.513858 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ab347f-5d4c-4597-bd74-163c4e0c4418-catalog-content\") pod \"redhat-operators-twvpc\" (UID: \"e0ab347f-5d4c-4597-bd74-163c4e0c4418\") " pod="openshift-marketplace/redhat-operators-twvpc" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.513941 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngh44\" (UniqueName: \"kubernetes.io/projected/e0ab347f-5d4c-4597-bd74-163c4e0c4418-kube-api-access-ngh44\") pod \"redhat-operators-twvpc\" (UID: \"e0ab347f-5d4c-4597-bd74-163c4e0c4418\") " pod="openshift-marketplace/redhat-operators-twvpc" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.514003 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ab347f-5d4c-4597-bd74-163c4e0c4418-utilities\") pod \"redhat-operators-twvpc\" (UID: \"e0ab347f-5d4c-4597-bd74-163c4e0c4418\") " pod="openshift-marketplace/redhat-operators-twvpc" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.614828 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ab347f-5d4c-4597-bd74-163c4e0c4418-catalog-content\") pod \"redhat-operators-twvpc\" (UID: \"e0ab347f-5d4c-4597-bd74-163c4e0c4418\") " pod="openshift-marketplace/redhat-operators-twvpc" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.615406 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngh44\" (UniqueName: \"kubernetes.io/projected/e0ab347f-5d4c-4597-bd74-163c4e0c4418-kube-api-access-ngh44\") pod \"redhat-operators-twvpc\" (UID: \"e0ab347f-5d4c-4597-bd74-163c4e0c4418\") " pod="openshift-marketplace/redhat-operators-twvpc" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.615515 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ab347f-5d4c-4597-bd74-163c4e0c4418-utilities\") pod \"redhat-operators-twvpc\" (UID: \"e0ab347f-5d4c-4597-bd74-163c4e0c4418\") " pod="openshift-marketplace/redhat-operators-twvpc" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.615659 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ab347f-5d4c-4597-bd74-163c4e0c4418-catalog-content\") pod \"redhat-operators-twvpc\" (UID: \"e0ab347f-5d4c-4597-bd74-163c4e0c4418\") " pod="openshift-marketplace/redhat-operators-twvpc" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.616036 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e0ab347f-5d4c-4597-bd74-163c4e0c4418-utilities\") pod \"redhat-operators-twvpc\" (UID: \"e0ab347f-5d4c-4597-bd74-163c4e0c4418\") " pod="openshift-marketplace/redhat-operators-twvpc" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.650197 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngh44\" (UniqueName: \"kubernetes.io/projected/e0ab347f-5d4c-4597-bd74-163c4e0c4418-kube-api-access-ngh44\") pod \"redhat-operators-twvpc\" (UID: \"e0ab347f-5d4c-4597-bd74-163c4e0c4418\") " pod="openshift-marketplace/redhat-operators-twvpc" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.655020 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-twvpc" Feb 19 18:35:31 crc kubenswrapper[4813]: I0219 18:35:31.868738 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nj58"] Feb 19 18:35:31 crc kubenswrapper[4813]: W0219 18:35:31.872059 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4185c34_eea3_4fe8_99c8_2e54370f18af.slice/crio-c675de18757d2a4508198c6aac3a4804966945629a727d79c899dfad840aa8bd WatchSource:0}: Error finding container c675de18757d2a4508198c6aac3a4804966945629a727d79c899dfad840aa8bd: Status 404 returned error can't find the container with id c675de18757d2a4508198c6aac3a4804966945629a727d79c899dfad840aa8bd Feb 19 18:35:32 crc kubenswrapper[4813]: I0219 18:35:32.112366 4813 generic.go:334] "Generic (PLEG): container finished" podID="b4185c34-eea3-4fe8-99c8-2e54370f18af" containerID="cb7400c94768f4165c2d2c1b72e78b777611d74f07c0d6d3d19a081357710ee9" exitCode=0 Feb 19 18:35:32 crc kubenswrapper[4813]: I0219 18:35:32.112454 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nj58" 
event={"ID":"b4185c34-eea3-4fe8-99c8-2e54370f18af","Type":"ContainerDied","Data":"cb7400c94768f4165c2d2c1b72e78b777611d74f07c0d6d3d19a081357710ee9"} Feb 19 18:35:32 crc kubenswrapper[4813]: I0219 18:35:32.112527 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nj58" event={"ID":"b4185c34-eea3-4fe8-99c8-2e54370f18af","Type":"ContainerStarted","Data":"c675de18757d2a4508198c6aac3a4804966945629a727d79c899dfad840aa8bd"} Feb 19 18:35:32 crc kubenswrapper[4813]: I0219 18:35:32.140724 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twvpc"] Feb 19 18:35:32 crc kubenswrapper[4813]: W0219 18:35:32.157383 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0ab347f_5d4c_4597_bd74_163c4e0c4418.slice/crio-7497d09fb990dbb6ed0eafa46fa3e78a66dea1ccb66ca999671194e5724abfa7 WatchSource:0}: Error finding container 7497d09fb990dbb6ed0eafa46fa3e78a66dea1ccb66ca999671194e5724abfa7: Status 404 returned error can't find the container with id 7497d09fb990dbb6ed0eafa46fa3e78a66dea1ccb66ca999671194e5724abfa7 Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.122403 4813 generic.go:334] "Generic (PLEG): container finished" podID="e0ab347f-5d4c-4597-bd74-163c4e0c4418" containerID="5b2f5af3dac9dd0b60f48220f5e0bdb74af6585255d864fc8c818ec5fd3caa79" exitCode=0 Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.122469 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twvpc" event={"ID":"e0ab347f-5d4c-4597-bd74-163c4e0c4418","Type":"ContainerDied","Data":"5b2f5af3dac9dd0b60f48220f5e0bdb74af6585255d864fc8c818ec5fd3caa79"} Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.123084 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twvpc" 
event={"ID":"e0ab347f-5d4c-4597-bd74-163c4e0c4418","Type":"ContainerStarted","Data":"7497d09fb990dbb6ed0eafa46fa3e78a66dea1ccb66ca999671194e5724abfa7"} Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.542649 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6ckxz"] Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.557785 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6ckxz" Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.561232 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.567727 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6ckxz"] Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.742389 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-46zck"] Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.742991 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf8698e-c530-429c-90ca-8f693c11d185-utilities\") pod \"certified-operators-6ckxz\" (UID: \"ecf8698e-c530-429c-90ca-8f693c11d185\") " pod="openshift-marketplace/certified-operators-6ckxz" Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.743140 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf8698e-c530-429c-90ca-8f693c11d185-catalog-content\") pod \"certified-operators-6ckxz\" (UID: \"ecf8698e-c530-429c-90ca-8f693c11d185\") " pod="openshift-marketplace/certified-operators-6ckxz" Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.743895 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhj25\" (UniqueName: \"kubernetes.io/projected/ecf8698e-c530-429c-90ca-8f693c11d185-kube-api-access-qhj25\") pod \"certified-operators-6ckxz\" (UID: \"ecf8698e-c530-429c-90ca-8f693c11d185\") " pod="openshift-marketplace/certified-operators-6ckxz" Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.744350 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-46zck" Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.747669 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.769214 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-46zck"] Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.845307 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhj25\" (UniqueName: \"kubernetes.io/projected/ecf8698e-c530-429c-90ca-8f693c11d185-kube-api-access-qhj25\") pod \"certified-operators-6ckxz\" (UID: \"ecf8698e-c530-429c-90ca-8f693c11d185\") " pod="openshift-marketplace/certified-operators-6ckxz" Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.845655 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf8698e-c530-429c-90ca-8f693c11d185-utilities\") pod \"certified-operators-6ckxz\" (UID: \"ecf8698e-c530-429c-90ca-8f693c11d185\") " pod="openshift-marketplace/certified-operators-6ckxz" Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.845818 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf8698e-c530-429c-90ca-8f693c11d185-catalog-content\") pod \"certified-operators-6ckxz\" (UID: 
\"ecf8698e-c530-429c-90ca-8f693c11d185\") " pod="openshift-marketplace/certified-operators-6ckxz" Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.846491 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecf8698e-c530-429c-90ca-8f693c11d185-utilities\") pod \"certified-operators-6ckxz\" (UID: \"ecf8698e-c530-429c-90ca-8f693c11d185\") " pod="openshift-marketplace/certified-operators-6ckxz" Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.846591 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecf8698e-c530-429c-90ca-8f693c11d185-catalog-content\") pod \"certified-operators-6ckxz\" (UID: \"ecf8698e-c530-429c-90ca-8f693c11d185\") " pod="openshift-marketplace/certified-operators-6ckxz" Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.878796 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhj25\" (UniqueName: \"kubernetes.io/projected/ecf8698e-c530-429c-90ca-8f693c11d185-kube-api-access-qhj25\") pod \"certified-operators-6ckxz\" (UID: \"ecf8698e-c530-429c-90ca-8f693c11d185\") " pod="openshift-marketplace/certified-operators-6ckxz" Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.947425 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs96v\" (UniqueName: \"kubernetes.io/projected/cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6-kube-api-access-zs96v\") pod \"community-operators-46zck\" (UID: \"cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6\") " pod="openshift-marketplace/community-operators-46zck" Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.947574 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6-catalog-content\") pod 
\"community-operators-46zck\" (UID: \"cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6\") " pod="openshift-marketplace/community-operators-46zck" Feb 19 18:35:33 crc kubenswrapper[4813]: I0219 18:35:33.947782 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6-utilities\") pod \"community-operators-46zck\" (UID: \"cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6\") " pod="openshift-marketplace/community-operators-46zck" Feb 19 18:35:34 crc kubenswrapper[4813]: I0219 18:35:34.049080 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6-catalog-content\") pod \"community-operators-46zck\" (UID: \"cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6\") " pod="openshift-marketplace/community-operators-46zck" Feb 19 18:35:34 crc kubenswrapper[4813]: I0219 18:35:34.049212 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6-utilities\") pod \"community-operators-46zck\" (UID: \"cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6\") " pod="openshift-marketplace/community-operators-46zck" Feb 19 18:35:34 crc kubenswrapper[4813]: I0219 18:35:34.049268 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs96v\" (UniqueName: \"kubernetes.io/projected/cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6-kube-api-access-zs96v\") pod \"community-operators-46zck\" (UID: \"cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6\") " pod="openshift-marketplace/community-operators-46zck" Feb 19 18:35:34 crc kubenswrapper[4813]: I0219 18:35:34.050062 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6-catalog-content\") pod 
\"community-operators-46zck\" (UID: \"cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6\") " pod="openshift-marketplace/community-operators-46zck" Feb 19 18:35:34 crc kubenswrapper[4813]: I0219 18:35:34.050223 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6-utilities\") pod \"community-operators-46zck\" (UID: \"cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6\") " pod="openshift-marketplace/community-operators-46zck" Feb 19 18:35:34 crc kubenswrapper[4813]: I0219 18:35:34.079418 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs96v\" (UniqueName: \"kubernetes.io/projected/cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6-kube-api-access-zs96v\") pod \"community-operators-46zck\" (UID: \"cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6\") " pod="openshift-marketplace/community-operators-46zck" Feb 19 18:35:34 crc kubenswrapper[4813]: I0219 18:35:34.132206 4813 generic.go:334] "Generic (PLEG): container finished" podID="b4185c34-eea3-4fe8-99c8-2e54370f18af" containerID="4bed69ae3ab9fa0430c509ad5489923a81b7c5d655bcd8b009a6d21606fbd1cb" exitCode=0 Feb 19 18:35:34 crc kubenswrapper[4813]: I0219 18:35:34.132327 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nj58" event={"ID":"b4185c34-eea3-4fe8-99c8-2e54370f18af","Type":"ContainerDied","Data":"4bed69ae3ab9fa0430c509ad5489923a81b7c5d655bcd8b009a6d21606fbd1cb"} Feb 19 18:35:34 crc kubenswrapper[4813]: I0219 18:35:34.134567 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twvpc" event={"ID":"e0ab347f-5d4c-4597-bd74-163c4e0c4418","Type":"ContainerStarted","Data":"94cc2fa69cf6e20092fac29c8a7cd5d4bd9d993c37e47e420a8b084a7703ee84"} Feb 19 18:35:34 crc kubenswrapper[4813]: I0219 18:35:34.176609 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6ckxz" Feb 19 18:35:34 crc kubenswrapper[4813]: I0219 18:35:34.361652 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-46zck" Feb 19 18:35:34 crc kubenswrapper[4813]: I0219 18:35:34.630970 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6ckxz"] Feb 19 18:35:34 crc kubenswrapper[4813]: W0219 18:35:34.644131 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecf8698e_c530_429c_90ca_8f693c11d185.slice/crio-f6b116520f5cdb67b1ed0f228573ab1b6da3d9b90fa6845285ac4a7e24ca3979 WatchSource:0}: Error finding container f6b116520f5cdb67b1ed0f228573ab1b6da3d9b90fa6845285ac4a7e24ca3979: Status 404 returned error can't find the container with id f6b116520f5cdb67b1ed0f228573ab1b6da3d9b90fa6845285ac4a7e24ca3979 Feb 19 18:35:34 crc kubenswrapper[4813]: I0219 18:35:34.738591 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-46zck"] Feb 19 18:35:34 crc kubenswrapper[4813]: W0219 18:35:34.749508 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcded8d2a_b9bd_42fb_97b0_64f5bcaf26d6.slice/crio-3f699d4a3b6d48d35dd24338cc00d7078d81f55a404cb92437332c7ccd93cbca WatchSource:0}: Error finding container 3f699d4a3b6d48d35dd24338cc00d7078d81f55a404cb92437332c7ccd93cbca: Status 404 returned error can't find the container with id 3f699d4a3b6d48d35dd24338cc00d7078d81f55a404cb92437332c7ccd93cbca Feb 19 18:35:35 crc kubenswrapper[4813]: I0219 18:35:35.143772 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nj58" 
event={"ID":"b4185c34-eea3-4fe8-99c8-2e54370f18af","Type":"ContainerStarted","Data":"76de1b946208809ce6c8d971e75497b1e0c286785b08dc3f6af1cf0fddcda7b8"} Feb 19 18:35:35 crc kubenswrapper[4813]: I0219 18:35:35.145634 4813 generic.go:334] "Generic (PLEG): container finished" podID="ecf8698e-c530-429c-90ca-8f693c11d185" containerID="6cafa75be1f6180d360c34365a73258691c7acdc448eb8709739866613251623" exitCode=0 Feb 19 18:35:35 crc kubenswrapper[4813]: I0219 18:35:35.145718 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ckxz" event={"ID":"ecf8698e-c530-429c-90ca-8f693c11d185","Type":"ContainerDied","Data":"6cafa75be1f6180d360c34365a73258691c7acdc448eb8709739866613251623"} Feb 19 18:35:35 crc kubenswrapper[4813]: I0219 18:35:35.145754 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ckxz" event={"ID":"ecf8698e-c530-429c-90ca-8f693c11d185","Type":"ContainerStarted","Data":"f6b116520f5cdb67b1ed0f228573ab1b6da3d9b90fa6845285ac4a7e24ca3979"} Feb 19 18:35:35 crc kubenswrapper[4813]: I0219 18:35:35.148047 4813 generic.go:334] "Generic (PLEG): container finished" podID="e0ab347f-5d4c-4597-bd74-163c4e0c4418" containerID="94cc2fa69cf6e20092fac29c8a7cd5d4bd9d993c37e47e420a8b084a7703ee84" exitCode=0 Feb 19 18:35:35 crc kubenswrapper[4813]: I0219 18:35:35.148141 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twvpc" event={"ID":"e0ab347f-5d4c-4597-bd74-163c4e0c4418","Type":"ContainerDied","Data":"94cc2fa69cf6e20092fac29c8a7cd5d4bd9d993c37e47e420a8b084a7703ee84"} Feb 19 18:35:35 crc kubenswrapper[4813]: I0219 18:35:35.150713 4813 generic.go:334] "Generic (PLEG): container finished" podID="cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6" containerID="ccfa5be7922db7a7bdfc71d7714e09ef1b5a477164d3764a382514d617d3ddb6" exitCode=0 Feb 19 18:35:35 crc kubenswrapper[4813]: I0219 18:35:35.150771 4813 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-46zck" event={"ID":"cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6","Type":"ContainerDied","Data":"ccfa5be7922db7a7bdfc71d7714e09ef1b5a477164d3764a382514d617d3ddb6"} Feb 19 18:35:35 crc kubenswrapper[4813]: I0219 18:35:35.150808 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-46zck" event={"ID":"cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6","Type":"ContainerStarted","Data":"3f699d4a3b6d48d35dd24338cc00d7078d81f55a404cb92437332c7ccd93cbca"} Feb 19 18:35:35 crc kubenswrapper[4813]: I0219 18:35:35.178747 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8nj58" podStartSLOduration=1.654224611 podStartE2EDuration="4.178725929s" podCreationTimestamp="2026-02-19 18:35:31 +0000 UTC" firstStartedPulling="2026-02-19 18:35:32.117124255 +0000 UTC m=+351.342564836" lastFinishedPulling="2026-02-19 18:35:34.641625613 +0000 UTC m=+353.867066154" observedRunningTime="2026-02-19 18:35:35.17534709 +0000 UTC m=+354.400787641" watchObservedRunningTime="2026-02-19 18:35:35.178725929 +0000 UTC m=+354.404166510" Feb 19 18:35:36 crc kubenswrapper[4813]: I0219 18:35:36.158372 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twvpc" event={"ID":"e0ab347f-5d4c-4597-bd74-163c4e0c4418","Type":"ContainerStarted","Data":"89e768709e429dd85cf5386045a0e02e13a5cdd4d1dd3a9544b039ded699fde8"} Feb 19 18:35:36 crc kubenswrapper[4813]: I0219 18:35:36.160590 4813 generic.go:334] "Generic (PLEG): container finished" podID="cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6" containerID="452d057b3582f604c450f168d5a2a9549bc9fa100877d69625d1610b93cd6c5c" exitCode=0 Feb 19 18:35:36 crc kubenswrapper[4813]: I0219 18:35:36.160677 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-46zck" 
event={"ID":"cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6","Type":"ContainerDied","Data":"452d057b3582f604c450f168d5a2a9549bc9fa100877d69625d1610b93cd6c5c"} Feb 19 18:35:36 crc kubenswrapper[4813]: I0219 18:35:36.163414 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ckxz" event={"ID":"ecf8698e-c530-429c-90ca-8f693c11d185","Type":"ContainerStarted","Data":"094503c44c22a79a7ad234189fc9c2d8315dd33cf0b19e4015c79daf8d385693"} Feb 19 18:35:36 crc kubenswrapper[4813]: I0219 18:35:36.183844 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-twvpc" podStartSLOduration=2.751977642 podStartE2EDuration="5.183822715s" podCreationTimestamp="2026-02-19 18:35:31 +0000 UTC" firstStartedPulling="2026-02-19 18:35:33.124293503 +0000 UTC m=+352.349734054" lastFinishedPulling="2026-02-19 18:35:35.556138576 +0000 UTC m=+354.781579127" observedRunningTime="2026-02-19 18:35:36.179639759 +0000 UTC m=+355.405080320" watchObservedRunningTime="2026-02-19 18:35:36.183822715 +0000 UTC m=+355.409263276" Feb 19 18:35:37 crc kubenswrapper[4813]: I0219 18:35:37.170925 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-46zck" event={"ID":"cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6","Type":"ContainerStarted","Data":"a9fe953a3b830a08b8aa69b561499a40d303b1f6bcf264c7dc5d4f1e939d4429"} Feb 19 18:35:37 crc kubenswrapper[4813]: I0219 18:35:37.173794 4813 generic.go:334] "Generic (PLEG): container finished" podID="ecf8698e-c530-429c-90ca-8f693c11d185" containerID="094503c44c22a79a7ad234189fc9c2d8315dd33cf0b19e4015c79daf8d385693" exitCode=0 Feb 19 18:35:37 crc kubenswrapper[4813]: I0219 18:35:37.173942 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ckxz" 
event={"ID":"ecf8698e-c530-429c-90ca-8f693c11d185","Type":"ContainerDied","Data":"094503c44c22a79a7ad234189fc9c2d8315dd33cf0b19e4015c79daf8d385693"} Feb 19 18:35:37 crc kubenswrapper[4813]: I0219 18:35:37.218338 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-46zck" podStartSLOduration=2.82495493 podStartE2EDuration="4.218315664s" podCreationTimestamp="2026-02-19 18:35:33 +0000 UTC" firstStartedPulling="2026-02-19 18:35:35.152758778 +0000 UTC m=+354.378199319" lastFinishedPulling="2026-02-19 18:35:36.546119502 +0000 UTC m=+355.771560053" observedRunningTime="2026-02-19 18:35:37.198612862 +0000 UTC m=+356.424053413" watchObservedRunningTime="2026-02-19 18:35:37.218315664 +0000 UTC m=+356.443756215" Feb 19 18:35:38 crc kubenswrapper[4813]: I0219 18:35:38.182259 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6ckxz" event={"ID":"ecf8698e-c530-429c-90ca-8f693c11d185","Type":"ContainerStarted","Data":"0fa12aecca1f049884a25c5fac25801ea4c739e21acb1778428b9076b48ae8b7"} Feb 19 18:35:38 crc kubenswrapper[4813]: I0219 18:35:38.212655 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6ckxz" podStartSLOduration=2.777492713 podStartE2EDuration="5.212637102s" podCreationTimestamp="2026-02-19 18:35:33 +0000 UTC" firstStartedPulling="2026-02-19 18:35:35.148646943 +0000 UTC m=+354.374087494" lastFinishedPulling="2026-02-19 18:35:37.583791342 +0000 UTC m=+356.809231883" observedRunningTime="2026-02-19 18:35:38.211383617 +0000 UTC m=+357.436824198" watchObservedRunningTime="2026-02-19 18:35:38.212637102 +0000 UTC m=+357.438077643" Feb 19 18:35:41 crc kubenswrapper[4813]: I0219 18:35:41.470399 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8nj58" Feb 19 18:35:41 crc kubenswrapper[4813]: I0219 18:35:41.482072 4813 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8nj58" Feb 19 18:35:41 crc kubenswrapper[4813]: I0219 18:35:41.559338 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8nj58" Feb 19 18:35:41 crc kubenswrapper[4813]: I0219 18:35:41.656760 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-twvpc" Feb 19 18:35:41 crc kubenswrapper[4813]: I0219 18:35:41.657032 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-twvpc" Feb 19 18:35:42 crc kubenswrapper[4813]: I0219 18:35:42.260778 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8nj58" Feb 19 18:35:42 crc kubenswrapper[4813]: I0219 18:35:42.707845 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-twvpc" podUID="e0ab347f-5d4c-4597-bd74-163c4e0c4418" containerName="registry-server" probeResult="failure" output=< Feb 19 18:35:42 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Feb 19 18:35:42 crc kubenswrapper[4813]: > Feb 19 18:35:44 crc kubenswrapper[4813]: I0219 18:35:44.177443 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6ckxz" Feb 19 18:35:44 crc kubenswrapper[4813]: I0219 18:35:44.178122 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6ckxz" Feb 19 18:35:44 crc kubenswrapper[4813]: I0219 18:35:44.251172 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6ckxz" Feb 19 18:35:44 crc kubenswrapper[4813]: I0219 18:35:44.316808 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-6ckxz" Feb 19 18:35:44 crc kubenswrapper[4813]: I0219 18:35:44.362712 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-46zck" Feb 19 18:35:44 crc kubenswrapper[4813]: I0219 18:35:44.384588 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-46zck" Feb 19 18:35:44 crc kubenswrapper[4813]: I0219 18:35:44.459068 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-46zck" Feb 19 18:35:45 crc kubenswrapper[4813]: I0219 18:35:45.279914 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-46zck" Feb 19 18:35:50 crc kubenswrapper[4813]: I0219 18:35:50.852510 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r4cp2"] Feb 19 18:35:50 crc kubenswrapper[4813]: I0219 18:35:50.853630 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:50 crc kubenswrapper[4813]: I0219 18:35:50.870330 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r4cp2"] Feb 19 18:35:50 crc kubenswrapper[4813]: I0219 18:35:50.971711 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e6edac78-379b-4caa-b643-61cae0a58a8e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:50 crc kubenswrapper[4813]: I0219 18:35:50.972109 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6edac78-379b-4caa-b643-61cae0a58a8e-bound-sa-token\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:50 crc kubenswrapper[4813]: I0219 18:35:50.972151 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:50 crc kubenswrapper[4813]: I0219 18:35:50.972179 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e6edac78-379b-4caa-b643-61cae0a58a8e-registry-tls\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:50 crc kubenswrapper[4813]: I0219 18:35:50.972293 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6edac78-379b-4caa-b643-61cae0a58a8e-trusted-ca\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:50 crc kubenswrapper[4813]: I0219 18:35:50.972350 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvmw9\" (UniqueName: \"kubernetes.io/projected/e6edac78-379b-4caa-b643-61cae0a58a8e-kube-api-access-jvmw9\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:50 crc kubenswrapper[4813]: I0219 18:35:50.972397 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e6edac78-379b-4caa-b643-61cae0a58a8e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:50 crc kubenswrapper[4813]: I0219 18:35:50.972414 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e6edac78-379b-4caa-b643-61cae0a58a8e-registry-certificates\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:50 crc kubenswrapper[4813]: I0219 18:35:50.999270 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.074387 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6edac78-379b-4caa-b643-61cae0a58a8e-trusted-ca\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.074478 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvmw9\" (UniqueName: \"kubernetes.io/projected/e6edac78-379b-4caa-b643-61cae0a58a8e-kube-api-access-jvmw9\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.074548 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e6edac78-379b-4caa-b643-61cae0a58a8e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.074597 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e6edac78-379b-4caa-b643-61cae0a58a8e-registry-certificates\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.074721 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e6edac78-379b-4caa-b643-61cae0a58a8e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.074788 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6edac78-379b-4caa-b643-61cae0a58a8e-bound-sa-token\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.074847 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e6edac78-379b-4caa-b643-61cae0a58a8e-registry-tls\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.076245 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e6edac78-379b-4caa-b643-61cae0a58a8e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.077615 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e6edac78-379b-4caa-b643-61cae0a58a8e-registry-certificates\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.078031 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6edac78-379b-4caa-b643-61cae0a58a8e-trusted-ca\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.082767 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e6edac78-379b-4caa-b643-61cae0a58a8e-registry-tls\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.082865 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e6edac78-379b-4caa-b643-61cae0a58a8e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.100766 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvmw9\" (UniqueName: \"kubernetes.io/projected/e6edac78-379b-4caa-b643-61cae0a58a8e-kube-api-access-jvmw9\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: \"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.101662 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6edac78-379b-4caa-b643-61cae0a58a8e-bound-sa-token\") pod \"image-registry-66df7c8f76-r4cp2\" (UID: 
\"e6edac78-379b-4caa-b643-61cae0a58a8e\") " pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.183565 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.591431 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-r4cp2"] Feb 19 18:35:51 crc kubenswrapper[4813]: W0219 18:35:51.593721 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6edac78_379b_4caa_b643_61cae0a58a8e.slice/crio-5fe6be84b6daa154b30fb667c7d4247c2969dd7c1539e316353bc7b4e0b4448e WatchSource:0}: Error finding container 5fe6be84b6daa154b30fb667c7d4247c2969dd7c1539e316353bc7b4e0b4448e: Status 404 returned error can't find the container with id 5fe6be84b6daa154b30fb667c7d4247c2969dd7c1539e316353bc7b4e0b4448e Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.723596 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-twvpc" Feb 19 18:35:51 crc kubenswrapper[4813]: I0219 18:35:51.777044 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-twvpc" Feb 19 18:35:52 crc kubenswrapper[4813]: I0219 18:35:52.263813 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" event={"ID":"e6edac78-379b-4caa-b643-61cae0a58a8e","Type":"ContainerStarted","Data":"986a631c9612a08787152dcd190d184e1820b9a4039ed83df3c158fcabbde6a9"} Feb 19 18:35:52 crc kubenswrapper[4813]: I0219 18:35:52.264393 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" 
event={"ID":"e6edac78-379b-4caa-b643-61cae0a58a8e","Type":"ContainerStarted","Data":"5fe6be84b6daa154b30fb667c7d4247c2969dd7c1539e316353bc7b4e0b4448e"} Feb 19 18:35:52 crc kubenswrapper[4813]: I0219 18:35:52.264440 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:35:52 crc kubenswrapper[4813]: I0219 18:35:52.287219 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" podStartSLOduration=2.287194527 podStartE2EDuration="2.287194527s" podCreationTimestamp="2026-02-19 18:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:35:52.286030306 +0000 UTC m=+371.511470857" watchObservedRunningTime="2026-02-19 18:35:52.287194527 +0000 UTC m=+371.512635078" Feb 19 18:36:00 crc kubenswrapper[4813]: I0219 18:36:00.330468 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:36:00 crc kubenswrapper[4813]: I0219 18:36:00.331678 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:36:11 crc kubenswrapper[4813]: I0219 18:36:11.194199 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-r4cp2" Feb 19 18:36:11 crc kubenswrapper[4813]: I0219 18:36:11.260255 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-bd28k"] Feb 19 18:36:30 crc kubenswrapper[4813]: I0219 18:36:30.330119 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:36:30 crc kubenswrapper[4813]: I0219 18:36:30.331049 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:36:30 crc kubenswrapper[4813]: I0219 18:36:30.331114 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:36:30 crc kubenswrapper[4813]: I0219 18:36:30.332087 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a45006b1055daf3f49c258aa89bf05f49fe31d2a06d4d9e9317c1dd0bdf77b7"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 18:36:30 crc kubenswrapper[4813]: I0219 18:36:30.332195 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://0a45006b1055daf3f49c258aa89bf05f49fe31d2a06d4d9e9317c1dd0bdf77b7" gracePeriod=600 Feb 19 18:36:31 crc kubenswrapper[4813]: I0219 18:36:31.519721 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="0a45006b1055daf3f49c258aa89bf05f49fe31d2a06d4d9e9317c1dd0bdf77b7" exitCode=0 Feb 19 18:36:31 crc kubenswrapper[4813]: I0219 18:36:31.519813 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"0a45006b1055daf3f49c258aa89bf05f49fe31d2a06d4d9e9317c1dd0bdf77b7"} Feb 19 18:36:31 crc kubenswrapper[4813]: I0219 18:36:31.520116 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"a34127baba58dd3cc724f5d51572626bf8d85f8d83ca4c8ac0dec993e2bb35bb"} Feb 19 18:36:31 crc kubenswrapper[4813]: I0219 18:36:31.520148 4813 scope.go:117] "RemoveContainer" containerID="194a57de3f000b8b7bd147b882314c518fbd8173a9c366d4ee48e1e5fab9800b" Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.324701 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" podUID="5a165501-6ea2-4d63-9479-6ac760c6b116" containerName="registry" containerID="cri-o://2bde0c15092dd4c9242b415de83ef8ec6fbae00fb3f01395ccb54362e16ca0b6" gracePeriod=30 Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.556504 4813 generic.go:334] "Generic (PLEG): container finished" podID="5a165501-6ea2-4d63-9479-6ac760c6b116" containerID="2bde0c15092dd4c9242b415de83ef8ec6fbae00fb3f01395ccb54362e16ca0b6" exitCode=0 Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.556577 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" event={"ID":"5a165501-6ea2-4d63-9479-6ac760c6b116","Type":"ContainerDied","Data":"2bde0c15092dd4c9242b415de83ef8ec6fbae00fb3f01395ccb54362e16ca0b6"} Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 
18:36:36.789399 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.936316 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5a165501-6ea2-4d63-9479-6ac760c6b116-ca-trust-extracted\") pod \"5a165501-6ea2-4d63-9479-6ac760c6b116\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.936562 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5a165501-6ea2-4d63-9479-6ac760c6b116\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.936633 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-bound-sa-token\") pod \"5a165501-6ea2-4d63-9479-6ac760c6b116\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.936684 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-registry-tls\") pod \"5a165501-6ea2-4d63-9479-6ac760c6b116\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.936738 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5a165501-6ea2-4d63-9479-6ac760c6b116-registry-certificates\") pod \"5a165501-6ea2-4d63-9479-6ac760c6b116\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " Feb 19 18:36:36 crc 
kubenswrapper[4813]: I0219 18:36:36.936804 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zclv\" (UniqueName: \"kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-kube-api-access-2zclv\") pod \"5a165501-6ea2-4d63-9479-6ac760c6b116\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.936855 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a165501-6ea2-4d63-9479-6ac760c6b116-trusted-ca\") pod \"5a165501-6ea2-4d63-9479-6ac760c6b116\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.937003 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5a165501-6ea2-4d63-9479-6ac760c6b116-installation-pull-secrets\") pod \"5a165501-6ea2-4d63-9479-6ac760c6b116\" (UID: \"5a165501-6ea2-4d63-9479-6ac760c6b116\") " Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.939240 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a165501-6ea2-4d63-9479-6ac760c6b116-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5a165501-6ea2-4d63-9479-6ac760c6b116" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.939275 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a165501-6ea2-4d63-9479-6ac760c6b116-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5a165501-6ea2-4d63-9479-6ac760c6b116" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.947097 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5a165501-6ea2-4d63-9479-6ac760c6b116" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.947796 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a165501-6ea2-4d63-9479-6ac760c6b116-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5a165501-6ea2-4d63-9479-6ac760c6b116" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.948386 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-kube-api-access-2zclv" (OuterVolumeSpecName: "kube-api-access-2zclv") pod "5a165501-6ea2-4d63-9479-6ac760c6b116" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116"). InnerVolumeSpecName "kube-api-access-2zclv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.950534 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5a165501-6ea2-4d63-9479-6ac760c6b116" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.953080 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5a165501-6ea2-4d63-9479-6ac760c6b116" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 18:36:36 crc kubenswrapper[4813]: I0219 18:36:36.973261 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a165501-6ea2-4d63-9479-6ac760c6b116-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5a165501-6ea2-4d63-9479-6ac760c6b116" (UID: "5a165501-6ea2-4d63-9479-6ac760c6b116"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:36:37 crc kubenswrapper[4813]: I0219 18:36:37.039011 4813 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5a165501-6ea2-4d63-9479-6ac760c6b116-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 18:36:37 crc kubenswrapper[4813]: I0219 18:36:37.039060 4813 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5a165501-6ea2-4d63-9479-6ac760c6b116-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 18:36:37 crc kubenswrapper[4813]: I0219 18:36:37.039080 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 18:36:37 crc kubenswrapper[4813]: I0219 18:36:37.039098 4813 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/5a165501-6ea2-4d63-9479-6ac760c6b116-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 18:36:37 crc kubenswrapper[4813]: I0219 18:36:37.039117 4813 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 18:36:37 crc kubenswrapper[4813]: I0219 18:36:37.039136 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zclv\" (UniqueName: \"kubernetes.io/projected/5a165501-6ea2-4d63-9479-6ac760c6b116-kube-api-access-2zclv\") on node \"crc\" DevicePath \"\"" Feb 19 18:36:37 crc kubenswrapper[4813]: I0219 18:36:37.039152 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a165501-6ea2-4d63-9479-6ac760c6b116-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:36:37 crc kubenswrapper[4813]: I0219 18:36:37.566862 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" event={"ID":"5a165501-6ea2-4d63-9479-6ac760c6b116","Type":"ContainerDied","Data":"9a45c01e06e4c513a7379aa5a226ab357bc379ea608c1fa594eaad17d8b21bc9"} Feb 19 18:36:37 crc kubenswrapper[4813]: I0219 18:36:37.566922 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bd28k" Feb 19 18:36:37 crc kubenswrapper[4813]: I0219 18:36:37.566949 4813 scope.go:117] "RemoveContainer" containerID="2bde0c15092dd4c9242b415de83ef8ec6fbae00fb3f01395ccb54362e16ca0b6" Feb 19 18:36:37 crc kubenswrapper[4813]: I0219 18:36:37.604832 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bd28k"] Feb 19 18:36:37 crc kubenswrapper[4813]: I0219 18:36:37.612718 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bd28k"] Feb 19 18:36:39 crc kubenswrapper[4813]: I0219 18:36:39.484169 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a165501-6ea2-4d63-9479-6ac760c6b116" path="/var/lib/kubelet/pods/5a165501-6ea2-4d63-9479-6ac760c6b116/volumes" Feb 19 18:38:41 crc kubenswrapper[4813]: I0219 18:38:41.702342 4813 scope.go:117] "RemoveContainer" containerID="de9a5d7bb578e96bdf446f3fd9373f30b317c359a855eb55f889a051a9609d96" Feb 19 18:39:00 crc kubenswrapper[4813]: I0219 18:39:00.329733 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:39:00 crc kubenswrapper[4813]: I0219 18:39:00.330711 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:39:30 crc kubenswrapper[4813]: I0219 18:39:30.329491 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:39:30 crc kubenswrapper[4813]: I0219 18:39:30.330155 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:40:00 crc kubenswrapper[4813]: I0219 18:40:00.329485 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:40:00 crc kubenswrapper[4813]: I0219 18:40:00.330060 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:40:00 crc kubenswrapper[4813]: I0219 18:40:00.330101 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:40:00 crc kubenswrapper[4813]: I0219 18:40:00.451487 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a34127baba58dd3cc724f5d51572626bf8d85f8d83ca4c8ac0dec993e2bb35bb"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 18:40:00 crc 
kubenswrapper[4813]: I0219 18:40:00.451860 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://a34127baba58dd3cc724f5d51572626bf8d85f8d83ca4c8ac0dec993e2bb35bb" gracePeriod=600 Feb 19 18:40:01 crc kubenswrapper[4813]: I0219 18:40:01.462436 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="a34127baba58dd3cc724f5d51572626bf8d85f8d83ca4c8ac0dec993e2bb35bb" exitCode=0 Feb 19 18:40:01 crc kubenswrapper[4813]: I0219 18:40:01.462591 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"a34127baba58dd3cc724f5d51572626bf8d85f8d83ca4c8ac0dec993e2bb35bb"} Feb 19 18:40:01 crc kubenswrapper[4813]: I0219 18:40:01.463549 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"d2ec5832c886721bb1adcc6f9f4d730f479f32a26f58ced42b5b32c757c1d3c3"} Feb 19 18:40:01 crc kubenswrapper[4813]: I0219 18:40:01.463597 4813 scope.go:117] "RemoveContainer" containerID="0a45006b1055daf3f49c258aa89bf05f49fe31d2a06d4d9e9317c1dd0bdf77b7" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.361308 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-dsrqn"] Feb 19 18:41:11 crc kubenswrapper[4813]: E0219 18:41:11.362560 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a165501-6ea2-4d63-9479-6ac760c6b116" containerName="registry" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.362623 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a165501-6ea2-4d63-9479-6ac760c6b116" 
containerName="registry" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.363851 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a165501-6ea2-4d63-9479-6ac760c6b116" containerName="registry" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.364784 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-dsrqn" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.368922 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.369091 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.370209 4813 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-scp5k" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.373016 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-dsrqn"] Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.375392 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.416131 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-node-mnt\") pod \"crc-storage-crc-dsrqn\" (UID: \"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9\") " pod="crc-storage/crc-storage-crc-dsrqn" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.416415 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lsln\" (UniqueName: \"kubernetes.io/projected/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-kube-api-access-4lsln\") pod \"crc-storage-crc-dsrqn\" (UID: \"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9\") " 
pod="crc-storage/crc-storage-crc-dsrqn" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.416502 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-crc-storage\") pod \"crc-storage-crc-dsrqn\" (UID: \"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9\") " pod="crc-storage/crc-storage-crc-dsrqn" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.519297 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-node-mnt\") pod \"crc-storage-crc-dsrqn\" (UID: \"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9\") " pod="crc-storage/crc-storage-crc-dsrqn" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.520180 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lsln\" (UniqueName: \"kubernetes.io/projected/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-kube-api-access-4lsln\") pod \"crc-storage-crc-dsrqn\" (UID: \"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9\") " pod="crc-storage/crc-storage-crc-dsrqn" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.520306 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-crc-storage\") pod \"crc-storage-crc-dsrqn\" (UID: \"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9\") " pod="crc-storage/crc-storage-crc-dsrqn" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.521807 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-node-mnt\") pod \"crc-storage-crc-dsrqn\" (UID: \"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9\") " pod="crc-storage/crc-storage-crc-dsrqn" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.529908 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-crc-storage\") pod \"crc-storage-crc-dsrqn\" (UID: \"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9\") " pod="crc-storage/crc-storage-crc-dsrqn" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.544794 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lsln\" (UniqueName: \"kubernetes.io/projected/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-kube-api-access-4lsln\") pod \"crc-storage-crc-dsrqn\" (UID: \"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9\") " pod="crc-storage/crc-storage-crc-dsrqn" Feb 19 18:41:11 crc kubenswrapper[4813]: I0219 18:41:11.705245 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-dsrqn" Feb 19 18:41:12 crc kubenswrapper[4813]: I0219 18:41:12.131240 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-dsrqn"] Feb 19 18:41:12 crc kubenswrapper[4813]: I0219 18:41:12.140680 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 18:41:12 crc kubenswrapper[4813]: I0219 18:41:12.928025 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-dsrqn" event={"ID":"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9","Type":"ContainerStarted","Data":"3ef1aed83706c99d0931393f7f742fedd36a0ebbc40af56ec48348015d97a7ea"} Feb 19 18:41:13 crc kubenswrapper[4813]: I0219 18:41:13.934604 4813 generic.go:334] "Generic (PLEG): container finished" podID="7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9" containerID="2f270f841e3c456fcd42132bf34c3a6b1e6eb36b9e99c6bea77031d1286d19cb" exitCode=0 Feb 19 18:41:13 crc kubenswrapper[4813]: I0219 18:41:13.934712 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-dsrqn" 
event={"ID":"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9","Type":"ContainerDied","Data":"2f270f841e3c456fcd42132bf34c3a6b1e6eb36b9e99c6bea77031d1286d19cb"} Feb 19 18:41:15 crc kubenswrapper[4813]: I0219 18:41:15.161849 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-dsrqn" Feb 19 18:41:15 crc kubenswrapper[4813]: I0219 18:41:15.266085 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lsln\" (UniqueName: \"kubernetes.io/projected/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-kube-api-access-4lsln\") pod \"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9\" (UID: \"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9\") " Feb 19 18:41:15 crc kubenswrapper[4813]: I0219 18:41:15.266142 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-crc-storage\") pod \"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9\" (UID: \"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9\") " Feb 19 18:41:15 crc kubenswrapper[4813]: I0219 18:41:15.266205 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-node-mnt\") pod \"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9\" (UID: \"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9\") " Feb 19 18:41:15 crc kubenswrapper[4813]: I0219 18:41:15.266409 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9" (UID: "7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:15 crc kubenswrapper[4813]: I0219 18:41:15.274143 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-kube-api-access-4lsln" (OuterVolumeSpecName: "kube-api-access-4lsln") pod "7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9" (UID: "7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9"). InnerVolumeSpecName "kube-api-access-4lsln". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:41:15 crc kubenswrapper[4813]: I0219 18:41:15.278407 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9" (UID: "7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:41:15 crc kubenswrapper[4813]: I0219 18:41:15.367750 4813 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:15 crc kubenswrapper[4813]: I0219 18:41:15.367804 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lsln\" (UniqueName: \"kubernetes.io/projected/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-kube-api-access-4lsln\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:15 crc kubenswrapper[4813]: I0219 18:41:15.367832 4813 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:15 crc kubenswrapper[4813]: I0219 18:41:15.950083 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-dsrqn" 
event={"ID":"7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9","Type":"ContainerDied","Data":"3ef1aed83706c99d0931393f7f742fedd36a0ebbc40af56ec48348015d97a7ea"} Feb 19 18:41:15 crc kubenswrapper[4813]: I0219 18:41:15.950161 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ef1aed83706c99d0931393f7f742fedd36a0ebbc40af56ec48348015d97a7ea" Feb 19 18:41:15 crc kubenswrapper[4813]: I0219 18:41:15.950189 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-dsrqn" Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.170442 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm"] Feb 19 18:41:22 crc kubenswrapper[4813]: E0219 18:41:22.171090 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9" containerName="storage" Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.171118 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9" containerName="storage" Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.171337 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9" containerName="storage" Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.172576 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.174655 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.176349 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm"] Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.356349 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcjvz\" (UniqueName: \"kubernetes.io/projected/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-kube-api-access-vcjvz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm\" (UID: \"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.356445 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm\" (UID: \"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.356589 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm\" (UID: \"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" Feb 19 18:41:22 crc kubenswrapper[4813]: 
I0219 18:41:22.457765 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm\" (UID: \"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.457816 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcjvz\" (UniqueName: \"kubernetes.io/projected/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-kube-api-access-vcjvz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm\" (UID: \"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.457842 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm\" (UID: \"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.458343 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm\" (UID: \"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.458378 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm\" (UID: \"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.479217 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcjvz\" (UniqueName: \"kubernetes.io/projected/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-kube-api-access-vcjvz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm\" (UID: \"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.503426 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.703460 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm"] Feb 19 18:41:22 crc kubenswrapper[4813]: W0219 18:41:22.711660 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ab4cb0c_bcd2_4fa4_84b0_eca960beaf73.slice/crio-c1f0729598b0c1982f94eb583255e90ecca8c61fb6f43c9276a7539f8e137538 WatchSource:0}: Error finding container c1f0729598b0c1982f94eb583255e90ecca8c61fb6f43c9276a7539f8e137538: Status 404 returned error can't find the container with id c1f0729598b0c1982f94eb583255e90ecca8c61fb6f43c9276a7539f8e137538 Feb 19 18:41:22 crc kubenswrapper[4813]: I0219 18:41:22.992932 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" 
event={"ID":"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73","Type":"ContainerStarted","Data":"c1f0729598b0c1982f94eb583255e90ecca8c61fb6f43c9276a7539f8e137538"} Feb 19 18:41:24 crc kubenswrapper[4813]: I0219 18:41:24.002023 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" event={"ID":"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73","Type":"ContainerStarted","Data":"a3396fe88cdf3259ac857f0810e36f63aafb90a37b8b4461523f2ca2c03ec445"} Feb 19 18:41:24 crc kubenswrapper[4813]: I0219 18:41:24.360374 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pc9t2"] Feb 19 18:41:24 crc kubenswrapper[4813]: I0219 18:41:24.360819 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovn-controller" containerID="cri-o://c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949" gracePeriod=30 Feb 19 18:41:24 crc kubenswrapper[4813]: I0219 18:41:24.360906 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="kube-rbac-proxy-node" containerID="cri-o://ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062" gracePeriod=30 Feb 19 18:41:24 crc kubenswrapper[4813]: I0219 18:41:24.360896 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="sbdb" containerID="cri-o://1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838" gracePeriod=30 Feb 19 18:41:24 crc kubenswrapper[4813]: I0219 18:41:24.360975 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" 
podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovn-acl-logging" containerID="cri-o://a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099" gracePeriod=30 Feb 19 18:41:24 crc kubenswrapper[4813]: I0219 18:41:24.360995 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a" gracePeriod=30 Feb 19 18:41:24 crc kubenswrapper[4813]: I0219 18:41:24.361009 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="nbdb" containerID="cri-o://6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504" gracePeriod=30 Feb 19 18:41:24 crc kubenswrapper[4813]: I0219 18:41:24.361300 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="northd" containerID="cri-o://8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81" gracePeriod=30 Feb 19 18:41:24 crc kubenswrapper[4813]: I0219 18:41:24.419156 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovnkube-controller" containerID="cri-o://73bccc2e2b4d6078c9df512ec532a1628f449b7e4d7ec2890220e9986fb06cb6" gracePeriod=30 Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.012544 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovnkube-controller/3.log" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.015160 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovn-acl-logging/0.log" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.015597 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovn-controller/0.log" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.015944 4813 generic.go:334] "Generic (PLEG): container finished" podID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerID="73bccc2e2b4d6078c9df512ec532a1628f449b7e4d7ec2890220e9986fb06cb6" exitCode=0 Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.015981 4813 generic.go:334] "Generic (PLEG): container finished" podID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerID="1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838" exitCode=0 Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.015989 4813 generic.go:334] "Generic (PLEG): container finished" podID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerID="6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504" exitCode=0 Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.015995 4813 generic.go:334] "Generic (PLEG): container finished" podID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerID="8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81" exitCode=0 Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.016001 4813 generic.go:334] "Generic (PLEG): container finished" podID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerID="e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a" exitCode=0 Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.016007 4813 generic.go:334] "Generic (PLEG): container finished" podID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerID="ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062" exitCode=0 Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.016012 4813 generic.go:334] "Generic (PLEG): 
container finished" podID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerID="a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099" exitCode=143 Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.016018 4813 generic.go:334] "Generic (PLEG): container finished" podID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerID="c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949" exitCode=143 Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.016044 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerDied","Data":"73bccc2e2b4d6078c9df512ec532a1628f449b7e4d7ec2890220e9986fb06cb6"} Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.016102 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerDied","Data":"1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838"} Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.016122 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerDied","Data":"6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504"} Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.016125 4813 scope.go:117] "RemoveContainer" containerID="0cc1c57ce812d6b74b0a2631aac1b67e009e76561c3561fcc94cf8bfa6c6a61c" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.016137 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerDied","Data":"8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81"} Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.016149 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerDied","Data":"e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a"} Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.016162 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerDied","Data":"ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062"} Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.016173 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerDied","Data":"a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099"} Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.016183 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerDied","Data":"c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949"} Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.018285 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hksqw_b099cefb-f2e5-4f3f-976c-7433dba77ef2/kube-multus/2.log" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.020057 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hksqw_b099cefb-f2e5-4f3f-976c-7433dba77ef2/kube-multus/1.log" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.020091 4813 generic.go:334] "Generic (PLEG): container finished" podID="b099cefb-f2e5-4f3f-976c-7433dba77ef2" containerID="1bf20d5a1dff3d1f2180385a366bd304f81d49489fb5beb68ef022b82460d17a" exitCode=2 Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.020139 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hksqw" 
event={"ID":"b099cefb-f2e5-4f3f-976c-7433dba77ef2","Type":"ContainerDied","Data":"1bf20d5a1dff3d1f2180385a366bd304f81d49489fb5beb68ef022b82460d17a"} Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.020557 4813 scope.go:117] "RemoveContainer" containerID="1bf20d5a1dff3d1f2180385a366bd304f81d49489fb5beb68ef022b82460d17a" Feb 19 18:41:25 crc kubenswrapper[4813]: E0219 18:41:25.020816 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-hksqw_openshift-multus(b099cefb-f2e5-4f3f-976c-7433dba77ef2)\"" pod="openshift-multus/multus-hksqw" podUID="b099cefb-f2e5-4f3f-976c-7433dba77ef2" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.022982 4813 generic.go:334] "Generic (PLEG): container finished" podID="1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73" containerID="a3396fe88cdf3259ac857f0810e36f63aafb90a37b8b4461523f2ca2c03ec445" exitCode=0 Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.023024 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" event={"ID":"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73","Type":"ContainerDied","Data":"a3396fe88cdf3259ac857f0810e36f63aafb90a37b8b4461523f2ca2c03ec445"} Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.060540 4813 scope.go:117] "RemoveContainer" containerID="38e8bb938cc19db067c4260fd25416cf155dd95f94c41f03a9ef2b4d29589ad2" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.074392 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovn-acl-logging/0.log" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.074823 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovn-controller/0.log" Feb 19 
18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.075192 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.110351 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovnkube-script-lib\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.110835 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.111255 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.128779 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9m5pg"] Feb 19 18:41:25 crc kubenswrapper[4813]: E0219 18:41:25.128998 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="kube-rbac-proxy-node" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129011 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="kube-rbac-proxy-node" Feb 19 18:41:25 crc kubenswrapper[4813]: E0219 18:41:25.129019 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129026 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 18:41:25 crc kubenswrapper[4813]: E0219 18:41:25.129037 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="northd" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129045 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="northd" Feb 19 18:41:25 crc kubenswrapper[4813]: E0219 18:41:25.129053 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovnkube-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129061 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovnkube-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: E0219 18:41:25.129069 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovnkube-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129075 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovnkube-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: E0219 18:41:25.129086 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovn-acl-logging" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129093 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovn-acl-logging" Feb 19 18:41:25 crc kubenswrapper[4813]: E0219 18:41:25.129100 4813 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovn-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129107 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovn-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: E0219 18:41:25.129119 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="nbdb" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129127 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="nbdb" Feb 19 18:41:25 crc kubenswrapper[4813]: E0219 18:41:25.129138 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="sbdb" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129144 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="sbdb" Feb 19 18:41:25 crc kubenswrapper[4813]: E0219 18:41:25.129157 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="kubecfg-setup" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129162 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="kubecfg-setup" Feb 19 18:41:25 crc kubenswrapper[4813]: E0219 18:41:25.129169 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovnkube-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129175 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovnkube-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129264 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" 
containerName="ovnkube-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129275 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="nbdb" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129284 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129292 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovnkube-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129301 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="northd" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129312 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovn-acl-logging" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129320 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovnkube-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129328 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovn-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129335 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="sbdb" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129343 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="kube-rbac-proxy-node" Feb 19 18:41:25 crc kubenswrapper[4813]: E0219 18:41:25.129443 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovnkube-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129450 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovnkube-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: E0219 18:41:25.129459 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovnkube-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129464 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovnkube-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129554 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovnkube-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.129566 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" containerName="ovnkube-controller" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.131189 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.211844 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-kubelet\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.211889 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-systemd-units\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.211914 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-var-lib-openvswitch\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.211982 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.211987 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.211988 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.212010 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-etc-openvswitch\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.212035 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.212186 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-cni-bin\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.212211 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.212687 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-cni-netd\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.212742 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-env-overrides\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.212766 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-run-netns\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.212786 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-log-socket\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.212812 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovn-node-metrics-cert\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.212840 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovnkube-config\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.212864 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-systemd\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.212889 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-ovn\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.212919 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-node-log\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") 
" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.212971 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf9qf\" (UniqueName: \"kubernetes.io/projected/928c75f4-605c-4556-8c29-14ff4bdf6f5e-kube-api-access-cf9qf\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.212989 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-slash\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213018 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-openvswitch\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213044 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213073 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-run-ovn-kubernetes\") pod \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\" (UID: \"928c75f4-605c-4556-8c29-14ff4bdf6f5e\") " Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213175 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-ovnkube-config\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213217 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-etc-openvswitch\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213246 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-node-log\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213274 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-run-ovn-kubernetes\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213306 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-env-overrides\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213330 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-log-socket\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213370 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-ovn-node-metrics-cert\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213390 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj7nf\" (UniqueName: \"kubernetes.io/projected/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-kube-api-access-lj7nf\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213422 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-kubelet\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213447 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-run-systemd\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213474 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213495 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-run-openvswitch\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213521 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-run-ovn\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213551 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-slash\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213574 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-ovnkube-script-lib\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 
18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213602 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-systemd-units\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213621 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-cni-netd\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213650 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-run-netns\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213677 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-var-lib-openvswitch\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213700 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-cni-bin\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213739 4813 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213754 4813 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213767 4813 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213779 4813 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213791 4813 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.213865 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.214026 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.214051 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-log-socket" (OuterVolumeSpecName: "log-socket") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.214241 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.214355 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-slash" (OuterVolumeSpecName: "host-slash") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.214433 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.214466 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.214487 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.214525 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.214708 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.214759 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-node-log" (OuterVolumeSpecName: "node-log") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.218808 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928c75f4-605c-4556-8c29-14ff4bdf6f5e-kube-api-access-cf9qf" (OuterVolumeSpecName: "kube-api-access-cf9qf") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "kube-api-access-cf9qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.218997 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.230420 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "928c75f4-605c-4556-8c29-14ff4bdf6f5e" (UID: "928c75f4-605c-4556-8c29-14ff4bdf6f5e"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314339 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-node-log\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314385 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-run-ovn-kubernetes\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314420 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-env-overrides\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314442 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-log-socket\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314467 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-ovn-node-metrics-cert\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314485 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj7nf\" (UniqueName: \"kubernetes.io/projected/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-kube-api-access-lj7nf\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314512 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-kubelet\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314532 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-run-systemd\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314551 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314571 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-run-openvswitch\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314594 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-run-ovn\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314617 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-slash\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314638 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-ovnkube-script-lib\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314664 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-systemd-units\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: 
I0219 18:41:25.314682 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-cni-netd\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314713 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-run-netns\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314771 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-var-lib-openvswitch\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314793 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-cni-bin\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314812 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-ovnkube-config\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314830 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-etc-openvswitch\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314865 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf9qf\" (UniqueName: \"kubernetes.io/projected/928c75f4-605c-4556-8c29-14ff4bdf6f5e-kube-api-access-cf9qf\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314875 4813 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-slash\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314885 4813 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314896 4813 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314905 4813 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314914 4813 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc 
kubenswrapper[4813]: I0219 18:41:25.314923 4813 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314931 4813 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-log-socket\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314939 4813 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314946 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314970 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/928c75f4-605c-4556-8c29-14ff4bdf6f5e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314980 4813 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314988 4813 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.314997 4813 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/928c75f4-605c-4556-8c29-14ff4bdf6f5e-node-log\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.315032 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-etc-openvswitch\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.315066 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-node-log\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.315088 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-run-ovn-kubernetes\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.315542 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-env-overrides\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.315572 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-log-socket\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 
18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.315841 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-slash\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.315929 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.315972 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-kubelet\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.315995 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-run-systemd\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.316016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-run-netns\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.316057 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-run-openvswitch\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.316119 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-run-ovn\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.316136 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-systemd-units\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.316156 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-cni-bin\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.316193 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-host-cni-netd\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.316201 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-var-lib-openvswitch\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.316633 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-ovnkube-script-lib\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.316731 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-ovnkube-config\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.319869 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-ovn-node-metrics-cert\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.337760 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj7nf\" (UniqueName: \"kubernetes.io/projected/2ab53786-050e-4a9f-a1dd-e67a82ce1acd-kube-api-access-lj7nf\") pod \"ovnkube-node-9m5pg\" (UID: \"2ab53786-050e-4a9f-a1dd-e67a82ce1acd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: I0219 18:41:25.447387 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:25 crc kubenswrapper[4813]: W0219 18:41:25.463519 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ab53786_050e_4a9f_a1dd_e67a82ce1acd.slice/crio-c555ca69cc5025b2eb5062e90ea84647e0c93a464133eed7e58114994ab081a6 WatchSource:0}: Error finding container c555ca69cc5025b2eb5062e90ea84647e0c93a464133eed7e58114994ab081a6: Status 404 returned error can't find the container with id c555ca69cc5025b2eb5062e90ea84647e0c93a464133eed7e58114994ab081a6 Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.029984 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hksqw_b099cefb-f2e5-4f3f-976c-7433dba77ef2/kube-multus/2.log" Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.031719 4813 generic.go:334] "Generic (PLEG): container finished" podID="2ab53786-050e-4a9f-a1dd-e67a82ce1acd" containerID="c3bb9e2cdf5504544c51f2f6424fb357de7c49facc07f3ead4dc50d4f2b38d46" exitCode=0 Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.031794 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" event={"ID":"2ab53786-050e-4a9f-a1dd-e67a82ce1acd","Type":"ContainerDied","Data":"c3bb9e2cdf5504544c51f2f6424fb357de7c49facc07f3ead4dc50d4f2b38d46"} Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.031832 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" event={"ID":"2ab53786-050e-4a9f-a1dd-e67a82ce1acd","Type":"ContainerStarted","Data":"c555ca69cc5025b2eb5062e90ea84647e0c93a464133eed7e58114994ab081a6"} Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.038691 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovn-acl-logging/0.log" Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 
18:41:26.039621 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc9t2_928c75f4-605c-4556-8c29-14ff4bdf6f5e/ovn-controller/0.log" Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.040366 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" event={"ID":"928c75f4-605c-4556-8c29-14ff4bdf6f5e","Type":"ContainerDied","Data":"a720de7497231498de63ba2981716e9712d34d7099ecb3e56cbb4e9370af7958"} Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.040441 4813 scope.go:117] "RemoveContainer" containerID="73bccc2e2b4d6078c9df512ec532a1628f449b7e4d7ec2890220e9986fb06cb6" Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.040521 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pc9t2" Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.071427 4813 scope.go:117] "RemoveContainer" containerID="1caf7a922d435742a5c6d5302c70003bc86cc3ab542e2e3fe0a368c150de7838" Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.111312 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pc9t2"] Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.115990 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pc9t2"] Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.128560 4813 scope.go:117] "RemoveContainer" containerID="6191934d31b3f711d3c9d8f365251db200dea5a925cad670632dbfc527f76504" Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.140865 4813 scope.go:117] "RemoveContainer" containerID="8c1b878b9bc710465c933e1965728be9fe220fe89a3d5c05fa538abd4bf22b81" Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.164677 4813 scope.go:117] "RemoveContainer" containerID="e03ec2a3e1656fbfb567377e79bcca10e1c8a6f7660727c164936eab7365930a" Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.182007 4813 
scope.go:117] "RemoveContainer" containerID="ec80078ab4536d8f1962994c3bdf8af5fcf81516f8845e84dbb33a78b1aa2062" Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.257065 4813 scope.go:117] "RemoveContainer" containerID="a81cfe261685862b6b74d263ab66b4a2b1d68409791e82424b4f210c56561099" Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.279399 4813 scope.go:117] "RemoveContainer" containerID="c5fed64977a80ae6acac3a225e2b6b6ab865db1f36338df4a15011baae90f949" Feb 19 18:41:26 crc kubenswrapper[4813]: I0219 18:41:26.331507 4813 scope.go:117] "RemoveContainer" containerID="0244cf62334ae37b44f11119385a76667200a01b2162a064829f16e8d6c0cf65" Feb 19 18:41:27 crc kubenswrapper[4813]: I0219 18:41:27.047326 4813 generic.go:334] "Generic (PLEG): container finished" podID="1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73" containerID="c02d982a85533efbd4ba082c1cd8014d0a9f0473ac90253fb0527e2b5b64d19a" exitCode=0 Feb 19 18:41:27 crc kubenswrapper[4813]: I0219 18:41:27.047432 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" event={"ID":"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73","Type":"ContainerDied","Data":"c02d982a85533efbd4ba082c1cd8014d0a9f0473ac90253fb0527e2b5b64d19a"} Feb 19 18:41:27 crc kubenswrapper[4813]: I0219 18:41:27.053546 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" event={"ID":"2ab53786-050e-4a9f-a1dd-e67a82ce1acd","Type":"ContainerStarted","Data":"578de797a5c3b078f815e37f99861fa12bd2e2298052a4de1fb3b0ad9bc5908e"} Feb 19 18:41:27 crc kubenswrapper[4813]: I0219 18:41:27.053584 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" event={"ID":"2ab53786-050e-4a9f-a1dd-e67a82ce1acd","Type":"ContainerStarted","Data":"d6dcda7e9efb2cb3ccb13231c6b2ce0ec60f248245dca6691fd99f560b9d5324"} Feb 19 18:41:27 crc kubenswrapper[4813]: I0219 18:41:27.053598 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" event={"ID":"2ab53786-050e-4a9f-a1dd-e67a82ce1acd","Type":"ContainerStarted","Data":"45ba7d993f3942d27bf22e4ae2ab8ecd6d52b13782b1de81dd96d433e351e756"} Feb 19 18:41:27 crc kubenswrapper[4813]: I0219 18:41:27.053607 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" event={"ID":"2ab53786-050e-4a9f-a1dd-e67a82ce1acd","Type":"ContainerStarted","Data":"23ea9e18fc2d919c3d0d3c1d8fdd26fd6a6559591327358994f4ad50a2a2fbe2"} Feb 19 18:41:27 crc kubenswrapper[4813]: I0219 18:41:27.053617 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" event={"ID":"2ab53786-050e-4a9f-a1dd-e67a82ce1acd","Type":"ContainerStarted","Data":"a6a56488c8a3408ecc2dff6d43ee6da259dfc64e5526f9e4ef2bff8b0a4ebc31"} Feb 19 18:41:27 crc kubenswrapper[4813]: I0219 18:41:27.053625 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" event={"ID":"2ab53786-050e-4a9f-a1dd-e67a82ce1acd","Type":"ContainerStarted","Data":"09f22b7cf964bb7bf6992b80d58c32e8d236e92d63ff02bf598f5c74698cd870"} Feb 19 18:41:27 crc kubenswrapper[4813]: I0219 18:41:27.485904 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="928c75f4-605c-4556-8c29-14ff4bdf6f5e" path="/var/lib/kubelet/pods/928c75f4-605c-4556-8c29-14ff4bdf6f5e/volumes" Feb 19 18:41:28 crc kubenswrapper[4813]: I0219 18:41:28.789195 4813 generic.go:334] "Generic (PLEG): container finished" podID="1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73" containerID="2b49aefaea505f6d9692e75241cebaa6b18aee3332900ccb022f3231a34c577d" exitCode=0 Feb 19 18:41:28 crc kubenswrapper[4813]: I0219 18:41:28.789259 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" 
event={"ID":"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73","Type":"ContainerDied","Data":"2b49aefaea505f6d9692e75241cebaa6b18aee3332900ccb022f3231a34c577d"} Feb 19 18:41:29 crc kubenswrapper[4813]: I0219 18:41:29.799384 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" event={"ID":"2ab53786-050e-4a9f-a1dd-e67a82ce1acd","Type":"ContainerStarted","Data":"1fe8a28d91e8311bc32e08568edfd1bf6d823287c23494c9ed9c6cda5c8b10a5"} Feb 19 18:41:29 crc kubenswrapper[4813]: I0219 18:41:29.896367 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" Feb 19 18:41:30 crc kubenswrapper[4813]: I0219 18:41:30.075903 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcjvz\" (UniqueName: \"kubernetes.io/projected/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-kube-api-access-vcjvz\") pod \"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73\" (UID: \"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73\") " Feb 19 18:41:30 crc kubenswrapper[4813]: I0219 18:41:30.076108 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-bundle\") pod \"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73\" (UID: \"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73\") " Feb 19 18:41:30 crc kubenswrapper[4813]: I0219 18:41:30.076134 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-util\") pod \"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73\" (UID: \"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73\") " Feb 19 18:41:30 crc kubenswrapper[4813]: I0219 18:41:30.076895 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-bundle" (OuterVolumeSpecName: "bundle") pod 
"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73" (UID: "1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:41:30 crc kubenswrapper[4813]: I0219 18:41:30.090880 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-util" (OuterVolumeSpecName: "util") pod "1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73" (UID: "1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:41:30 crc kubenswrapper[4813]: I0219 18:41:30.091265 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-kube-api-access-vcjvz" (OuterVolumeSpecName: "kube-api-access-vcjvz") pod "1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73" (UID: "1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73"). InnerVolumeSpecName "kube-api-access-vcjvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:41:30 crc kubenswrapper[4813]: I0219 18:41:30.177546 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcjvz\" (UniqueName: \"kubernetes.io/projected/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-kube-api-access-vcjvz\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:30 crc kubenswrapper[4813]: I0219 18:41:30.177597 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:30 crc kubenswrapper[4813]: I0219 18:41:30.177615 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73-util\") on node \"crc\" DevicePath \"\"" Feb 19 18:41:30 crc kubenswrapper[4813]: I0219 18:41:30.809314 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" event={"ID":"1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73","Type":"ContainerDied","Data":"c1f0729598b0c1982f94eb583255e90ecca8c61fb6f43c9276a7539f8e137538"} Feb 19 18:41:30 crc kubenswrapper[4813]: I0219 18:41:30.809628 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1f0729598b0c1982f94eb583255e90ecca8c61fb6f43c9276a7539f8e137538" Feb 19 18:41:30 crc kubenswrapper[4813]: I0219 18:41:30.809507 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm" Feb 19 18:41:31 crc kubenswrapper[4813]: I0219 18:41:31.818012 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" event={"ID":"2ab53786-050e-4a9f-a1dd-e67a82ce1acd","Type":"ContainerStarted","Data":"dcda366cc7804941dd694932a8fa0daa1b5267553568f300d1e09c26fc036733"} Feb 19 18:41:31 crc kubenswrapper[4813]: I0219 18:41:31.818586 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:31 crc kubenswrapper[4813]: I0219 18:41:31.818676 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:31 crc kubenswrapper[4813]: I0219 18:41:31.876749 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" podStartSLOduration=6.876719997 podStartE2EDuration="6.876719997s" podCreationTimestamp="2026-02-19 18:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:41:31.871508971 +0000 UTC m=+711.096949512" watchObservedRunningTime="2026-02-19 18:41:31.876719997 +0000 UTC m=+711.102160528" Feb 19 18:41:31 crc kubenswrapper[4813]: I0219 
18:41:31.897605 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:32 crc kubenswrapper[4813]: I0219 18:41:32.824632 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:32 crc kubenswrapper[4813]: I0219 18:41:32.876467 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:41:33 crc kubenswrapper[4813]: I0219 18:41:33.799506 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-t29nq"] Feb 19 18:41:33 crc kubenswrapper[4813]: E0219 18:41:33.799720 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73" containerName="extract" Feb 19 18:41:33 crc kubenswrapper[4813]: I0219 18:41:33.799735 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73" containerName="extract" Feb 19 18:41:33 crc kubenswrapper[4813]: E0219 18:41:33.799751 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73" containerName="util" Feb 19 18:41:33 crc kubenswrapper[4813]: I0219 18:41:33.799757 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73" containerName="util" Feb 19 18:41:33 crc kubenswrapper[4813]: E0219 18:41:33.799776 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73" containerName="pull" Feb 19 18:41:33 crc kubenswrapper[4813]: I0219 18:41:33.799782 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73" containerName="pull" Feb 19 18:41:33 crc kubenswrapper[4813]: I0219 18:41:33.799865 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73" 
containerName="extract" Feb 19 18:41:33 crc kubenswrapper[4813]: I0219 18:41:33.800215 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:41:33 crc kubenswrapper[4813]: I0219 18:41:33.801663 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-p2nwf" Feb 19 18:41:33 crc kubenswrapper[4813]: I0219 18:41:33.801713 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 19 18:41:33 crc kubenswrapper[4813]: I0219 18:41:33.802298 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 19 18:41:33 crc kubenswrapper[4813]: I0219 18:41:33.820689 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-t29nq"] Feb 19 18:41:33 crc kubenswrapper[4813]: I0219 18:41:33.924847 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4rdh\" (UniqueName: \"kubernetes.io/projected/df018e2f-81ed-4685-aa56-1fae8fee55ef-kube-api-access-b4rdh\") pod \"nmstate-operator-694c9596b7-t29nq\" (UID: \"df018e2f-81ed-4685-aa56-1fae8fee55ef\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:41:34 crc kubenswrapper[4813]: I0219 18:41:34.026180 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4rdh\" (UniqueName: \"kubernetes.io/projected/df018e2f-81ed-4685-aa56-1fae8fee55ef-kube-api-access-b4rdh\") pod \"nmstate-operator-694c9596b7-t29nq\" (UID: \"df018e2f-81ed-4685-aa56-1fae8fee55ef\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:41:34 crc kubenswrapper[4813]: I0219 18:41:34.046089 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4rdh\" (UniqueName: 
\"kubernetes.io/projected/df018e2f-81ed-4685-aa56-1fae8fee55ef-kube-api-access-b4rdh\") pod \"nmstate-operator-694c9596b7-t29nq\" (UID: \"df018e2f-81ed-4685-aa56-1fae8fee55ef\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:41:34 crc kubenswrapper[4813]: I0219 18:41:34.116223 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:41:34 crc kubenswrapper[4813]: E0219 18:41:34.146759 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-694c9596b7-t29nq_openshift-nmstate_df018e2f-81ed-4685-aa56-1fae8fee55ef_0(dff4864a934cffcb8bcc7145d7818a1db84c779121fcd3b313817c7ab476c4a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 18:41:34 crc kubenswrapper[4813]: E0219 18:41:34.146883 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-694c9596b7-t29nq_openshift-nmstate_df018e2f-81ed-4685-aa56-1fae8fee55ef_0(dff4864a934cffcb8bcc7145d7818a1db84c779121fcd3b313817c7ab476c4a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:41:34 crc kubenswrapper[4813]: E0219 18:41:34.146938 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-694c9596b7-t29nq_openshift-nmstate_df018e2f-81ed-4685-aa56-1fae8fee55ef_0(dff4864a934cffcb8bcc7145d7818a1db84c779121fcd3b313817c7ab476c4a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:41:34 crc kubenswrapper[4813]: E0219 18:41:34.147101 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-694c9596b7-t29nq_openshift-nmstate(df018e2f-81ed-4685-aa56-1fae8fee55ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-694c9596b7-t29nq_openshift-nmstate(df018e2f-81ed-4685-aa56-1fae8fee55ef)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-694c9596b7-t29nq_openshift-nmstate_df018e2f-81ed-4685-aa56-1fae8fee55ef_0(dff4864a934cffcb8bcc7145d7818a1db84c779121fcd3b313817c7ab476c4a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" podUID="df018e2f-81ed-4685-aa56-1fae8fee55ef" Feb 19 18:41:34 crc kubenswrapper[4813]: I0219 18:41:34.832544 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:41:34 crc kubenswrapper[4813]: I0219 18:41:34.833113 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:41:34 crc kubenswrapper[4813]: E0219 18:41:34.857710 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-694c9596b7-t29nq_openshift-nmstate_df018e2f-81ed-4685-aa56-1fae8fee55ef_0(a30ad42728cee9d5ee4a3744a777edcfe704bd4b24151c41e045cc3db0e47090): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 18:41:34 crc kubenswrapper[4813]: E0219 18:41:34.857777 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-694c9596b7-t29nq_openshift-nmstate_df018e2f-81ed-4685-aa56-1fae8fee55ef_0(a30ad42728cee9d5ee4a3744a777edcfe704bd4b24151c41e045cc3db0e47090): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:41:34 crc kubenswrapper[4813]: E0219 18:41:34.857804 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-694c9596b7-t29nq_openshift-nmstate_df018e2f-81ed-4685-aa56-1fae8fee55ef_0(a30ad42728cee9d5ee4a3744a777edcfe704bd4b24151c41e045cc3db0e47090): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:41:34 crc kubenswrapper[4813]: E0219 18:41:34.857857 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-694c9596b7-t29nq_openshift-nmstate(df018e2f-81ed-4685-aa56-1fae8fee55ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-694c9596b7-t29nq_openshift-nmstate(df018e2f-81ed-4685-aa56-1fae8fee55ef)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-694c9596b7-t29nq_openshift-nmstate_df018e2f-81ed-4685-aa56-1fae8fee55ef_0(a30ad42728cee9d5ee4a3744a777edcfe704bd4b24151c41e045cc3db0e47090): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" podUID="df018e2f-81ed-4685-aa56-1fae8fee55ef" Feb 19 18:41:36 crc kubenswrapper[4813]: I0219 18:41:36.471271 4813 scope.go:117] "RemoveContainer" containerID="1bf20d5a1dff3d1f2180385a366bd304f81d49489fb5beb68ef022b82460d17a" Feb 19 18:41:36 crc kubenswrapper[4813]: E0219 18:41:36.471782 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-hksqw_openshift-multus(b099cefb-f2e5-4f3f-976c-7433dba77ef2)\"" pod="openshift-multus/multus-hksqw" podUID="b099cefb-f2e5-4f3f-976c-7433dba77ef2" Feb 19 18:41:50 crc kubenswrapper[4813]: I0219 18:41:50.470645 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:41:50 crc kubenswrapper[4813]: I0219 18:41:50.472107 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:41:50 crc kubenswrapper[4813]: I0219 18:41:50.472136 4813 scope.go:117] "RemoveContainer" containerID="1bf20d5a1dff3d1f2180385a366bd304f81d49489fb5beb68ef022b82460d17a" Feb 19 18:41:50 crc kubenswrapper[4813]: E0219 18:41:50.514207 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-694c9596b7-t29nq_openshift-nmstate_df018e2f-81ed-4685-aa56-1fae8fee55ef_0(f24a9e24810df1dd2751fd7ef6ddcfcd7828d52d467217dedc8954c8cb2060b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 18:41:50 crc kubenswrapper[4813]: E0219 18:41:50.514291 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-694c9596b7-t29nq_openshift-nmstate_df018e2f-81ed-4685-aa56-1fae8fee55ef_0(f24a9e24810df1dd2751fd7ef6ddcfcd7828d52d467217dedc8954c8cb2060b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:41:50 crc kubenswrapper[4813]: E0219 18:41:50.514325 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-694c9596b7-t29nq_openshift-nmstate_df018e2f-81ed-4685-aa56-1fae8fee55ef_0(f24a9e24810df1dd2751fd7ef6ddcfcd7828d52d467217dedc8954c8cb2060b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:41:50 crc kubenswrapper[4813]: E0219 18:41:50.514384 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-694c9596b7-t29nq_openshift-nmstate(df018e2f-81ed-4685-aa56-1fae8fee55ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-694c9596b7-t29nq_openshift-nmstate(df018e2f-81ed-4685-aa56-1fae8fee55ef)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-694c9596b7-t29nq_openshift-nmstate_df018e2f-81ed-4685-aa56-1fae8fee55ef_0(f24a9e24810df1dd2751fd7ef6ddcfcd7828d52d467217dedc8954c8cb2060b3): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" podUID="df018e2f-81ed-4685-aa56-1fae8fee55ef" Feb 19 18:41:50 crc kubenswrapper[4813]: I0219 18:41:50.929933 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hksqw_b099cefb-f2e5-4f3f-976c-7433dba77ef2/kube-multus/2.log" Feb 19 18:41:50 crc kubenswrapper[4813]: I0219 18:41:50.930441 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hksqw" event={"ID":"b099cefb-f2e5-4f3f-976c-7433dba77ef2","Type":"ContainerStarted","Data":"b1221119a3b04403c401c424542bd00de405549dcb1c524f770d8d6445f0a1e9"} Feb 19 18:41:55 crc kubenswrapper[4813]: I0219 18:41:55.485927 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9m5pg" Feb 19 18:42:00 crc kubenswrapper[4813]: I0219 18:42:00.330120 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:42:00 crc kubenswrapper[4813]: I0219 18:42:00.330206 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:42:04 crc kubenswrapper[4813]: I0219 18:42:04.471189 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:42:04 crc kubenswrapper[4813]: I0219 18:42:04.472513 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" Feb 19 18:42:04 crc kubenswrapper[4813]: I0219 18:42:04.683268 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-t29nq"] Feb 19 18:42:05 crc kubenswrapper[4813]: I0219 18:42:05.032883 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" event={"ID":"df018e2f-81ed-4685-aa56-1fae8fee55ef","Type":"ContainerStarted","Data":"a50547b9e15047fc27082d14c65538549c339f9b3cd62c9836b2f3370ac92425"} Feb 19 18:42:09 crc kubenswrapper[4813]: I0219 18:42:09.057724 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" event={"ID":"df018e2f-81ed-4685-aa56-1fae8fee55ef","Type":"ContainerStarted","Data":"1d49b0a93c756beb43c681533c1e0bd9f471c779aa72fa41a9cdce125c227275"} Feb 19 18:42:09 crc kubenswrapper[4813]: I0219 18:42:09.084153 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-t29nq" podStartSLOduration=32.544763516 podStartE2EDuration="36.084131647s" podCreationTimestamp="2026-02-19 18:41:33 +0000 UTC" firstStartedPulling="2026-02-19 18:42:04.691672026 +0000 UTC m=+743.917112577" lastFinishedPulling="2026-02-19 18:42:08.231040167 +0000 UTC m=+747.456480708" observedRunningTime="2026-02-19 18:42:09.080802594 +0000 UTC m=+748.306243175" watchObservedRunningTime="2026-02-19 18:42:09.084131647 +0000 UTC m=+748.309572208" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.098796 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-9g649"] Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.100464 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9g649" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.103091 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-74knz" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.124273 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r"] Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.125346 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.131294 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.137719 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-9g649"] Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.144933 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-fgpk5"] Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.145758 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.150110 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r"] Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.242216 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/22bb5199-3f3d-42a2-8f5a-b97deac59140-nmstate-lock\") pod \"nmstate-handler-fgpk5\" (UID: \"22bb5199-3f3d-42a2-8f5a-b97deac59140\") " pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.242547 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/22bb5199-3f3d-42a2-8f5a-b97deac59140-dbus-socket\") pod \"nmstate-handler-fgpk5\" (UID: \"22bb5199-3f3d-42a2-8f5a-b97deac59140\") " pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.242662 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jszcx\" (UniqueName: \"kubernetes.io/projected/22bb5199-3f3d-42a2-8f5a-b97deac59140-kube-api-access-jszcx\") pod \"nmstate-handler-fgpk5\" (UID: \"22bb5199-3f3d-42a2-8f5a-b97deac59140\") " pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.242775 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6764\" (UniqueName: \"kubernetes.io/projected/4978407d-39ad-4cc4-b30c-88b2146d55b0-kube-api-access-l6764\") pod \"nmstate-webhook-866bcb46dc-hbm7r\" (UID: \"4978407d-39ad-4cc4-b30c-88b2146d55b0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.242882 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/22bb5199-3f3d-42a2-8f5a-b97deac59140-ovs-socket\") pod \"nmstate-handler-fgpk5\" (UID: \"22bb5199-3f3d-42a2-8f5a-b97deac59140\") " pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.243022 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqjmd\" (UniqueName: \"kubernetes.io/projected/92bb56c6-4974-463d-9714-093301482525-kube-api-access-tqjmd\") pod \"nmstate-metrics-58c85c668d-9g649\" (UID: \"92bb56c6-4974-463d-9714-093301482525\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-9g649" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.243131 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4978407d-39ad-4cc4-b30c-88b2146d55b0-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-hbm7r\" (UID: \"4978407d-39ad-4cc4-b30c-88b2146d55b0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.246277 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc"] Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.246918 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.249493 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.250024 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.250446 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-txjxc" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.262652 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc"] Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.344233 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4978407d-39ad-4cc4-b30c-88b2146d55b0-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-hbm7r\" (UID: \"4978407d-39ad-4cc4-b30c-88b2146d55b0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.344528 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/22bb5199-3f3d-42a2-8f5a-b97deac59140-nmstate-lock\") pod \"nmstate-handler-fgpk5\" (UID: \"22bb5199-3f3d-42a2-8f5a-b97deac59140\") " pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.344643 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/22bb5199-3f3d-42a2-8f5a-b97deac59140-nmstate-lock\") pod \"nmstate-handler-fgpk5\" (UID: \"22bb5199-3f3d-42a2-8f5a-b97deac59140\") " pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.344728 
4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92c4w\" (UniqueName: \"kubernetes.io/projected/9f9fe016-b91a-4575-8312-e8731d603bfc-kube-api-access-92c4w\") pod \"nmstate-console-plugin-5c78fc5d65-4g2vc\" (UID: \"9f9fe016-b91a-4575-8312-e8731d603bfc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.344848 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/22bb5199-3f3d-42a2-8f5a-b97deac59140-dbus-socket\") pod \"nmstate-handler-fgpk5\" (UID: \"22bb5199-3f3d-42a2-8f5a-b97deac59140\") " pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.345010 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jszcx\" (UniqueName: \"kubernetes.io/projected/22bb5199-3f3d-42a2-8f5a-b97deac59140-kube-api-access-jszcx\") pod \"nmstate-handler-fgpk5\" (UID: \"22bb5199-3f3d-42a2-8f5a-b97deac59140\") " pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.345112 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6764\" (UniqueName: \"kubernetes.io/projected/4978407d-39ad-4cc4-b30c-88b2146d55b0-kube-api-access-l6764\") pod \"nmstate-webhook-866bcb46dc-hbm7r\" (UID: \"4978407d-39ad-4cc4-b30c-88b2146d55b0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.345179 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/22bb5199-3f3d-42a2-8f5a-b97deac59140-dbus-socket\") pod \"nmstate-handler-fgpk5\" (UID: \"22bb5199-3f3d-42a2-8f5a-b97deac59140\") " pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 
18:42:10.345275 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/22bb5199-3f3d-42a2-8f5a-b97deac59140-ovs-socket\") pod \"nmstate-handler-fgpk5\" (UID: \"22bb5199-3f3d-42a2-8f5a-b97deac59140\") " pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.345370 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9f9fe016-b91a-4575-8312-e8731d603bfc-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4g2vc\" (UID: \"9f9fe016-b91a-4575-8312-e8731d603bfc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.345474 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9f9fe016-b91a-4575-8312-e8731d603bfc-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4g2vc\" (UID: \"9f9fe016-b91a-4575-8312-e8731d603bfc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.345566 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqjmd\" (UniqueName: \"kubernetes.io/projected/92bb56c6-4974-463d-9714-093301482525-kube-api-access-tqjmd\") pod \"nmstate-metrics-58c85c668d-9g649\" (UID: \"92bb56c6-4974-463d-9714-093301482525\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-9g649" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.345303 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/22bb5199-3f3d-42a2-8f5a-b97deac59140-ovs-socket\") pod \"nmstate-handler-fgpk5\" (UID: \"22bb5199-3f3d-42a2-8f5a-b97deac59140\") " pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 
18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.351391 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4978407d-39ad-4cc4-b30c-88b2146d55b0-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-hbm7r\" (UID: \"4978407d-39ad-4cc4-b30c-88b2146d55b0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.368920 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jszcx\" (UniqueName: \"kubernetes.io/projected/22bb5199-3f3d-42a2-8f5a-b97deac59140-kube-api-access-jszcx\") pod \"nmstate-handler-fgpk5\" (UID: \"22bb5199-3f3d-42a2-8f5a-b97deac59140\") " pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.372911 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6764\" (UniqueName: \"kubernetes.io/projected/4978407d-39ad-4cc4-b30c-88b2146d55b0-kube-api-access-l6764\") pod \"nmstate-webhook-866bcb46dc-hbm7r\" (UID: \"4978407d-39ad-4cc4-b30c-88b2146d55b0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.381912 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqjmd\" (UniqueName: \"kubernetes.io/projected/92bb56c6-4974-463d-9714-093301482525-kube-api-access-tqjmd\") pod \"nmstate-metrics-58c85c668d-9g649\" (UID: \"92bb56c6-4974-463d-9714-093301482525\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-9g649" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.419527 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5c7595d455-bnfvq"] Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.420416 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.421281 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9g649" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.437925 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c7595d455-bnfvq"] Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.441978 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.446228 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9f9fe016-b91a-4575-8312-e8731d603bfc-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4g2vc\" (UID: \"9f9fe016-b91a-4575-8312-e8731d603bfc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.446263 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9f9fe016-b91a-4575-8312-e8731d603bfc-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4g2vc\" (UID: \"9f9fe016-b91a-4575-8312-e8731d603bfc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.446333 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92c4w\" (UniqueName: \"kubernetes.io/projected/9f9fe016-b91a-4575-8312-e8731d603bfc-kube-api-access-92c4w\") pod \"nmstate-console-plugin-5c78fc5d65-4g2vc\" (UID: \"9f9fe016-b91a-4575-8312-e8731d603bfc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.447459 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9f9fe016-b91a-4575-8312-e8731d603bfc-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4g2vc\" (UID: \"9f9fe016-b91a-4575-8312-e8731d603bfc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.449775 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9f9fe016-b91a-4575-8312-e8731d603bfc-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4g2vc\" (UID: \"9f9fe016-b91a-4575-8312-e8731d603bfc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.467877 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.469111 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92c4w\" (UniqueName: \"kubernetes.io/projected/9f9fe016-b91a-4575-8312-e8731d603bfc-kube-api-access-92c4w\") pod \"nmstate-console-plugin-5c78fc5d65-4g2vc\" (UID: \"9f9fe016-b91a-4575-8312-e8731d603bfc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.547571 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-console-oauth-config\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.547878 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-oauth-serving-cert\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.547901 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x65g\" (UniqueName: \"kubernetes.io/projected/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-kube-api-access-9x65g\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.547923 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-console-serving-cert\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.547994 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-console-config\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.548031 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-trusted-ca-bundle\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.548047 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-service-ca\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.559357 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.644385 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-9g649"] Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.650162 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-oauth-serving-cert\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.650207 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x65g\" (UniqueName: \"kubernetes.io/projected/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-kube-api-access-9x65g\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.650235 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-console-serving-cert\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.650265 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-console-config\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.650295 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-trusted-ca-bundle\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.650367 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-service-ca\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.650404 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-console-oauth-config\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.652338 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-oauth-serving-cert\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.652509 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-console-config\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.653740 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-trusted-ca-bundle\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.654322 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-service-ca\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.656523 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-console-serving-cert\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.662936 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-console-oauth-config\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.671053 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x65g\" (UniqueName: 
\"kubernetes.io/projected/2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5-kube-api-access-9x65g\") pod \"console-5c7595d455-bnfvq\" (UID: \"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5\") " pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.704809 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r"] Feb 19 18:42:10 crc kubenswrapper[4813]: W0219 18:42:10.717523 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4978407d_39ad_4cc4_b30c_88b2146d55b0.slice/crio-66ecfd160f3ff3f89b3d393c330a9e8d2917e2e43a71803958bed5dbd7aa716c WatchSource:0}: Error finding container 66ecfd160f3ff3f89b3d393c330a9e8d2917e2e43a71803958bed5dbd7aa716c: Status 404 returned error can't find the container with id 66ecfd160f3ff3f89b3d393c330a9e8d2917e2e43a71803958bed5dbd7aa716c Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.786923 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:10 crc kubenswrapper[4813]: I0219 18:42:10.805600 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc"] Feb 19 18:42:10 crc kubenswrapper[4813]: W0219 18:42:10.810234 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f9fe016_b91a_4575_8312_e8731d603bfc.slice/crio-121d35a7d8f6c492b8be971fcd1f566160f8294dd8808d6d33451d3a5d862133 WatchSource:0}: Error finding container 121d35a7d8f6c492b8be971fcd1f566160f8294dd8808d6d33451d3a5d862133: Status 404 returned error can't find the container with id 121d35a7d8f6c492b8be971fcd1f566160f8294dd8808d6d33451d3a5d862133 Feb 19 18:42:11 crc kubenswrapper[4813]: I0219 18:42:11.070585 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc" event={"ID":"9f9fe016-b91a-4575-8312-e8731d603bfc","Type":"ContainerStarted","Data":"121d35a7d8f6c492b8be971fcd1f566160f8294dd8808d6d33451d3a5d862133"} Feb 19 18:42:11 crc kubenswrapper[4813]: I0219 18:42:11.071600 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r" event={"ID":"4978407d-39ad-4cc4-b30c-88b2146d55b0","Type":"ContainerStarted","Data":"66ecfd160f3ff3f89b3d393c330a9e8d2917e2e43a71803958bed5dbd7aa716c"} Feb 19 18:42:11 crc kubenswrapper[4813]: I0219 18:42:11.072608 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9g649" event={"ID":"92bb56c6-4974-463d-9714-093301482525","Type":"ContainerStarted","Data":"c3a1ea6a5c1fe96b9fb356fa5644688af6400a937888d39a4e82a0e1db711ed1"} Feb 19 18:42:11 crc kubenswrapper[4813]: I0219 18:42:11.073698 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fgpk5" 
event={"ID":"22bb5199-3f3d-42a2-8f5a-b97deac59140","Type":"ContainerStarted","Data":"4541f8d055ca08f7b6065cf1958a3d9fb0d3dde3d2031bd18609935a36b0e78f"} Feb 19 18:42:11 crc kubenswrapper[4813]: I0219 18:42:11.160193 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c7595d455-bnfvq"] Feb 19 18:42:11 crc kubenswrapper[4813]: W0219 18:42:11.167554 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fbfcff5_38ed_4398_b41f_9a8bb5be9bc5.slice/crio-0d404dfce7924705e36760e86fd0239a4b4733d82e6506a0c9a15ed79ceec0ef WatchSource:0}: Error finding container 0d404dfce7924705e36760e86fd0239a4b4733d82e6506a0c9a15ed79ceec0ef: Status 404 returned error can't find the container with id 0d404dfce7924705e36760e86fd0239a4b4733d82e6506a0c9a15ed79ceec0ef Feb 19 18:42:12 crc kubenswrapper[4813]: I0219 18:42:12.080294 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c7595d455-bnfvq" event={"ID":"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5","Type":"ContainerStarted","Data":"385d2f95a4095a07095a4d910ebcbed8793158f905adafc50e15985e7d54429c"} Feb 19 18:42:12 crc kubenswrapper[4813]: I0219 18:42:12.080784 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c7595d455-bnfvq" event={"ID":"2fbfcff5-38ed-4398-b41f-9a8bb5be9bc5","Type":"ContainerStarted","Data":"0d404dfce7924705e36760e86fd0239a4b4733d82e6506a0c9a15ed79ceec0ef"} Feb 19 18:42:12 crc kubenswrapper[4813]: I0219 18:42:12.101044 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5c7595d455-bnfvq" podStartSLOduration=2.101001763 podStartE2EDuration="2.101001763s" podCreationTimestamp="2026-02-19 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:42:12.096304327 +0000 UTC m=+751.321744898" 
watchObservedRunningTime="2026-02-19 18:42:12.101001763 +0000 UTC m=+751.326442304" Feb 19 18:42:15 crc kubenswrapper[4813]: I0219 18:42:15.099982 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9g649" event={"ID":"92bb56c6-4974-463d-9714-093301482525","Type":"ContainerStarted","Data":"c6ed98fe1216cfe73e84534a9658d8d13b4aa21572f4640b54ca2285a2ce32cc"} Feb 19 18:42:15 crc kubenswrapper[4813]: I0219 18:42:15.104527 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fgpk5" event={"ID":"22bb5199-3f3d-42a2-8f5a-b97deac59140","Type":"ContainerStarted","Data":"bf375e09db7d9cc39cc7313066fe0f90a2ec072f768ed545060a772f3ad546bc"} Feb 19 18:42:15 crc kubenswrapper[4813]: I0219 18:42:15.104662 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 18:42:15 crc kubenswrapper[4813]: I0219 18:42:15.107080 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc" event={"ID":"9f9fe016-b91a-4575-8312-e8731d603bfc","Type":"ContainerStarted","Data":"181e49161d554cd4ab5dcfee6c153822c74ae0d8246a159487a2926c9a20f390"} Feb 19 18:42:15 crc kubenswrapper[4813]: I0219 18:42:15.110210 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r" event={"ID":"4978407d-39ad-4cc4-b30c-88b2146d55b0","Type":"ContainerStarted","Data":"8f2b4b20848768dfb421e50ff63523a9e525025b404e0aad1445abfbd9d363be"} Feb 19 18:42:15 crc kubenswrapper[4813]: I0219 18:42:15.110504 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r" Feb 19 18:42:15 crc kubenswrapper[4813]: I0219 18:42:15.130395 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-fgpk5" podStartSLOduration=1.386819621 
podStartE2EDuration="5.13028199s" podCreationTimestamp="2026-02-19 18:42:10 +0000 UTC" firstStartedPulling="2026-02-19 18:42:10.494209184 +0000 UTC m=+749.719649725" lastFinishedPulling="2026-02-19 18:42:14.237671523 +0000 UTC m=+753.463112094" observedRunningTime="2026-02-19 18:42:15.124748209 +0000 UTC m=+754.350188830" watchObservedRunningTime="2026-02-19 18:42:15.13028199 +0000 UTC m=+754.355722551" Feb 19 18:42:15 crc kubenswrapper[4813]: I0219 18:42:15.145719 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4g2vc" podStartSLOduration=1.720832771 podStartE2EDuration="5.145694175s" podCreationTimestamp="2026-02-19 18:42:10 +0000 UTC" firstStartedPulling="2026-02-19 18:42:10.812316374 +0000 UTC m=+750.037756915" lastFinishedPulling="2026-02-19 18:42:14.237177768 +0000 UTC m=+753.462618319" observedRunningTime="2026-02-19 18:42:15.144313962 +0000 UTC m=+754.369754503" watchObservedRunningTime="2026-02-19 18:42:15.145694175 +0000 UTC m=+754.371134716" Feb 19 18:42:15 crc kubenswrapper[4813]: I0219 18:42:15.206225 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r" podStartSLOduration=1.610954636 podStartE2EDuration="5.206199529s" podCreationTimestamp="2026-02-19 18:42:10 +0000 UTC" firstStartedPulling="2026-02-19 18:42:10.721823716 +0000 UTC m=+749.947264257" lastFinishedPulling="2026-02-19 18:42:14.317068609 +0000 UTC m=+753.542509150" observedRunningTime="2026-02-19 18:42:15.202161605 +0000 UTC m=+754.427602156" watchObservedRunningTime="2026-02-19 18:42:15.206199529 +0000 UTC m=+754.431640070" Feb 19 18:42:15 crc kubenswrapper[4813]: I0219 18:42:15.910709 4813 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 18:42:18 crc kubenswrapper[4813]: I0219 18:42:18.129326 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-58c85c668d-9g649" event={"ID":"92bb56c6-4974-463d-9714-093301482525","Type":"ContainerStarted","Data":"2db2d2b1db2c808b975fe400e1f95a23695e3694afa1cd2deab723452d932653"} Feb 19 18:42:18 crc kubenswrapper[4813]: I0219 18:42:18.157438 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-9g649" podStartSLOduration=1.822774801 podStartE2EDuration="8.157414622s" podCreationTimestamp="2026-02-19 18:42:10 +0000 UTC" firstStartedPulling="2026-02-19 18:42:10.667050199 +0000 UTC m=+749.892490740" lastFinishedPulling="2026-02-19 18:42:17.00169001 +0000 UTC m=+756.227130561" observedRunningTime="2026-02-19 18:42:18.153498812 +0000 UTC m=+757.378939353" watchObservedRunningTime="2026-02-19 18:42:18.157414622 +0000 UTC m=+757.382855203" Feb 19 18:42:20 crc kubenswrapper[4813]: I0219 18:42:20.494944 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-fgpk5" Feb 19 18:42:20 crc kubenswrapper[4813]: I0219 18:42:20.787092 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:20 crc kubenswrapper[4813]: I0219 18:42:20.788554 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:20 crc kubenswrapper[4813]: I0219 18:42:20.796286 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:21 crc kubenswrapper[4813]: I0219 18:42:21.153857 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5c7595d455-bnfvq" Feb 19 18:42:21 crc kubenswrapper[4813]: I0219 18:42:21.217611 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dqj4z"] Feb 19 18:42:30 crc kubenswrapper[4813]: I0219 18:42:30.329985 
4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:42:30 crc kubenswrapper[4813]: I0219 18:42:30.330604 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:42:30 crc kubenswrapper[4813]: I0219 18:42:30.450371 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-hbm7r" Feb 19 18:42:45 crc kubenswrapper[4813]: I0219 18:42:45.798893 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx"] Feb 19 18:42:45 crc kubenswrapper[4813]: I0219 18:42:45.801147 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" Feb 19 18:42:45 crc kubenswrapper[4813]: I0219 18:42:45.803643 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 18:42:45 crc kubenswrapper[4813]: I0219 18:42:45.820598 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx"] Feb 19 18:42:45 crc kubenswrapper[4813]: I0219 18:42:45.858098 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8lf2\" (UniqueName: \"kubernetes.io/projected/fa567e17-7ee2-4f55-9907-6c7aed9af532-kube-api-access-k8lf2\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx\" (UID: \"fa567e17-7ee2-4f55-9907-6c7aed9af532\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" Feb 19 18:42:45 crc kubenswrapper[4813]: I0219 18:42:45.858197 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa567e17-7ee2-4f55-9907-6c7aed9af532-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx\" (UID: \"fa567e17-7ee2-4f55-9907-6c7aed9af532\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" Feb 19 18:42:45 crc kubenswrapper[4813]: I0219 18:42:45.858311 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa567e17-7ee2-4f55-9907-6c7aed9af532-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx\" (UID: \"fa567e17-7ee2-4f55-9907-6c7aed9af532\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" Feb 19 18:42:45 crc kubenswrapper[4813]: 
I0219 18:42:45.959043 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa567e17-7ee2-4f55-9907-6c7aed9af532-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx\" (UID: \"fa567e17-7ee2-4f55-9907-6c7aed9af532\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" Feb 19 18:42:45 crc kubenswrapper[4813]: I0219 18:42:45.959113 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8lf2\" (UniqueName: \"kubernetes.io/projected/fa567e17-7ee2-4f55-9907-6c7aed9af532-kube-api-access-k8lf2\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx\" (UID: \"fa567e17-7ee2-4f55-9907-6c7aed9af532\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" Feb 19 18:42:45 crc kubenswrapper[4813]: I0219 18:42:45.959188 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa567e17-7ee2-4f55-9907-6c7aed9af532-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx\" (UID: \"fa567e17-7ee2-4f55-9907-6c7aed9af532\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" Feb 19 18:42:45 crc kubenswrapper[4813]: I0219 18:42:45.959983 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa567e17-7ee2-4f55-9907-6c7aed9af532-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx\" (UID: \"fa567e17-7ee2-4f55-9907-6c7aed9af532\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" Feb 19 18:42:45 crc kubenswrapper[4813]: I0219 18:42:45.960019 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/fa567e17-7ee2-4f55-9907-6c7aed9af532-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx\" (UID: \"fa567e17-7ee2-4f55-9907-6c7aed9af532\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" Feb 19 18:42:45 crc kubenswrapper[4813]: I0219 18:42:45.992553 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8lf2\" (UniqueName: \"kubernetes.io/projected/fa567e17-7ee2-4f55-9907-6c7aed9af532-kube-api-access-k8lf2\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx\" (UID: \"fa567e17-7ee2-4f55-9907-6c7aed9af532\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" Feb 19 18:42:46 crc kubenswrapper[4813]: I0219 18:42:46.135077 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" Feb 19 18:42:46 crc kubenswrapper[4813]: I0219 18:42:46.261683 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-dqj4z" podUID="9c2745dc-3e5e-4571-ba36-aa14c87c6336" containerName="console" containerID="cri-o://0ccc54834bebf1ac902b60be67dfa35caf9184a6cd116e45ccacc269c7cc9f24" gracePeriod=15 Feb 19 18:42:46 crc kubenswrapper[4813]: I0219 18:42:46.681095 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx"] Feb 19 18:42:47 crc kubenswrapper[4813]: I0219 18:42:47.333064 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dqj4z_9c2745dc-3e5e-4571-ba36-aa14c87c6336/console/0.log" Feb 19 18:42:47 crc kubenswrapper[4813]: I0219 18:42:47.333452 4813 generic.go:334] "Generic (PLEG): container finished" podID="9c2745dc-3e5e-4571-ba36-aa14c87c6336" 
containerID="0ccc54834bebf1ac902b60be67dfa35caf9184a6cd116e45ccacc269c7cc9f24" exitCode=2 Feb 19 18:42:47 crc kubenswrapper[4813]: I0219 18:42:47.333546 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dqj4z" event={"ID":"9c2745dc-3e5e-4571-ba36-aa14c87c6336","Type":"ContainerDied","Data":"0ccc54834bebf1ac902b60be67dfa35caf9184a6cd116e45ccacc269c7cc9f24"} Feb 19 18:42:47 crc kubenswrapper[4813]: I0219 18:42:47.334691 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" event={"ID":"fa567e17-7ee2-4f55-9907-6c7aed9af532","Type":"ContainerStarted","Data":"ce4b22c17b9ad61c4f8abe3fc7be4ec2c5d94409908865f8ea645dfee8a73057"} Feb 19 18:42:49 crc kubenswrapper[4813]: I0219 18:42:49.346107 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ltpbj"] Feb 19 18:42:49 crc kubenswrapper[4813]: I0219 18:42:49.381403 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" event={"ID":"fa567e17-7ee2-4f55-9907-6c7aed9af532","Type":"ContainerStarted","Data":"6c5d8ec6853c65346d93241cffaa219c9a85ff427cc049ba974b6bfb0679ce4f"} Feb 19 18:42:49 crc kubenswrapper[4813]: I0219 18:42:49.382030 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:42:49 crc kubenswrapper[4813]: I0219 18:42:49.388271 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ltpbj"] Feb 19 18:42:49 crc kubenswrapper[4813]: I0219 18:42:49.419686 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea9bcfd-192a-4da9-a309-1b519f236c4b-utilities\") pod \"redhat-operators-ltpbj\" (UID: \"eea9bcfd-192a-4da9-a309-1b519f236c4b\") " pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:42:49 crc kubenswrapper[4813]: I0219 18:42:49.419922 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnvws\" (UniqueName: \"kubernetes.io/projected/eea9bcfd-192a-4da9-a309-1b519f236c4b-kube-api-access-pnvws\") pod \"redhat-operators-ltpbj\" (UID: \"eea9bcfd-192a-4da9-a309-1b519f236c4b\") " pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:42:49 crc kubenswrapper[4813]: I0219 18:42:49.420013 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea9bcfd-192a-4da9-a309-1b519f236c4b-catalog-content\") pod \"redhat-operators-ltpbj\" (UID: \"eea9bcfd-192a-4da9-a309-1b519f236c4b\") " pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:42:49 crc kubenswrapper[4813]: I0219 18:42:49.522136 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnvws\" (UniqueName: \"kubernetes.io/projected/eea9bcfd-192a-4da9-a309-1b519f236c4b-kube-api-access-pnvws\") pod \"redhat-operators-ltpbj\" (UID: \"eea9bcfd-192a-4da9-a309-1b519f236c4b\") " pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:42:49 crc kubenswrapper[4813]: I0219 18:42:49.522306 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea9bcfd-192a-4da9-a309-1b519f236c4b-catalog-content\") pod \"redhat-operators-ltpbj\" (UID: \"eea9bcfd-192a-4da9-a309-1b519f236c4b\") " pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:42:49 crc kubenswrapper[4813]: I0219 18:42:49.522383 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea9bcfd-192a-4da9-a309-1b519f236c4b-utilities\") pod \"redhat-operators-ltpbj\" (UID: \"eea9bcfd-192a-4da9-a309-1b519f236c4b\") " pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:42:49 crc kubenswrapper[4813]: I0219 18:42:49.523082 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea9bcfd-192a-4da9-a309-1b519f236c4b-catalog-content\") pod \"redhat-operators-ltpbj\" (UID: \"eea9bcfd-192a-4da9-a309-1b519f236c4b\") " pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:42:49 crc kubenswrapper[4813]: I0219 18:42:49.523366 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea9bcfd-192a-4da9-a309-1b519f236c4b-utilities\") pod \"redhat-operators-ltpbj\" (UID: \"eea9bcfd-192a-4da9-a309-1b519f236c4b\") " pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:42:49 crc kubenswrapper[4813]: I0219 18:42:49.556365 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnvws\" (UniqueName: \"kubernetes.io/projected/eea9bcfd-192a-4da9-a309-1b519f236c4b-kube-api-access-pnvws\") pod \"redhat-operators-ltpbj\" (UID: \"eea9bcfd-192a-4da9-a309-1b519f236c4b\") " pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:42:49 crc kubenswrapper[4813]: I0219 18:42:49.705930 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:42:49 crc kubenswrapper[4813]: I0219 18:42:49.903134 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ltpbj"] Feb 19 18:42:49 crc kubenswrapper[4813]: W0219 18:42:49.912742 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeea9bcfd_192a_4da9_a309_1b519f236c4b.slice/crio-8301a220e625924dc4e7529a4e43e27f42cc214fcb0d513d588a66464fdedc50 WatchSource:0}: Error finding container 8301a220e625924dc4e7529a4e43e27f42cc214fcb0d513d588a66464fdedc50: Status 404 returned error can't find the container with id 8301a220e625924dc4e7529a4e43e27f42cc214fcb0d513d588a66464fdedc50 Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.383162 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltpbj" event={"ID":"eea9bcfd-192a-4da9-a309-1b519f236c4b","Type":"ContainerStarted","Data":"8301a220e625924dc4e7529a4e43e27f42cc214fcb0d513d588a66464fdedc50"} Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.930877 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dqj4z_9c2745dc-3e5e-4571-ba36-aa14c87c6336/console/0.log" Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.931253 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.954798 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-config\") pod \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.954998 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-oauth-config\") pod \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.955033 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpkzh\" (UniqueName: \"kubernetes.io/projected/9c2745dc-3e5e-4571-ba36-aa14c87c6336-kube-api-access-kpkzh\") pod \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.955079 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-trusted-ca-bundle\") pod \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.955213 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-service-ca\") pod \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.955250 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-serving-cert\") pod \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.955274 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-oauth-serving-cert\") pod \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\" (UID: \"9c2745dc-3e5e-4571-ba36-aa14c87c6336\") " Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.955682 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-config" (OuterVolumeSpecName: "console-config") pod "9c2745dc-3e5e-4571-ba36-aa14c87c6336" (UID: "9c2745dc-3e5e-4571-ba36-aa14c87c6336"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.955770 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9c2745dc-3e5e-4571-ba36-aa14c87c6336" (UID: "9c2745dc-3e5e-4571-ba36-aa14c87c6336"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.956148 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-service-ca" (OuterVolumeSpecName: "service-ca") pod "9c2745dc-3e5e-4571-ba36-aa14c87c6336" (UID: "9c2745dc-3e5e-4571-ba36-aa14c87c6336"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.956173 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9c2745dc-3e5e-4571-ba36-aa14c87c6336" (UID: "9c2745dc-3e5e-4571-ba36-aa14c87c6336"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.962349 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c2745dc-3e5e-4571-ba36-aa14c87c6336-kube-api-access-kpkzh" (OuterVolumeSpecName: "kube-api-access-kpkzh") pod "9c2745dc-3e5e-4571-ba36-aa14c87c6336" (UID: "9c2745dc-3e5e-4571-ba36-aa14c87c6336"). InnerVolumeSpecName "kube-api-access-kpkzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.978173 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9c2745dc-3e5e-4571-ba36-aa14c87c6336" (UID: "9c2745dc-3e5e-4571-ba36-aa14c87c6336"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:42:50 crc kubenswrapper[4813]: I0219 18:42:50.979169 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9c2745dc-3e5e-4571-ba36-aa14c87c6336" (UID: "9c2745dc-3e5e-4571-ba36-aa14c87c6336"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.056936 4813 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.056996 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpkzh\" (UniqueName: \"kubernetes.io/projected/9c2745dc-3e5e-4571-ba36-aa14c87c6336-kube-api-access-kpkzh\") on node \"crc\" DevicePath \"\"" Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.057009 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.057019 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.057030 4813 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.057042 4813 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.057051 4813 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c2745dc-3e5e-4571-ba36-aa14c87c6336-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:42:51 crc 
kubenswrapper[4813]: I0219 18:42:51.391631 4813 generic.go:334] "Generic (PLEG): container finished" podID="fa567e17-7ee2-4f55-9907-6c7aed9af532" containerID="6c5d8ec6853c65346d93241cffaa219c9a85ff427cc049ba974b6bfb0679ce4f" exitCode=0 Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.391695 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" event={"ID":"fa567e17-7ee2-4f55-9907-6c7aed9af532","Type":"ContainerDied","Data":"6c5d8ec6853c65346d93241cffaa219c9a85ff427cc049ba974b6bfb0679ce4f"} Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.394694 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dqj4z_9c2745dc-3e5e-4571-ba36-aa14c87c6336/console/0.log" Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.394810 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dqj4z" event={"ID":"9c2745dc-3e5e-4571-ba36-aa14c87c6336","Type":"ContainerDied","Data":"f19fdf31675b98b4f4345d932a6e3f4222f710964a409bea9539311f612133e9"} Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.394854 4813 scope.go:117] "RemoveContainer" containerID="0ccc54834bebf1ac902b60be67dfa35caf9184a6cd116e45ccacc269c7cc9f24" Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.394879 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-dqj4z" Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.398700 4813 generic.go:334] "Generic (PLEG): container finished" podID="eea9bcfd-192a-4da9-a309-1b519f236c4b" containerID="f772637bb820e860e514d62910433a3c9136df6ecc6dd9e2b8a319db1f045aa1" exitCode=0 Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.398720 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltpbj" event={"ID":"eea9bcfd-192a-4da9-a309-1b519f236c4b","Type":"ContainerDied","Data":"f772637bb820e860e514d62910433a3c9136df6ecc6dd9e2b8a319db1f045aa1"} Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.488999 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dqj4z"] Feb 19 18:42:51 crc kubenswrapper[4813]: I0219 18:42:51.494903 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-dqj4z"] Feb 19 18:42:53 crc kubenswrapper[4813]: I0219 18:42:53.484411 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c2745dc-3e5e-4571-ba36-aa14c87c6336" path="/var/lib/kubelet/pods/9c2745dc-3e5e-4571-ba36-aa14c87c6336/volumes" Feb 19 18:42:54 crc kubenswrapper[4813]: I0219 18:42:54.430573 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltpbj" event={"ID":"eea9bcfd-192a-4da9-a309-1b519f236c4b","Type":"ContainerStarted","Data":"9297f7615802494448363911e4abaa2117e61a35f94fbe443d04bbdbcf4768d2"} Feb 19 18:42:55 crc kubenswrapper[4813]: I0219 18:42:55.440564 4813 generic.go:334] "Generic (PLEG): container finished" podID="eea9bcfd-192a-4da9-a309-1b519f236c4b" containerID="9297f7615802494448363911e4abaa2117e61a35f94fbe443d04bbdbcf4768d2" exitCode=0 Feb 19 18:42:55 crc kubenswrapper[4813]: I0219 18:42:55.440623 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltpbj" 
event={"ID":"eea9bcfd-192a-4da9-a309-1b519f236c4b","Type":"ContainerDied","Data":"9297f7615802494448363911e4abaa2117e61a35f94fbe443d04bbdbcf4768d2"} Feb 19 18:42:56 crc kubenswrapper[4813]: I0219 18:42:56.456447 4813 generic.go:334] "Generic (PLEG): container finished" podID="fa567e17-7ee2-4f55-9907-6c7aed9af532" containerID="049e9d0d2d13a464d9cf923077a0021c66c3fe20f5de4a464a2d032739b80010" exitCode=0 Feb 19 18:42:56 crc kubenswrapper[4813]: I0219 18:42:56.456512 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" event={"ID":"fa567e17-7ee2-4f55-9907-6c7aed9af532","Type":"ContainerDied","Data":"049e9d0d2d13a464d9cf923077a0021c66c3fe20f5de4a464a2d032739b80010"} Feb 19 18:42:57 crc kubenswrapper[4813]: I0219 18:42:57.468599 4813 generic.go:334] "Generic (PLEG): container finished" podID="fa567e17-7ee2-4f55-9907-6c7aed9af532" containerID="ec7db9a53cbd48fd6dce47df67317e3b4aca60cf6ce0060f67190e14bf3b3a7c" exitCode=0 Feb 19 18:42:57 crc kubenswrapper[4813]: I0219 18:42:57.468729 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" event={"ID":"fa567e17-7ee2-4f55-9907-6c7aed9af532","Type":"ContainerDied","Data":"ec7db9a53cbd48fd6dce47df67317e3b4aca60cf6ce0060f67190e14bf3b3a7c"} Feb 19 18:42:57 crc kubenswrapper[4813]: I0219 18:42:57.484589 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltpbj" event={"ID":"eea9bcfd-192a-4da9-a309-1b519f236c4b","Type":"ContainerStarted","Data":"ad0356a6047c65beacc57acecb53258547e6855bc4422af4bcc3dcb634169d9f"} Feb 19 18:42:57 crc kubenswrapper[4813]: I0219 18:42:57.532216 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ltpbj" podStartSLOduration=3.654072885 podStartE2EDuration="8.532194501s" podCreationTimestamp="2026-02-19 18:42:49 
+0000 UTC" firstStartedPulling="2026-02-19 18:42:51.40325314 +0000 UTC m=+790.628693681" lastFinishedPulling="2026-02-19 18:42:56.281374716 +0000 UTC m=+795.506815297" observedRunningTime="2026-02-19 18:42:57.531416876 +0000 UTC m=+796.756857487" watchObservedRunningTime="2026-02-19 18:42:57.532194501 +0000 UTC m=+796.757635052" Feb 19 18:42:58 crc kubenswrapper[4813]: I0219 18:42:58.816177 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" Feb 19 18:42:58 crc kubenswrapper[4813]: I0219 18:42:58.866223 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa567e17-7ee2-4f55-9907-6c7aed9af532-bundle\") pod \"fa567e17-7ee2-4f55-9907-6c7aed9af532\" (UID: \"fa567e17-7ee2-4f55-9907-6c7aed9af532\") " Feb 19 18:42:58 crc kubenswrapper[4813]: I0219 18:42:58.866316 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8lf2\" (UniqueName: \"kubernetes.io/projected/fa567e17-7ee2-4f55-9907-6c7aed9af532-kube-api-access-k8lf2\") pod \"fa567e17-7ee2-4f55-9907-6c7aed9af532\" (UID: \"fa567e17-7ee2-4f55-9907-6c7aed9af532\") " Feb 19 18:42:58 crc kubenswrapper[4813]: I0219 18:42:58.866407 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa567e17-7ee2-4f55-9907-6c7aed9af532-util\") pod \"fa567e17-7ee2-4f55-9907-6c7aed9af532\" (UID: \"fa567e17-7ee2-4f55-9907-6c7aed9af532\") " Feb 19 18:42:58 crc kubenswrapper[4813]: I0219 18:42:58.867408 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa567e17-7ee2-4f55-9907-6c7aed9af532-bundle" (OuterVolumeSpecName: "bundle") pod "fa567e17-7ee2-4f55-9907-6c7aed9af532" (UID: "fa567e17-7ee2-4f55-9907-6c7aed9af532"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:42:58 crc kubenswrapper[4813]: I0219 18:42:58.872551 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa567e17-7ee2-4f55-9907-6c7aed9af532-kube-api-access-k8lf2" (OuterVolumeSpecName: "kube-api-access-k8lf2") pod "fa567e17-7ee2-4f55-9907-6c7aed9af532" (UID: "fa567e17-7ee2-4f55-9907-6c7aed9af532"). InnerVolumeSpecName "kube-api-access-k8lf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:42:58 crc kubenswrapper[4813]: I0219 18:42:58.878913 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa567e17-7ee2-4f55-9907-6c7aed9af532-util" (OuterVolumeSpecName: "util") pod "fa567e17-7ee2-4f55-9907-6c7aed9af532" (UID: "fa567e17-7ee2-4f55-9907-6c7aed9af532"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:42:58 crc kubenswrapper[4813]: I0219 18:42:58.968492 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fa567e17-7ee2-4f55-9907-6c7aed9af532-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:42:58 crc kubenswrapper[4813]: I0219 18:42:58.968557 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8lf2\" (UniqueName: \"kubernetes.io/projected/fa567e17-7ee2-4f55-9907-6c7aed9af532-kube-api-access-k8lf2\") on node \"crc\" DevicePath \"\"" Feb 19 18:42:58 crc kubenswrapper[4813]: I0219 18:42:58.968579 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fa567e17-7ee2-4f55-9907-6c7aed9af532-util\") on node \"crc\" DevicePath \"\"" Feb 19 18:42:59 crc kubenswrapper[4813]: I0219 18:42:59.495070 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" 
event={"ID":"fa567e17-7ee2-4f55-9907-6c7aed9af532","Type":"ContainerDied","Data":"ce4b22c17b9ad61c4f8abe3fc7be4ec2c5d94409908865f8ea645dfee8a73057"} Feb 19 18:42:59 crc kubenswrapper[4813]: I0219 18:42:59.495117 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce4b22c17b9ad61c4f8abe3fc7be4ec2c5d94409908865f8ea645dfee8a73057" Feb 19 18:42:59 crc kubenswrapper[4813]: I0219 18:42:59.495250 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx" Feb 19 18:42:59 crc kubenswrapper[4813]: I0219 18:42:59.707147 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:42:59 crc kubenswrapper[4813]: I0219 18:42:59.707418 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:43:00 crc kubenswrapper[4813]: I0219 18:43:00.330418 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:43:00 crc kubenswrapper[4813]: I0219 18:43:00.331223 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:43:00 crc kubenswrapper[4813]: I0219 18:43:00.331318 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:43:00 crc kubenswrapper[4813]: I0219 18:43:00.332241 4813 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2ec5832c886721bb1adcc6f9f4d730f479f32a26f58ced42b5b32c757c1d3c3"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 18:43:00 crc kubenswrapper[4813]: I0219 18:43:00.332353 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://d2ec5832c886721bb1adcc6f9f4d730f479f32a26f58ced42b5b32c757c1d3c3" gracePeriod=600 Feb 19 18:43:00 crc kubenswrapper[4813]: I0219 18:43:00.757409 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ltpbj" podUID="eea9bcfd-192a-4da9-a309-1b519f236c4b" containerName="registry-server" probeResult="failure" output=< Feb 19 18:43:00 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Feb 19 18:43:00 crc kubenswrapper[4813]: > Feb 19 18:43:01 crc kubenswrapper[4813]: E0219 18:43:01.734097 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod481977a2_7072_4176_abd4_863cb6104d70.slice/crio-conmon-d2ec5832c886721bb1adcc6f9f4d730f479f32a26f58ced42b5b32c757c1d3c3.scope\": RecentStats: unable to find data in memory cache]" Feb 19 18:43:02 crc kubenswrapper[4813]: I0219 18:43:02.527040 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="d2ec5832c886721bb1adcc6f9f4d730f479f32a26f58ced42b5b32c757c1d3c3" exitCode=0 Feb 19 18:43:02 crc kubenswrapper[4813]: I0219 18:43:02.527112 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"d2ec5832c886721bb1adcc6f9f4d730f479f32a26f58ced42b5b32c757c1d3c3"} Feb 19 18:43:02 crc kubenswrapper[4813]: I0219 18:43:02.527380 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"0c3855002c151cf8b5b2cf61ec6f6d7135091880565c3fee08603596d3342c68"} Feb 19 18:43:02 crc kubenswrapper[4813]: I0219 18:43:02.527401 4813 scope.go:117] "RemoveContainer" containerID="a34127baba58dd3cc724f5d51572626bf8d85f8d83ca4c8ac0dec993e2bb35bb" Feb 19 18:43:09 crc kubenswrapper[4813]: I0219 18:43:09.812618 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:43:09 crc kubenswrapper[4813]: I0219 18:43:09.880071 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.004967 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7"] Feb 19 18:43:12 crc kubenswrapper[4813]: E0219 18:43:12.005466 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa567e17-7ee2-4f55-9907-6c7aed9af532" containerName="extract" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.005480 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa567e17-7ee2-4f55-9907-6c7aed9af532" containerName="extract" Feb 19 18:43:12 crc kubenswrapper[4813]: E0219 18:43:12.005500 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2745dc-3e5e-4571-ba36-aa14c87c6336" containerName="console" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.005508 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9c2745dc-3e5e-4571-ba36-aa14c87c6336" containerName="console" Feb 19 18:43:12 crc kubenswrapper[4813]: E0219 18:43:12.005520 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa567e17-7ee2-4f55-9907-6c7aed9af532" containerName="pull" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.005529 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa567e17-7ee2-4f55-9907-6c7aed9af532" containerName="pull" Feb 19 18:43:12 crc kubenswrapper[4813]: E0219 18:43:12.005549 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa567e17-7ee2-4f55-9907-6c7aed9af532" containerName="util" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.005558 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa567e17-7ee2-4f55-9907-6c7aed9af532" containerName="util" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.005671 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa567e17-7ee2-4f55-9907-6c7aed9af532" containerName="extract" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.005692 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c2745dc-3e5e-4571-ba36-aa14c87c6336" containerName="console" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.006128 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.008046 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.008143 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.008340 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-7gs9x" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.008416 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.008421 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.032728 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7"] Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.049768 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a22dc4b9-f796-4ad3-9195-823745af58b4-apiservice-cert\") pod \"metallb-operator-controller-manager-577f5d4d96-7cmj7\" (UID: \"a22dc4b9-f796-4ad3-9195-823745af58b4\") " pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.050032 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a22dc4b9-f796-4ad3-9195-823745af58b4-webhook-cert\") pod \"metallb-operator-controller-manager-577f5d4d96-7cmj7\" (UID: 
\"a22dc4b9-f796-4ad3-9195-823745af58b4\") " pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.050175 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cqpt\" (UniqueName: \"kubernetes.io/projected/a22dc4b9-f796-4ad3-9195-823745af58b4-kube-api-access-2cqpt\") pod \"metallb-operator-controller-manager-577f5d4d96-7cmj7\" (UID: \"a22dc4b9-f796-4ad3-9195-823745af58b4\") " pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.132782 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ltpbj"] Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.133086 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ltpbj" podUID="eea9bcfd-192a-4da9-a309-1b519f236c4b" containerName="registry-server" containerID="cri-o://ad0356a6047c65beacc57acecb53258547e6855bc4422af4bcc3dcb634169d9f" gracePeriod=2 Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.151196 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a22dc4b9-f796-4ad3-9195-823745af58b4-apiservice-cert\") pod \"metallb-operator-controller-manager-577f5d4d96-7cmj7\" (UID: \"a22dc4b9-f796-4ad3-9195-823745af58b4\") " pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.151278 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a22dc4b9-f796-4ad3-9195-823745af58b4-webhook-cert\") pod \"metallb-operator-controller-manager-577f5d4d96-7cmj7\" (UID: \"a22dc4b9-f796-4ad3-9195-823745af58b4\") " 
pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.151323 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cqpt\" (UniqueName: \"kubernetes.io/projected/a22dc4b9-f796-4ad3-9195-823745af58b4-kube-api-access-2cqpt\") pod \"metallb-operator-controller-manager-577f5d4d96-7cmj7\" (UID: \"a22dc4b9-f796-4ad3-9195-823745af58b4\") " pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.156889 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a22dc4b9-f796-4ad3-9195-823745af58b4-webhook-cert\") pod \"metallb-operator-controller-manager-577f5d4d96-7cmj7\" (UID: \"a22dc4b9-f796-4ad3-9195-823745af58b4\") " pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.158247 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a22dc4b9-f796-4ad3-9195-823745af58b4-apiservice-cert\") pod \"metallb-operator-controller-manager-577f5d4d96-7cmj7\" (UID: \"a22dc4b9-f796-4ad3-9195-823745af58b4\") " pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.183852 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cqpt\" (UniqueName: \"kubernetes.io/projected/a22dc4b9-f796-4ad3-9195-823745af58b4-kube-api-access-2cqpt\") pod \"metallb-operator-controller-manager-577f5d4d96-7cmj7\" (UID: \"a22dc4b9-f796-4ad3-9195-823745af58b4\") " pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.319474 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.468405 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz"] Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.469425 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.477524 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-wfj5s" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.477941 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.481415 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.490967 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz"] Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.529918 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.586203 4813 generic.go:334] "Generic (PLEG): container finished" podID="eea9bcfd-192a-4da9-a309-1b519f236c4b" containerID="ad0356a6047c65beacc57acecb53258547e6855bc4422af4bcc3dcb634169d9f" exitCode=0 Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.586242 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltpbj" event={"ID":"eea9bcfd-192a-4da9-a309-1b519f236c4b","Type":"ContainerDied","Data":"ad0356a6047c65beacc57acecb53258547e6855bc4422af4bcc3dcb634169d9f"} Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.586265 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltpbj" event={"ID":"eea9bcfd-192a-4da9-a309-1b519f236c4b","Type":"ContainerDied","Data":"8301a220e625924dc4e7529a4e43e27f42cc214fcb0d513d588a66464fdedc50"} Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.586281 4813 scope.go:117] "RemoveContainer" containerID="ad0356a6047c65beacc57acecb53258547e6855bc4422af4bcc3dcb634169d9f" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.586372 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ltpbj" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.621459 4813 scope.go:117] "RemoveContainer" containerID="9297f7615802494448363911e4abaa2117e61a35f94fbe443d04bbdbcf4768d2" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.643219 4813 scope.go:117] "RemoveContainer" containerID="f772637bb820e860e514d62910433a3c9136df6ecc6dd9e2b8a319db1f045aa1" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.657311 4813 scope.go:117] "RemoveContainer" containerID="ad0356a6047c65beacc57acecb53258547e6855bc4422af4bcc3dcb634169d9f" Feb 19 18:43:12 crc kubenswrapper[4813]: E0219 18:43:12.657773 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0356a6047c65beacc57acecb53258547e6855bc4422af4bcc3dcb634169d9f\": container with ID starting with ad0356a6047c65beacc57acecb53258547e6855bc4422af4bcc3dcb634169d9f not found: ID does not exist" containerID="ad0356a6047c65beacc57acecb53258547e6855bc4422af4bcc3dcb634169d9f" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.657813 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0356a6047c65beacc57acecb53258547e6855bc4422af4bcc3dcb634169d9f"} err="failed to get container status \"ad0356a6047c65beacc57acecb53258547e6855bc4422af4bcc3dcb634169d9f\": rpc error: code = NotFound desc = could not find container \"ad0356a6047c65beacc57acecb53258547e6855bc4422af4bcc3dcb634169d9f\": container with ID starting with ad0356a6047c65beacc57acecb53258547e6855bc4422af4bcc3dcb634169d9f not found: ID does not exist" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.657840 4813 scope.go:117] "RemoveContainer" containerID="9297f7615802494448363911e4abaa2117e61a35f94fbe443d04bbdbcf4768d2" Feb 19 18:43:12 crc kubenswrapper[4813]: E0219 18:43:12.658355 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"9297f7615802494448363911e4abaa2117e61a35f94fbe443d04bbdbcf4768d2\": container with ID starting with 9297f7615802494448363911e4abaa2117e61a35f94fbe443d04bbdbcf4768d2 not found: ID does not exist" containerID="9297f7615802494448363911e4abaa2117e61a35f94fbe443d04bbdbcf4768d2" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.658378 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9297f7615802494448363911e4abaa2117e61a35f94fbe443d04bbdbcf4768d2"} err="failed to get container status \"9297f7615802494448363911e4abaa2117e61a35f94fbe443d04bbdbcf4768d2\": rpc error: code = NotFound desc = could not find container \"9297f7615802494448363911e4abaa2117e61a35f94fbe443d04bbdbcf4768d2\": container with ID starting with 9297f7615802494448363911e4abaa2117e61a35f94fbe443d04bbdbcf4768d2 not found: ID does not exist" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.658391 4813 scope.go:117] "RemoveContainer" containerID="f772637bb820e860e514d62910433a3c9136df6ecc6dd9e2b8a319db1f045aa1" Feb 19 18:43:12 crc kubenswrapper[4813]: E0219 18:43:12.658765 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f772637bb820e860e514d62910433a3c9136df6ecc6dd9e2b8a319db1f045aa1\": container with ID starting with f772637bb820e860e514d62910433a3c9136df6ecc6dd9e2b8a319db1f045aa1 not found: ID does not exist" containerID="f772637bb820e860e514d62910433a3c9136df6ecc6dd9e2b8a319db1f045aa1" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.658786 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f772637bb820e860e514d62910433a3c9136df6ecc6dd9e2b8a319db1f045aa1"} err="failed to get container status \"f772637bb820e860e514d62910433a3c9136df6ecc6dd9e2b8a319db1f045aa1\": rpc error: code = NotFound desc = could not find container 
\"f772637bb820e860e514d62910433a3c9136df6ecc6dd9e2b8a319db1f045aa1\": container with ID starting with f772637bb820e860e514d62910433a3c9136df6ecc6dd9e2b8a319db1f045aa1 not found: ID does not exist" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.669592 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea9bcfd-192a-4da9-a309-1b519f236c4b-catalog-content\") pod \"eea9bcfd-192a-4da9-a309-1b519f236c4b\" (UID: \"eea9bcfd-192a-4da9-a309-1b519f236c4b\") " Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.669713 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea9bcfd-192a-4da9-a309-1b519f236c4b-utilities\") pod \"eea9bcfd-192a-4da9-a309-1b519f236c4b\" (UID: \"eea9bcfd-192a-4da9-a309-1b519f236c4b\") " Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.669776 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnvws\" (UniqueName: \"kubernetes.io/projected/eea9bcfd-192a-4da9-a309-1b519f236c4b-kube-api-access-pnvws\") pod \"eea9bcfd-192a-4da9-a309-1b519f236c4b\" (UID: \"eea9bcfd-192a-4da9-a309-1b519f236c4b\") " Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.669974 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9782c1db-811f-4fa4-ad33-e6e1bed6ddf5-webhook-cert\") pod \"metallb-operator-webhook-server-6b6fccb66b-rkklz\" (UID: \"9782c1db-811f-4fa4-ad33-e6e1bed6ddf5\") " pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.670040 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrj6z\" (UniqueName: 
\"kubernetes.io/projected/9782c1db-811f-4fa4-ad33-e6e1bed6ddf5-kube-api-access-xrj6z\") pod \"metallb-operator-webhook-server-6b6fccb66b-rkklz\" (UID: \"9782c1db-811f-4fa4-ad33-e6e1bed6ddf5\") " pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.670070 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9782c1db-811f-4fa4-ad33-e6e1bed6ddf5-apiservice-cert\") pod \"metallb-operator-webhook-server-6b6fccb66b-rkklz\" (UID: \"9782c1db-811f-4fa4-ad33-e6e1bed6ddf5\") " pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.670560 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eea9bcfd-192a-4da9-a309-1b519f236c4b-utilities" (OuterVolumeSpecName: "utilities") pod "eea9bcfd-192a-4da9-a309-1b519f236c4b" (UID: "eea9bcfd-192a-4da9-a309-1b519f236c4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.682115 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea9bcfd-192a-4da9-a309-1b519f236c4b-kube-api-access-pnvws" (OuterVolumeSpecName: "kube-api-access-pnvws") pod "eea9bcfd-192a-4da9-a309-1b519f236c4b" (UID: "eea9bcfd-192a-4da9-a309-1b519f236c4b"). InnerVolumeSpecName "kube-api-access-pnvws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.772551 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrj6z\" (UniqueName: \"kubernetes.io/projected/9782c1db-811f-4fa4-ad33-e6e1bed6ddf5-kube-api-access-xrj6z\") pod \"metallb-operator-webhook-server-6b6fccb66b-rkklz\" (UID: \"9782c1db-811f-4fa4-ad33-e6e1bed6ddf5\") " pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.772602 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9782c1db-811f-4fa4-ad33-e6e1bed6ddf5-apiservice-cert\") pod \"metallb-operator-webhook-server-6b6fccb66b-rkklz\" (UID: \"9782c1db-811f-4fa4-ad33-e6e1bed6ddf5\") " pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.772653 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9782c1db-811f-4fa4-ad33-e6e1bed6ddf5-webhook-cert\") pod \"metallb-operator-webhook-server-6b6fccb66b-rkklz\" (UID: \"9782c1db-811f-4fa4-ad33-e6e1bed6ddf5\") " pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.772693 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eea9bcfd-192a-4da9-a309-1b519f236c4b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.772704 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnvws\" (UniqueName: \"kubernetes.io/projected/eea9bcfd-192a-4da9-a309-1b519f236c4b-kube-api-access-pnvws\") on node \"crc\" DevicePath \"\"" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.776616 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9782c1db-811f-4fa4-ad33-e6e1bed6ddf5-apiservice-cert\") pod \"metallb-operator-webhook-server-6b6fccb66b-rkklz\" (UID: \"9782c1db-811f-4fa4-ad33-e6e1bed6ddf5\") " pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.776620 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9782c1db-811f-4fa4-ad33-e6e1bed6ddf5-webhook-cert\") pod \"metallb-operator-webhook-server-6b6fccb66b-rkklz\" (UID: \"9782c1db-811f-4fa4-ad33-e6e1bed6ddf5\") " pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.798057 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7"] Feb 19 18:43:12 crc kubenswrapper[4813]: W0219 18:43:12.803488 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda22dc4b9_f796_4ad3_9195_823745af58b4.slice/crio-b08dd5c03ea7202b5b12146f9de0e3c39b3e82039c3907b820b69c812cc4696d WatchSource:0}: Error finding container b08dd5c03ea7202b5b12146f9de0e3c39b3e82039c3907b820b69c812cc4696d: Status 404 returned error can't find the container with id b08dd5c03ea7202b5b12146f9de0e3c39b3e82039c3907b820b69c812cc4696d Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.804813 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eea9bcfd-192a-4da9-a309-1b519f236c4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eea9bcfd-192a-4da9-a309-1b519f236c4b" (UID: "eea9bcfd-192a-4da9-a309-1b519f236c4b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.805481 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrj6z\" (UniqueName: \"kubernetes.io/projected/9782c1db-811f-4fa4-ad33-e6e1bed6ddf5-kube-api-access-xrj6z\") pod \"metallb-operator-webhook-server-6b6fccb66b-rkklz\" (UID: \"9782c1db-811f-4fa4-ad33-e6e1bed6ddf5\") " pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.874185 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eea9bcfd-192a-4da9-a309-1b519f236c4b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.914345 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ltpbj"] Feb 19 18:43:12 crc kubenswrapper[4813]: I0219 18:43:12.919826 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ltpbj"] Feb 19 18:43:13 crc kubenswrapper[4813]: I0219 18:43:13.102283 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" Feb 19 18:43:13 crc kubenswrapper[4813]: I0219 18:43:13.344136 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz"] Feb 19 18:43:13 crc kubenswrapper[4813]: I0219 18:43:13.481881 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea9bcfd-192a-4da9-a309-1b519f236c4b" path="/var/lib/kubelet/pods/eea9bcfd-192a-4da9-a309-1b519f236c4b/volumes" Feb 19 18:43:13 crc kubenswrapper[4813]: I0219 18:43:13.593843 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" event={"ID":"9782c1db-811f-4fa4-ad33-e6e1bed6ddf5","Type":"ContainerStarted","Data":"b0c21ef5e8e81555c30a4bdb7f5b3d14f57b189ddd67ee93787fc9940a441fd4"} Feb 19 18:43:13 crc kubenswrapper[4813]: I0219 18:43:13.595439 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" event={"ID":"a22dc4b9-f796-4ad3-9195-823745af58b4","Type":"ContainerStarted","Data":"b08dd5c03ea7202b5b12146f9de0e3c39b3e82039c3907b820b69c812cc4696d"} Feb 19 18:43:16 crc kubenswrapper[4813]: I0219 18:43:16.620237 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" event={"ID":"a22dc4b9-f796-4ad3-9195-823745af58b4","Type":"ContainerStarted","Data":"52c18419770da28756ede70d87314ee12c6f48f43d6390a720bf5e70e64f74ad"} Feb 19 18:43:16 crc kubenswrapper[4813]: I0219 18:43:16.621935 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" Feb 19 18:43:16 crc kubenswrapper[4813]: I0219 18:43:16.647876 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" 
podStartSLOduration=3.083032645 podStartE2EDuration="5.647848109s" podCreationTimestamp="2026-02-19 18:43:11 +0000 UTC" firstStartedPulling="2026-02-19 18:43:12.806326105 +0000 UTC m=+812.031766646" lastFinishedPulling="2026-02-19 18:43:15.371141569 +0000 UTC m=+814.596582110" observedRunningTime="2026-02-19 18:43:16.642346478 +0000 UTC m=+815.867787039" watchObservedRunningTime="2026-02-19 18:43:16.647848109 +0000 UTC m=+815.873288660" Feb 19 18:43:17 crc kubenswrapper[4813]: I0219 18:43:17.627353 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" event={"ID":"9782c1db-811f-4fa4-ad33-e6e1bed6ddf5","Type":"ContainerStarted","Data":"d0cca7b7c186ff04944b330df3b279b88717428d1095bdd1615861f313f02c75"} Feb 19 18:43:17 crc kubenswrapper[4813]: I0219 18:43:17.648762 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" podStartSLOduration=1.5897498589999999 podStartE2EDuration="5.648737897s" podCreationTimestamp="2026-02-19 18:43:12 +0000 UTC" firstStartedPulling="2026-02-19 18:43:13.350007331 +0000 UTC m=+812.575447862" lastFinishedPulling="2026-02-19 18:43:17.408995359 +0000 UTC m=+816.634435900" observedRunningTime="2026-02-19 18:43:17.645492966 +0000 UTC m=+816.870933557" watchObservedRunningTime="2026-02-19 18:43:17.648737897 +0000 UTC m=+816.874178468" Feb 19 18:43:18 crc kubenswrapper[4813]: I0219 18:43:18.659186 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" Feb 19 18:43:33 crc kubenswrapper[4813]: I0219 18:43:33.106765 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6b6fccb66b-rkklz" Feb 19 18:43:52 crc kubenswrapper[4813]: I0219 18:43:52.323936 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-controller-manager-577f5d4d96-7cmj7" Feb 19 18:43:52 crc kubenswrapper[4813]: I0219 18:43:52.936159 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4rmhj"] Feb 19 18:43:52 crc kubenswrapper[4813]: E0219 18:43:52.936362 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea9bcfd-192a-4da9-a309-1b519f236c4b" containerName="extract-utilities" Feb 19 18:43:52 crc kubenswrapper[4813]: I0219 18:43:52.936374 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea9bcfd-192a-4da9-a309-1b519f236c4b" containerName="extract-utilities" Feb 19 18:43:52 crc kubenswrapper[4813]: E0219 18:43:52.936385 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea9bcfd-192a-4da9-a309-1b519f236c4b" containerName="extract-content" Feb 19 18:43:52 crc kubenswrapper[4813]: I0219 18:43:52.936390 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea9bcfd-192a-4da9-a309-1b519f236c4b" containerName="extract-content" Feb 19 18:43:52 crc kubenswrapper[4813]: E0219 18:43:52.936406 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea9bcfd-192a-4da9-a309-1b519f236c4b" containerName="registry-server" Feb 19 18:43:52 crc kubenswrapper[4813]: I0219 18:43:52.936413 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea9bcfd-192a-4da9-a309-1b519f236c4b" containerName="registry-server" Feb 19 18:43:52 crc kubenswrapper[4813]: I0219 18:43:52.936502 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea9bcfd-192a-4da9-a309-1b519f236c4b" containerName="registry-server" Feb 19 18:43:52 crc kubenswrapper[4813]: I0219 18:43:52.938280 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:52 crc kubenswrapper[4813]: I0219 18:43:52.940022 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 19 18:43:52 crc kubenswrapper[4813]: I0219 18:43:52.940536 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-hz2bk" Feb 19 18:43:52 crc kubenswrapper[4813]: I0219 18:43:52.940675 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 19 18:43:52 crc kubenswrapper[4813]: I0219 18:43:52.957063 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx"] Feb 19 18:43:52 crc kubenswrapper[4813]: I0219 18:43:52.957757 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx" Feb 19 18:43:52 crc kubenswrapper[4813]: I0219 18:43:52.959299 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 19 18:43:52 crc kubenswrapper[4813]: I0219 18:43:52.970939 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx"] Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.043452 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qfbjx"] Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.044256 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-qfbjx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.046409 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.051810 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.051931 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-qfxtj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.052693 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.061264 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-qbxgb"] Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.062082 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-qbxgb" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.063986 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.086376 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-qbxgb"] Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.097973 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9e232092-adea-48a4-a349-a9574c974c6f-frr-sockets\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.098433 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9e232092-adea-48a4-a349-a9574c974c6f-metrics\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.098476 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w5s8\" (UniqueName: \"kubernetes.io/projected/9e232092-adea-48a4-a349-a9574c974c6f-kube-api-access-4w5s8\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.098498 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9e232092-adea-48a4-a349-a9574c974c6f-reloader\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.098522 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e232092-adea-48a4-a349-a9574c974c6f-metrics-certs\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.098581 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9e232092-adea-48a4-a349-a9574c974c6f-frr-conf\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.098609 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz2cx\" (UniqueName: \"kubernetes.io/projected/3d58013d-9952-4d23-ba48-6a4395eafe6d-kube-api-access-kz2cx\") pod \"frr-k8s-webhook-server-78b44bf5bb-svjhx\" (UID: \"3d58013d-9952-4d23-ba48-6a4395eafe6d\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.098647 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d58013d-9952-4d23-ba48-6a4395eafe6d-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-svjhx\" (UID: \"3d58013d-9952-4d23-ba48-6a4395eafe6d\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.098739 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9e232092-adea-48a4-a349-a9574c974c6f-frr-startup\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.199619 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9e232092-adea-48a4-a349-a9574c974c6f-metrics\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.200045 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w5s8\" (UniqueName: \"kubernetes.io/projected/9e232092-adea-48a4-a349-a9574c974c6f-kube-api-access-4w5s8\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.200005 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9e232092-adea-48a4-a349-a9574c974c6f-metrics\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.200130 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9e232092-adea-48a4-a349-a9574c974c6f-reloader\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.200149 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9e232092-adea-48a4-a349-a9574c974c6f-metrics-certs\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.200342 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9e232092-adea-48a4-a349-a9574c974c6f-reloader\") pod \"frr-k8s-4rmhj\" (UID: 
\"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.200388 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbe32660-daba-4fb2-b3e2-475f5b092ed9-metrics-certs\") pod \"speaker-qfbjx\" (UID: \"dbe32660-daba-4fb2-b3e2-475f5b092ed9\") " pod="metallb-system/speaker-qfbjx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.200405 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zzgq\" (UniqueName: \"kubernetes.io/projected/dbe32660-daba-4fb2-b3e2-475f5b092ed9-kube-api-access-4zzgq\") pod \"speaker-qfbjx\" (UID: \"dbe32660-daba-4fb2-b3e2-475f5b092ed9\") " pod="metallb-system/speaker-qfbjx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.200463 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9e232092-adea-48a4-a349-a9574c974c6f-frr-conf\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.200485 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/534e8bf3-c43f-4d6b-9340-9a1f876a2697-cert\") pod \"controller-69bbfbf88f-qbxgb\" (UID: \"534e8bf3-c43f-4d6b-9340-9a1f876a2697\") " pod="metallb-system/controller-69bbfbf88f-qbxgb" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.200694 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz2cx\" (UniqueName: \"kubernetes.io/projected/3d58013d-9952-4d23-ba48-6a4395eafe6d-kube-api-access-kz2cx\") pod \"frr-k8s-webhook-server-78b44bf5bb-svjhx\" (UID: \"3d58013d-9952-4d23-ba48-6a4395eafe6d\") " 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.200659 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9e232092-adea-48a4-a349-a9574c974c6f-frr-conf\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.200978 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/534e8bf3-c43f-4d6b-9340-9a1f876a2697-metrics-certs\") pod \"controller-69bbfbf88f-qbxgb\" (UID: \"534e8bf3-c43f-4d6b-9340-9a1f876a2697\") " pod="metallb-system/controller-69bbfbf88f-qbxgb" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.201012 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d58013d-9952-4d23-ba48-6a4395eafe6d-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-svjhx\" (UID: \"3d58013d-9952-4d23-ba48-6a4395eafe6d\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.201031 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9e232092-adea-48a4-a349-a9574c974c6f-frr-startup\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.201061 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9e232092-adea-48a4-a349-a9574c974c6f-frr-sockets\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.201080 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dbe32660-daba-4fb2-b3e2-475f5b092ed9-metallb-excludel2\") pod \"speaker-qfbjx\" (UID: \"dbe32660-daba-4fb2-b3e2-475f5b092ed9\") " pod="metallb-system/speaker-qfbjx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.201296 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9e232092-adea-48a4-a349-a9574c974c6f-frr-sockets\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.201325 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbe32660-daba-4fb2-b3e2-475f5b092ed9-memberlist\") pod \"speaker-qfbjx\" (UID: \"dbe32660-daba-4fb2-b3e2-475f5b092ed9\") " pod="metallb-system/speaker-qfbjx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.201400 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxzg5\" (UniqueName: \"kubernetes.io/projected/534e8bf3-c43f-4d6b-9340-9a1f876a2697-kube-api-access-hxzg5\") pod \"controller-69bbfbf88f-qbxgb\" (UID: \"534e8bf3-c43f-4d6b-9340-9a1f876a2697\") " pod="metallb-system/controller-69bbfbf88f-qbxgb" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.202070 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9e232092-adea-48a4-a349-a9574c974c6f-frr-startup\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.205806 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/9e232092-adea-48a4-a349-a9574c974c6f-metrics-certs\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.205812 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d58013d-9952-4d23-ba48-6a4395eafe6d-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-svjhx\" (UID: \"3d58013d-9952-4d23-ba48-6a4395eafe6d\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.218604 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz2cx\" (UniqueName: \"kubernetes.io/projected/3d58013d-9952-4d23-ba48-6a4395eafe6d-kube-api-access-kz2cx\") pod \"frr-k8s-webhook-server-78b44bf5bb-svjhx\" (UID: \"3d58013d-9952-4d23-ba48-6a4395eafe6d\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.222601 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w5s8\" (UniqueName: \"kubernetes.io/projected/9e232092-adea-48a4-a349-a9574c974c6f-kube-api-access-4w5s8\") pod \"frr-k8s-4rmhj\" (UID: \"9e232092-adea-48a4-a349-a9574c974c6f\") " pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.257063 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.272756 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.302707 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/534e8bf3-c43f-4d6b-9340-9a1f876a2697-cert\") pod \"controller-69bbfbf88f-qbxgb\" (UID: \"534e8bf3-c43f-4d6b-9340-9a1f876a2697\") " pod="metallb-system/controller-69bbfbf88f-qbxgb" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.303026 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/534e8bf3-c43f-4d6b-9340-9a1f876a2697-metrics-certs\") pod \"controller-69bbfbf88f-qbxgb\" (UID: \"534e8bf3-c43f-4d6b-9340-9a1f876a2697\") " pod="metallb-system/controller-69bbfbf88f-qbxgb" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.303142 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dbe32660-daba-4fb2-b3e2-475f5b092ed9-metallb-excludel2\") pod \"speaker-qfbjx\" (UID: \"dbe32660-daba-4fb2-b3e2-475f5b092ed9\") " pod="metallb-system/speaker-qfbjx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.303238 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbe32660-daba-4fb2-b3e2-475f5b092ed9-memberlist\") pod \"speaker-qfbjx\" (UID: \"dbe32660-daba-4fb2-b3e2-475f5b092ed9\") " pod="metallb-system/speaker-qfbjx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.303368 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxzg5\" (UniqueName: \"kubernetes.io/projected/534e8bf3-c43f-4d6b-9340-9a1f876a2697-kube-api-access-hxzg5\") pod \"controller-69bbfbf88f-qbxgb\" (UID: \"534e8bf3-c43f-4d6b-9340-9a1f876a2697\") " pod="metallb-system/controller-69bbfbf88f-qbxgb" Feb 19 18:43:53 crc 
kubenswrapper[4813]: I0219 18:43:53.303502 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbe32660-daba-4fb2-b3e2-475f5b092ed9-metrics-certs\") pod \"speaker-qfbjx\" (UID: \"dbe32660-daba-4fb2-b3e2-475f5b092ed9\") " pod="metallb-system/speaker-qfbjx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.303597 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zzgq\" (UniqueName: \"kubernetes.io/projected/dbe32660-daba-4fb2-b3e2-475f5b092ed9-kube-api-access-4zzgq\") pod \"speaker-qfbjx\" (UID: \"dbe32660-daba-4fb2-b3e2-475f5b092ed9\") " pod="metallb-system/speaker-qfbjx" Feb 19 18:43:53 crc kubenswrapper[4813]: E0219 18:43:53.303258 4813 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 19 18:43:53 crc kubenswrapper[4813]: E0219 18:43:53.303841 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/534e8bf3-c43f-4d6b-9340-9a1f876a2697-metrics-certs podName:534e8bf3-c43f-4d6b-9340-9a1f876a2697 nodeName:}" failed. No retries permitted until 2026-02-19 18:43:53.803817749 +0000 UTC m=+853.029258290 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/534e8bf3-c43f-4d6b-9340-9a1f876a2697-metrics-certs") pod "controller-69bbfbf88f-qbxgb" (UID: "534e8bf3-c43f-4d6b-9340-9a1f876a2697") : secret "controller-certs-secret" not found Feb 19 18:43:53 crc kubenswrapper[4813]: E0219 18:43:53.303304 4813 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.303894 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dbe32660-daba-4fb2-b3e2-475f5b092ed9-metallb-excludel2\") pod \"speaker-qfbjx\" (UID: \"dbe32660-daba-4fb2-b3e2-475f5b092ed9\") " pod="metallb-system/speaker-qfbjx" Feb 19 18:43:53 crc kubenswrapper[4813]: E0219 18:43:53.304086 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbe32660-daba-4fb2-b3e2-475f5b092ed9-memberlist podName:dbe32660-daba-4fb2-b3e2-475f5b092ed9 nodeName:}" failed. No retries permitted until 2026-02-19 18:43:53.804027985 +0000 UTC m=+853.029468526 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/dbe32660-daba-4fb2-b3e2-475f5b092ed9-memberlist") pod "speaker-qfbjx" (UID: "dbe32660-daba-4fb2-b3e2-475f5b092ed9") : secret "metallb-memberlist" not found Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.306196 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.310242 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbe32660-daba-4fb2-b3e2-475f5b092ed9-metrics-certs\") pod \"speaker-qfbjx\" (UID: \"dbe32660-daba-4fb2-b3e2-475f5b092ed9\") " pod="metallb-system/speaker-qfbjx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.323029 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zzgq\" (UniqueName: \"kubernetes.io/projected/dbe32660-daba-4fb2-b3e2-475f5b092ed9-kube-api-access-4zzgq\") pod \"speaker-qfbjx\" (UID: \"dbe32660-daba-4fb2-b3e2-475f5b092ed9\") " pod="metallb-system/speaker-qfbjx" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.323107 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/534e8bf3-c43f-4d6b-9340-9a1f876a2697-cert\") pod \"controller-69bbfbf88f-qbxgb\" (UID: \"534e8bf3-c43f-4d6b-9340-9a1f876a2697\") " pod="metallb-system/controller-69bbfbf88f-qbxgb" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.323805 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxzg5\" (UniqueName: \"kubernetes.io/projected/534e8bf3-c43f-4d6b-9340-9a1f876a2697-kube-api-access-hxzg5\") pod \"controller-69bbfbf88f-qbxgb\" (UID: \"534e8bf3-c43f-4d6b-9340-9a1f876a2697\") " pod="metallb-system/controller-69bbfbf88f-qbxgb" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.466015 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/frr-k8s-4rmhj" event={"ID":"9e232092-adea-48a4-a349-a9574c974c6f","Type":"ContainerStarted","Data":"78d787524c5c81c15a8ecc62f31cbbc45b587d7062b6f8ed38e5bbc2a3ad5c7a"} Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.518143 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx"] Feb 19 18:43:53 crc kubenswrapper[4813]: W0219 18:43:53.527688 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d58013d_9952_4d23_ba48_6a4395eafe6d.slice/crio-b2f7710013f5a10c98287aacc9e956696a023e19f9e17fe329dace65e87b1a10 WatchSource:0}: Error finding container b2f7710013f5a10c98287aacc9e956696a023e19f9e17fe329dace65e87b1a10: Status 404 returned error can't find the container with id b2f7710013f5a10c98287aacc9e956696a023e19f9e17fe329dace65e87b1a10 Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.810145 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/534e8bf3-c43f-4d6b-9340-9a1f876a2697-metrics-certs\") pod \"controller-69bbfbf88f-qbxgb\" (UID: \"534e8bf3-c43f-4d6b-9340-9a1f876a2697\") " pod="metallb-system/controller-69bbfbf88f-qbxgb" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.810694 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbe32660-daba-4fb2-b3e2-475f5b092ed9-memberlist\") pod \"speaker-qfbjx\" (UID: \"dbe32660-daba-4fb2-b3e2-475f5b092ed9\") " pod="metallb-system/speaker-qfbjx" Feb 19 18:43:53 crc kubenswrapper[4813]: E0219 18:43:53.810906 4813 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 18:43:53 crc kubenswrapper[4813]: E0219 18:43:53.811063 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/dbe32660-daba-4fb2-b3e2-475f5b092ed9-memberlist podName:dbe32660-daba-4fb2-b3e2-475f5b092ed9 nodeName:}" failed. No retries permitted until 2026-02-19 18:43:54.811036818 +0000 UTC m=+854.036477399 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/dbe32660-daba-4fb2-b3e2-475f5b092ed9-memberlist") pod "speaker-qfbjx" (UID: "dbe32660-daba-4fb2-b3e2-475f5b092ed9") : secret "metallb-memberlist" not found Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.816025 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/534e8bf3-c43f-4d6b-9340-9a1f876a2697-metrics-certs\") pod \"controller-69bbfbf88f-qbxgb\" (UID: \"534e8bf3-c43f-4d6b-9340-9a1f876a2697\") " pod="metallb-system/controller-69bbfbf88f-qbxgb" Feb 19 18:43:53 crc kubenswrapper[4813]: I0219 18:43:53.982859 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-qbxgb" Feb 19 18:43:54 crc kubenswrapper[4813]: I0219 18:43:54.460731 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-qbxgb"] Feb 19 18:43:54 crc kubenswrapper[4813]: W0219 18:43:54.470450 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod534e8bf3_c43f_4d6b_9340_9a1f876a2697.slice/crio-3613871fa068035bc5d28ec9e1bda44adf22b48415a2f5f3fd00580132d4a188 WatchSource:0}: Error finding container 3613871fa068035bc5d28ec9e1bda44adf22b48415a2f5f3fd00580132d4a188: Status 404 returned error can't find the container with id 3613871fa068035bc5d28ec9e1bda44adf22b48415a2f5f3fd00580132d4a188 Feb 19 18:43:54 crc kubenswrapper[4813]: I0219 18:43:54.474137 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx" 
event={"ID":"3d58013d-9952-4d23-ba48-6a4395eafe6d","Type":"ContainerStarted","Data":"b2f7710013f5a10c98287aacc9e956696a023e19f9e17fe329dace65e87b1a10"} Feb 19 18:43:54 crc kubenswrapper[4813]: I0219 18:43:54.825611 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbe32660-daba-4fb2-b3e2-475f5b092ed9-memberlist\") pod \"speaker-qfbjx\" (UID: \"dbe32660-daba-4fb2-b3e2-475f5b092ed9\") " pod="metallb-system/speaker-qfbjx" Feb 19 18:43:54 crc kubenswrapper[4813]: I0219 18:43:54.833807 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbe32660-daba-4fb2-b3e2-475f5b092ed9-memberlist\") pod \"speaker-qfbjx\" (UID: \"dbe32660-daba-4fb2-b3e2-475f5b092ed9\") " pod="metallb-system/speaker-qfbjx" Feb 19 18:43:54 crc kubenswrapper[4813]: I0219 18:43:54.859622 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qfbjx" Feb 19 18:43:54 crc kubenswrapper[4813]: W0219 18:43:54.893031 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbe32660_daba_4fb2_b3e2_475f5b092ed9.slice/crio-63952b49e7ec209563cc0c08c175171b3c43826ec366a29c70adc202b4355c2d WatchSource:0}: Error finding container 63952b49e7ec209563cc0c08c175171b3c43826ec366a29c70adc202b4355c2d: Status 404 returned error can't find the container with id 63952b49e7ec209563cc0c08c175171b3c43826ec366a29c70adc202b4355c2d Feb 19 18:43:55 crc kubenswrapper[4813]: I0219 18:43:55.531784 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-qbxgb" event={"ID":"534e8bf3-c43f-4d6b-9340-9a1f876a2697","Type":"ContainerStarted","Data":"1a3617f5680e914ea2af2ae8d4196aa6d42f5987d51d293a7fdb331a3fbee698"} Feb 19 18:43:55 crc kubenswrapper[4813]: I0219 18:43:55.546465 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/controller-69bbfbf88f-qbxgb" event={"ID":"534e8bf3-c43f-4d6b-9340-9a1f876a2697","Type":"ContainerStarted","Data":"0c162d918d223f1d895e5512f222d43e874ecb6816dee9ae6bb1eec4997d40f9"} Feb 19 18:43:55 crc kubenswrapper[4813]: I0219 18:43:55.546544 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-qbxgb" Feb 19 18:43:55 crc kubenswrapper[4813]: I0219 18:43:55.538541 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-qbxgb" podStartSLOduration=2.5385157019999998 podStartE2EDuration="2.538515702s" podCreationTimestamp="2026-02-19 18:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:43:55.526177312 +0000 UTC m=+854.751617853" watchObservedRunningTime="2026-02-19 18:43:55.538515702 +0000 UTC m=+854.763956263" Feb 19 18:43:55 crc kubenswrapper[4813]: I0219 18:43:55.546863 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-qbxgb" event={"ID":"534e8bf3-c43f-4d6b-9340-9a1f876a2697","Type":"ContainerStarted","Data":"3613871fa068035bc5d28ec9e1bda44adf22b48415a2f5f3fd00580132d4a188"} Feb 19 18:43:55 crc kubenswrapper[4813]: I0219 18:43:55.547117 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qfbjx" event={"ID":"dbe32660-daba-4fb2-b3e2-475f5b092ed9","Type":"ContainerStarted","Data":"76a6746c1d88a26c58b0ca7deb5e57da905f7a515d3d6d0347923b2614d973b0"} Feb 19 18:43:55 crc kubenswrapper[4813]: I0219 18:43:55.547148 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qfbjx" event={"ID":"dbe32660-daba-4fb2-b3e2-475f5b092ed9","Type":"ContainerStarted","Data":"f6db49f44b3a7b270502ed04bbafac03936c8bffb9e5a90c5cb276febaf7bf51"} Feb 19 18:43:55 crc kubenswrapper[4813]: I0219 18:43:55.547158 4813 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="metallb-system/speaker-qfbjx" event={"ID":"dbe32660-daba-4fb2-b3e2-475f5b092ed9","Type":"ContainerStarted","Data":"63952b49e7ec209563cc0c08c175171b3c43826ec366a29c70adc202b4355c2d"} Feb 19 18:43:56 crc kubenswrapper[4813]: I0219 18:43:56.512663 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qfbjx" Feb 19 18:44:01 crc kubenswrapper[4813]: I0219 18:44:01.498994 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qfbjx" podStartSLOduration=8.498942562 podStartE2EDuration="8.498942562s" podCreationTimestamp="2026-02-19 18:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:43:56.538797324 +0000 UTC m=+855.764237865" watchObservedRunningTime="2026-02-19 18:44:01.498942562 +0000 UTC m=+860.724383143" Feb 19 18:44:01 crc kubenswrapper[4813]: I0219 18:44:01.541914 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx" event={"ID":"3d58013d-9952-4d23-ba48-6a4395eafe6d","Type":"ContainerStarted","Data":"955f93cabfe95ecc0a110de3ad26d4444802f3fb39b49266de835e65a5ae9c2d"} Feb 19 18:44:01 crc kubenswrapper[4813]: I0219 18:44:01.542246 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx" Feb 19 18:44:01 crc kubenswrapper[4813]: I0219 18:44:01.544351 4813 generic.go:334] "Generic (PLEG): container finished" podID="9e232092-adea-48a4-a349-a9574c974c6f" containerID="35d81628efb923bc3af6e674d6d6d2a4f2294d8d0c4f42de1b0b08a2946b1c71" exitCode=0 Feb 19 18:44:01 crc kubenswrapper[4813]: I0219 18:44:01.544396 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4rmhj" 
event={"ID":"9e232092-adea-48a4-a349-a9574c974c6f","Type":"ContainerDied","Data":"35d81628efb923bc3af6e674d6d6d2a4f2294d8d0c4f42de1b0b08a2946b1c71"} Feb 19 18:44:01 crc kubenswrapper[4813]: I0219 18:44:01.564831 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx" podStartSLOduration=2.647702212 podStartE2EDuration="9.564808677s" podCreationTimestamp="2026-02-19 18:43:52 +0000 UTC" firstStartedPulling="2026-02-19 18:43:53.531886498 +0000 UTC m=+852.757327069" lastFinishedPulling="2026-02-19 18:44:00.448992993 +0000 UTC m=+859.674433534" observedRunningTime="2026-02-19 18:44:01.560276038 +0000 UTC m=+860.785716609" watchObservedRunningTime="2026-02-19 18:44:01.564808677 +0000 UTC m=+860.790249238" Feb 19 18:44:02 crc kubenswrapper[4813]: I0219 18:44:02.553296 4813 generic.go:334] "Generic (PLEG): container finished" podID="9e232092-adea-48a4-a349-a9574c974c6f" containerID="d24bf95754260c92d8b6f3a47659075172a40f4db78771731458a444f5fc9443" exitCode=0 Feb 19 18:44:02 crc kubenswrapper[4813]: I0219 18:44:02.554593 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4rmhj" event={"ID":"9e232092-adea-48a4-a349-a9574c974c6f","Type":"ContainerDied","Data":"d24bf95754260c92d8b6f3a47659075172a40f4db78771731458a444f5fc9443"} Feb 19 18:44:03 crc kubenswrapper[4813]: I0219 18:44:03.560735 4813 generic.go:334] "Generic (PLEG): container finished" podID="9e232092-adea-48a4-a349-a9574c974c6f" containerID="9b76742387780c5fc69b6651f183447a47b0435193b6e2a0d0859a7d6deac57e" exitCode=0 Feb 19 18:44:03 crc kubenswrapper[4813]: I0219 18:44:03.560773 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4rmhj" event={"ID":"9e232092-adea-48a4-a349-a9574c974c6f","Type":"ContainerDied","Data":"9b76742387780c5fc69b6651f183447a47b0435193b6e2a0d0859a7d6deac57e"} Feb 19 18:44:04 crc kubenswrapper[4813]: I0219 18:44:04.571324 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-4rmhj" event={"ID":"9e232092-adea-48a4-a349-a9574c974c6f","Type":"ContainerStarted","Data":"5528c3a91c1ed675885aef9d998927017a01779f36454d87bb9f751efca002eb"} Feb 19 18:44:04 crc kubenswrapper[4813]: I0219 18:44:04.571657 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4rmhj" event={"ID":"9e232092-adea-48a4-a349-a9574c974c6f","Type":"ContainerStarted","Data":"7ec9c028de36a48988399f4ed8510ac04526e45b0bb8d0ff5f81d17be4317de0"} Feb 19 18:44:04 crc kubenswrapper[4813]: I0219 18:44:04.571672 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4rmhj" event={"ID":"9e232092-adea-48a4-a349-a9574c974c6f","Type":"ContainerStarted","Data":"c066e1fc0b33b9a8acc08f53ce7421611f10927554d0833cd484c5b4e89c4d3c"} Feb 19 18:44:04 crc kubenswrapper[4813]: I0219 18:44:04.571684 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4rmhj" event={"ID":"9e232092-adea-48a4-a349-a9574c974c6f","Type":"ContainerStarted","Data":"d48d384b69b1a5e339aaf816374eeecab5ba3d0bef4fdb5ecf918f93b70f2166"} Feb 19 18:44:05 crc kubenswrapper[4813]: I0219 18:44:05.586474 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:44:05 crc kubenswrapper[4813]: I0219 18:44:05.586849 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4rmhj" event={"ID":"9e232092-adea-48a4-a349-a9574c974c6f","Type":"ContainerStarted","Data":"cdb1d896326d3786caf3b29e48011dadd1f43ef5e5f72ecf9a01cc2637b49bff"} Feb 19 18:44:05 crc kubenswrapper[4813]: I0219 18:44:05.586877 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4rmhj" event={"ID":"9e232092-adea-48a4-a349-a9574c974c6f","Type":"ContainerStarted","Data":"4e99219315ba6ded11f147c5abf89e0a2d0303ff646170928e35d3e9350ac562"} Feb 19 18:44:08 crc kubenswrapper[4813]: I0219 18:44:08.257516 4813 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:44:08 crc kubenswrapper[4813]: I0219 18:44:08.322743 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:44:08 crc kubenswrapper[4813]: I0219 18:44:08.368399 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4rmhj" podStartSLOduration=9.35731026 podStartE2EDuration="16.368356302s" podCreationTimestamp="2026-02-19 18:43:52 +0000 UTC" firstStartedPulling="2026-02-19 18:43:53.459746471 +0000 UTC m=+852.685187012" lastFinishedPulling="2026-02-19 18:44:00.470792513 +0000 UTC m=+859.696233054" observedRunningTime="2026-02-19 18:44:05.614627496 +0000 UTC m=+864.840068037" watchObservedRunningTime="2026-02-19 18:44:08.368356302 +0000 UTC m=+867.593796893" Feb 19 18:44:13 crc kubenswrapper[4813]: I0219 18:44:13.271720 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4rmhj" Feb 19 18:44:13 crc kubenswrapper[4813]: I0219 18:44:13.280226 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-svjhx" Feb 19 18:44:13 crc kubenswrapper[4813]: I0219 18:44:13.987335 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-qbxgb" Feb 19 18:44:14 crc kubenswrapper[4813]: I0219 18:44:14.863456 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qfbjx" Feb 19 18:44:16 crc kubenswrapper[4813]: I0219 18:44:16.193502 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l"] Feb 19 18:44:16 crc kubenswrapper[4813]: I0219 18:44:16.194806 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" Feb 19 18:44:16 crc kubenswrapper[4813]: I0219 18:44:16.196666 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 18:44:16 crc kubenswrapper[4813]: I0219 18:44:16.206873 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l"] Feb 19 18:44:16 crc kubenswrapper[4813]: I0219 18:44:16.208438 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ab16436-c636-42ed-a497-0ab36f9a9074-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l\" (UID: \"7ab16436-c636-42ed-a497-0ab36f9a9074\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" Feb 19 18:44:16 crc kubenswrapper[4813]: I0219 18:44:16.208577 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ab16436-c636-42ed-a497-0ab36f9a9074-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l\" (UID: \"7ab16436-c636-42ed-a497-0ab36f9a9074\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" Feb 19 18:44:16 crc kubenswrapper[4813]: I0219 18:44:16.208695 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkf2z\" (UniqueName: \"kubernetes.io/projected/7ab16436-c636-42ed-a497-0ab36f9a9074-kube-api-access-gkf2z\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l\" (UID: \"7ab16436-c636-42ed-a497-0ab36f9a9074\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" Feb 19 18:44:16 crc kubenswrapper[4813]: 
I0219 18:44:16.309534 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkf2z\" (UniqueName: \"kubernetes.io/projected/7ab16436-c636-42ed-a497-0ab36f9a9074-kube-api-access-gkf2z\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l\" (UID: \"7ab16436-c636-42ed-a497-0ab36f9a9074\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" Feb 19 18:44:16 crc kubenswrapper[4813]: I0219 18:44:16.309614 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ab16436-c636-42ed-a497-0ab36f9a9074-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l\" (UID: \"7ab16436-c636-42ed-a497-0ab36f9a9074\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" Feb 19 18:44:16 crc kubenswrapper[4813]: I0219 18:44:16.309666 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ab16436-c636-42ed-a497-0ab36f9a9074-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l\" (UID: \"7ab16436-c636-42ed-a497-0ab36f9a9074\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" Feb 19 18:44:16 crc kubenswrapper[4813]: I0219 18:44:16.310178 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ab16436-c636-42ed-a497-0ab36f9a9074-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l\" (UID: \"7ab16436-c636-42ed-a497-0ab36f9a9074\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" Feb 19 18:44:16 crc kubenswrapper[4813]: I0219 18:44:16.310311 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7ab16436-c636-42ed-a497-0ab36f9a9074-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l\" (UID: \"7ab16436-c636-42ed-a497-0ab36f9a9074\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" Feb 19 18:44:16 crc kubenswrapper[4813]: I0219 18:44:16.329396 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkf2z\" (UniqueName: \"kubernetes.io/projected/7ab16436-c636-42ed-a497-0ab36f9a9074-kube-api-access-gkf2z\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l\" (UID: \"7ab16436-c636-42ed-a497-0ab36f9a9074\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" Feb 19 18:44:16 crc kubenswrapper[4813]: I0219 18:44:16.573410 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" Feb 19 18:44:17 crc kubenswrapper[4813]: I0219 18:44:17.003547 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l"] Feb 19 18:44:17 crc kubenswrapper[4813]: I0219 18:44:17.663422 4813 generic.go:334] "Generic (PLEG): container finished" podID="7ab16436-c636-42ed-a497-0ab36f9a9074" containerID="16f7fc7e0c029df5d5f5548b76bfce3d346f38cd060ad9184c4baae8ac7df861" exitCode=0 Feb 19 18:44:17 crc kubenswrapper[4813]: I0219 18:44:17.663474 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" event={"ID":"7ab16436-c636-42ed-a497-0ab36f9a9074","Type":"ContainerDied","Data":"16f7fc7e0c029df5d5f5548b76bfce3d346f38cd060ad9184c4baae8ac7df861"} Feb 19 18:44:17 crc kubenswrapper[4813]: I0219 18:44:17.663793 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" event={"ID":"7ab16436-c636-42ed-a497-0ab36f9a9074","Type":"ContainerStarted","Data":"c9907b866500e2756bb98d35e54e9aa8133f9c96d884f5975801b0a770880a75"} Feb 19 18:44:27 crc kubenswrapper[4813]: I0219 18:44:27.731082 4813 generic.go:334] "Generic (PLEG): container finished" podID="7ab16436-c636-42ed-a497-0ab36f9a9074" containerID="ba98ef51f128bb28934cb48f77d5e1dd4de21e88a123a7ffe7747d7bcf01d8da" exitCode=0 Feb 19 18:44:27 crc kubenswrapper[4813]: I0219 18:44:27.731165 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" event={"ID":"7ab16436-c636-42ed-a497-0ab36f9a9074","Type":"ContainerDied","Data":"ba98ef51f128bb28934cb48f77d5e1dd4de21e88a123a7ffe7747d7bcf01d8da"} Feb 19 18:44:28 crc kubenswrapper[4813]: I0219 18:44:28.744049 4813 generic.go:334] "Generic (PLEG): container finished" podID="7ab16436-c636-42ed-a497-0ab36f9a9074" containerID="35e09622ce08d915fb943c459200f1cf605099991aeede3d93adc13c8b8b6cc5" exitCode=0 Feb 19 18:44:28 crc kubenswrapper[4813]: I0219 18:44:28.744112 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" event={"ID":"7ab16436-c636-42ed-a497-0ab36f9a9074","Type":"ContainerDied","Data":"35e09622ce08d915fb943c459200f1cf605099991aeede3d93adc13c8b8b6cc5"} Feb 19 18:44:30 crc kubenswrapper[4813]: I0219 18:44:30.038711 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" Feb 19 18:44:30 crc kubenswrapper[4813]: I0219 18:44:30.109371 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ab16436-c636-42ed-a497-0ab36f9a9074-util\") pod \"7ab16436-c636-42ed-a497-0ab36f9a9074\" (UID: \"7ab16436-c636-42ed-a497-0ab36f9a9074\") " Feb 19 18:44:30 crc kubenswrapper[4813]: I0219 18:44:30.109545 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ab16436-c636-42ed-a497-0ab36f9a9074-bundle\") pod \"7ab16436-c636-42ed-a497-0ab36f9a9074\" (UID: \"7ab16436-c636-42ed-a497-0ab36f9a9074\") " Feb 19 18:44:30 crc kubenswrapper[4813]: I0219 18:44:30.109618 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkf2z\" (UniqueName: \"kubernetes.io/projected/7ab16436-c636-42ed-a497-0ab36f9a9074-kube-api-access-gkf2z\") pod \"7ab16436-c636-42ed-a497-0ab36f9a9074\" (UID: \"7ab16436-c636-42ed-a497-0ab36f9a9074\") " Feb 19 18:44:30 crc kubenswrapper[4813]: I0219 18:44:30.111434 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ab16436-c636-42ed-a497-0ab36f9a9074-bundle" (OuterVolumeSpecName: "bundle") pod "7ab16436-c636-42ed-a497-0ab36f9a9074" (UID: "7ab16436-c636-42ed-a497-0ab36f9a9074"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:44:30 crc kubenswrapper[4813]: I0219 18:44:30.118165 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab16436-c636-42ed-a497-0ab36f9a9074-kube-api-access-gkf2z" (OuterVolumeSpecName: "kube-api-access-gkf2z") pod "7ab16436-c636-42ed-a497-0ab36f9a9074" (UID: "7ab16436-c636-42ed-a497-0ab36f9a9074"). InnerVolumeSpecName "kube-api-access-gkf2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:44:30 crc kubenswrapper[4813]: I0219 18:44:30.132576 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ab16436-c636-42ed-a497-0ab36f9a9074-util" (OuterVolumeSpecName: "util") pod "7ab16436-c636-42ed-a497-0ab36f9a9074" (UID: "7ab16436-c636-42ed-a497-0ab36f9a9074"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:44:30 crc kubenswrapper[4813]: I0219 18:44:30.211210 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ab16436-c636-42ed-a497-0ab36f9a9074-util\") on node \"crc\" DevicePath \"\"" Feb 19 18:44:30 crc kubenswrapper[4813]: I0219 18:44:30.211264 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ab16436-c636-42ed-a497-0ab36f9a9074-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:44:30 crc kubenswrapper[4813]: I0219 18:44:30.211283 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkf2z\" (UniqueName: \"kubernetes.io/projected/7ab16436-c636-42ed-a497-0ab36f9a9074-kube-api-access-gkf2z\") on node \"crc\" DevicePath \"\"" Feb 19 18:44:30 crc kubenswrapper[4813]: I0219 18:44:30.761150 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" event={"ID":"7ab16436-c636-42ed-a497-0ab36f9a9074","Type":"ContainerDied","Data":"c9907b866500e2756bb98d35e54e9aa8133f9c96d884f5975801b0a770880a75"} Feb 19 18:44:30 crc kubenswrapper[4813]: I0219 18:44:30.761226 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9907b866500e2756bb98d35e54e9aa8133f9c96d884f5975801b0a770880a75" Feb 19 18:44:30 crc kubenswrapper[4813]: I0219 18:44:30.761251 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l" Feb 19 18:44:34 crc kubenswrapper[4813]: I0219 18:44:34.830665 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hrs2k"] Feb 19 18:44:34 crc kubenswrapper[4813]: E0219 18:44:34.831588 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab16436-c636-42ed-a497-0ab36f9a9074" containerName="extract" Feb 19 18:44:34 crc kubenswrapper[4813]: I0219 18:44:34.831609 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab16436-c636-42ed-a497-0ab36f9a9074" containerName="extract" Feb 19 18:44:34 crc kubenswrapper[4813]: E0219 18:44:34.831631 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab16436-c636-42ed-a497-0ab36f9a9074" containerName="pull" Feb 19 18:44:34 crc kubenswrapper[4813]: I0219 18:44:34.831645 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab16436-c636-42ed-a497-0ab36f9a9074" containerName="pull" Feb 19 18:44:34 crc kubenswrapper[4813]: E0219 18:44:34.831659 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab16436-c636-42ed-a497-0ab36f9a9074" containerName="util" Feb 19 18:44:34 crc kubenswrapper[4813]: I0219 18:44:34.831671 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab16436-c636-42ed-a497-0ab36f9a9074" containerName="util" Feb 19 18:44:34 crc kubenswrapper[4813]: I0219 18:44:34.831888 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab16436-c636-42ed-a497-0ab36f9a9074" containerName="extract" Feb 19 18:44:34 crc kubenswrapper[4813]: I0219 18:44:34.832654 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hrs2k" Feb 19 18:44:34 crc kubenswrapper[4813]: I0219 18:44:34.835004 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 19 18:44:34 crc kubenswrapper[4813]: I0219 18:44:34.836331 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-6x9g2" Feb 19 18:44:34 crc kubenswrapper[4813]: I0219 18:44:34.856218 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 19 18:44:34 crc kubenswrapper[4813]: I0219 18:44:34.859172 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hrs2k"] Feb 19 18:44:34 crc kubenswrapper[4813]: I0219 18:44:34.977305 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbcns\" (UniqueName: \"kubernetes.io/projected/7d3f2076-1867-4c5d-b4b4-f20576f8642f-kube-api-access-wbcns\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hrs2k\" (UID: \"7d3f2076-1867-4c5d-b4b4-f20576f8642f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hrs2k" Feb 19 18:44:34 crc kubenswrapper[4813]: I0219 18:44:34.977391 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d3f2076-1867-4c5d-b4b4-f20576f8642f-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hrs2k\" (UID: \"7d3f2076-1867-4c5d-b4b4-f20576f8642f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hrs2k" Feb 19 18:44:35 crc kubenswrapper[4813]: I0219 18:44:35.078691 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wbcns\" (UniqueName: \"kubernetes.io/projected/7d3f2076-1867-4c5d-b4b4-f20576f8642f-kube-api-access-wbcns\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hrs2k\" (UID: \"7d3f2076-1867-4c5d-b4b4-f20576f8642f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hrs2k" Feb 19 18:44:35 crc kubenswrapper[4813]: I0219 18:44:35.078792 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d3f2076-1867-4c5d-b4b4-f20576f8642f-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hrs2k\" (UID: \"7d3f2076-1867-4c5d-b4b4-f20576f8642f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hrs2k" Feb 19 18:44:35 crc kubenswrapper[4813]: I0219 18:44:35.079366 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7d3f2076-1867-4c5d-b4b4-f20576f8642f-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hrs2k\" (UID: \"7d3f2076-1867-4c5d-b4b4-f20576f8642f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hrs2k" Feb 19 18:44:35 crc kubenswrapper[4813]: I0219 18:44:35.108153 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbcns\" (UniqueName: \"kubernetes.io/projected/7d3f2076-1867-4c5d-b4b4-f20576f8642f-kube-api-access-wbcns\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hrs2k\" (UID: \"7d3f2076-1867-4c5d-b4b4-f20576f8642f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hrs2k" Feb 19 18:44:35 crc kubenswrapper[4813]: I0219 18:44:35.153599 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hrs2k" Feb 19 18:44:37 crc kubenswrapper[4813]: I0219 18:44:37.247507 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hrs2k"] Feb 19 18:44:38 crc kubenswrapper[4813]: I0219 18:44:38.209160 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hrs2k" event={"ID":"7d3f2076-1867-4c5d-b4b4-f20576f8642f","Type":"ContainerStarted","Data":"8cf880b770423b96aa05bf8ec86b03d4e37b54e48ac987936b709d3d2971cbf8"} Feb 19 18:44:41 crc kubenswrapper[4813]: I0219 18:44:41.229946 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hrs2k" event={"ID":"7d3f2076-1867-4c5d-b4b4-f20576f8642f","Type":"ContainerStarted","Data":"ff482a000ca03102990a2a80c4f2b6991c80aa143c8590249a2a4c73b8a4c3ab"} Feb 19 18:44:41 crc kubenswrapper[4813]: I0219 18:44:41.265517 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hrs2k" podStartSLOduration=3.813419488 podStartE2EDuration="7.265491052s" podCreationTimestamp="2026-02-19 18:44:34 +0000 UTC" firstStartedPulling="2026-02-19 18:44:37.262025441 +0000 UTC m=+896.487465982" lastFinishedPulling="2026-02-19 18:44:40.714097015 +0000 UTC m=+899.939537546" observedRunningTime="2026-02-19 18:44:41.260682104 +0000 UTC m=+900.486122645" watchObservedRunningTime="2026-02-19 18:44:41.265491052 +0000 UTC m=+900.490931633" Feb 19 18:44:44 crc kubenswrapper[4813]: I0219 18:44:44.093318 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-4c7n8"] Feb 19 18:44:44 crc kubenswrapper[4813]: I0219 18:44:44.095098 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-4c7n8" Feb 19 18:44:44 crc kubenswrapper[4813]: I0219 18:44:44.097626 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 19 18:44:44 crc kubenswrapper[4813]: I0219 18:44:44.097691 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 19 18:44:44 crc kubenswrapper[4813]: I0219 18:44:44.098184 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-8mzdl" Feb 19 18:44:44 crc kubenswrapper[4813]: I0219 18:44:44.100488 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5z4p\" (UniqueName: \"kubernetes.io/projected/8beeb97a-17d6-4a65-9818-e7488aa9e29b-kube-api-access-c5z4p\") pod \"cert-manager-webhook-6888856db4-4c7n8\" (UID: \"8beeb97a-17d6-4a65-9818-e7488aa9e29b\") " pod="cert-manager/cert-manager-webhook-6888856db4-4c7n8" Feb 19 18:44:44 crc kubenswrapper[4813]: I0219 18:44:44.100662 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8beeb97a-17d6-4a65-9818-e7488aa9e29b-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-4c7n8\" (UID: \"8beeb97a-17d6-4a65-9818-e7488aa9e29b\") " pod="cert-manager/cert-manager-webhook-6888856db4-4c7n8" Feb 19 18:44:44 crc kubenswrapper[4813]: I0219 18:44:44.111770 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-4c7n8"] Feb 19 18:44:44 crc kubenswrapper[4813]: I0219 18:44:44.205081 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5z4p\" (UniqueName: \"kubernetes.io/projected/8beeb97a-17d6-4a65-9818-e7488aa9e29b-kube-api-access-c5z4p\") pod \"cert-manager-webhook-6888856db4-4c7n8\" (UID: 
\"8beeb97a-17d6-4a65-9818-e7488aa9e29b\") " pod="cert-manager/cert-manager-webhook-6888856db4-4c7n8" Feb 19 18:44:44 crc kubenswrapper[4813]: I0219 18:44:44.205169 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8beeb97a-17d6-4a65-9818-e7488aa9e29b-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-4c7n8\" (UID: \"8beeb97a-17d6-4a65-9818-e7488aa9e29b\") " pod="cert-manager/cert-manager-webhook-6888856db4-4c7n8" Feb 19 18:44:44 crc kubenswrapper[4813]: I0219 18:44:44.232212 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5z4p\" (UniqueName: \"kubernetes.io/projected/8beeb97a-17d6-4a65-9818-e7488aa9e29b-kube-api-access-c5z4p\") pod \"cert-manager-webhook-6888856db4-4c7n8\" (UID: \"8beeb97a-17d6-4a65-9818-e7488aa9e29b\") " pod="cert-manager/cert-manager-webhook-6888856db4-4c7n8" Feb 19 18:44:44 crc kubenswrapper[4813]: I0219 18:44:44.237340 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8beeb97a-17d6-4a65-9818-e7488aa9e29b-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-4c7n8\" (UID: \"8beeb97a-17d6-4a65-9818-e7488aa9e29b\") " pod="cert-manager/cert-manager-webhook-6888856db4-4c7n8" Feb 19 18:44:44 crc kubenswrapper[4813]: I0219 18:44:44.426479 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-4c7n8" Feb 19 18:44:44 crc kubenswrapper[4813]: I0219 18:44:44.908247 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-4c7n8"] Feb 19 18:44:45 crc kubenswrapper[4813]: I0219 18:44:45.257044 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-4c7n8" event={"ID":"8beeb97a-17d6-4a65-9818-e7488aa9e29b","Type":"ContainerStarted","Data":"c4c317cbd5a2b89c676791c85290e57e1613da5661f78736d0236e373c44126c"} Feb 19 18:44:45 crc kubenswrapper[4813]: I0219 18:44:45.466130 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qnfn5"] Feb 19 18:44:45 crc kubenswrapper[4813]: I0219 18:44:45.466939 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-qnfn5" Feb 19 18:44:45 crc kubenswrapper[4813]: I0219 18:44:45.470968 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wcbwb" Feb 19 18:44:45 crc kubenswrapper[4813]: I0219 18:44:45.490333 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qnfn5"] Feb 19 18:44:45 crc kubenswrapper[4813]: I0219 18:44:45.632403 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnzjr\" (UniqueName: \"kubernetes.io/projected/42281278-8e09-4677-b354-85ae0b547f03-kube-api-access-mnzjr\") pod \"cert-manager-cainjector-5545bd876-qnfn5\" (UID: \"42281278-8e09-4677-b354-85ae0b547f03\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qnfn5" Feb 19 18:44:45 crc kubenswrapper[4813]: I0219 18:44:45.632522 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/42281278-8e09-4677-b354-85ae0b547f03-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qnfn5\" (UID: \"42281278-8e09-4677-b354-85ae0b547f03\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qnfn5" Feb 19 18:44:45 crc kubenswrapper[4813]: I0219 18:44:45.733675 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42281278-8e09-4677-b354-85ae0b547f03-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qnfn5\" (UID: \"42281278-8e09-4677-b354-85ae0b547f03\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qnfn5" Feb 19 18:44:45 crc kubenswrapper[4813]: I0219 18:44:45.733723 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnzjr\" (UniqueName: \"kubernetes.io/projected/42281278-8e09-4677-b354-85ae0b547f03-kube-api-access-mnzjr\") pod \"cert-manager-cainjector-5545bd876-qnfn5\" (UID: \"42281278-8e09-4677-b354-85ae0b547f03\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qnfn5" Feb 19 18:44:45 crc kubenswrapper[4813]: I0219 18:44:45.753038 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42281278-8e09-4677-b354-85ae0b547f03-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qnfn5\" (UID: \"42281278-8e09-4677-b354-85ae0b547f03\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qnfn5" Feb 19 18:44:45 crc kubenswrapper[4813]: I0219 18:44:45.753147 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnzjr\" (UniqueName: \"kubernetes.io/projected/42281278-8e09-4677-b354-85ae0b547f03-kube-api-access-mnzjr\") pod \"cert-manager-cainjector-5545bd876-qnfn5\" (UID: \"42281278-8e09-4677-b354-85ae0b547f03\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qnfn5" Feb 19 18:44:45 crc kubenswrapper[4813]: I0219 18:44:45.805271 4813 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-qnfn5" Feb 19 18:44:46 crc kubenswrapper[4813]: I0219 18:44:46.232563 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qnfn5"] Feb 19 18:44:46 crc kubenswrapper[4813]: W0219 18:44:46.239050 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42281278_8e09_4677_b354_85ae0b547f03.slice/crio-c5352c17aa02435b4b037559bc3422602d3ad2f577ec0984976b81bb14c5683f WatchSource:0}: Error finding container c5352c17aa02435b4b037559bc3422602d3ad2f577ec0984976b81bb14c5683f: Status 404 returned error can't find the container with id c5352c17aa02435b4b037559bc3422602d3ad2f577ec0984976b81bb14c5683f Feb 19 18:44:46 crc kubenswrapper[4813]: I0219 18:44:46.266901 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-qnfn5" event={"ID":"42281278-8e09-4677-b354-85ae0b547f03","Type":"ContainerStarted","Data":"c5352c17aa02435b4b037559bc3422602d3ad2f577ec0984976b81bb14c5683f"} Feb 19 18:44:50 crc kubenswrapper[4813]: I0219 18:44:50.293091 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-qnfn5" event={"ID":"42281278-8e09-4677-b354-85ae0b547f03","Type":"ContainerStarted","Data":"48925995491e1c4e7177903ce720eb9fc62eac985cf7790091387868a39f333b"} Feb 19 18:44:50 crc kubenswrapper[4813]: I0219 18:44:50.295470 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-4c7n8" event={"ID":"8beeb97a-17d6-4a65-9818-e7488aa9e29b","Type":"ContainerStarted","Data":"d75d6ff904608a45d69b33382be03cb0f614a11837d762d616509c54a0b4504f"} Feb 19 18:44:50 crc kubenswrapper[4813]: I0219 18:44:50.295631 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-4c7n8" 
Feb 19 18:44:50 crc kubenswrapper[4813]: I0219 18:44:50.320879 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-qnfn5" podStartSLOduration=1.661811435 podStartE2EDuration="5.320847873s" podCreationTimestamp="2026-02-19 18:44:45 +0000 UTC" firstStartedPulling="2026-02-19 18:44:46.242452626 +0000 UTC m=+905.467893207" lastFinishedPulling="2026-02-19 18:44:49.901489104 +0000 UTC m=+909.126929645" observedRunningTime="2026-02-19 18:44:50.312744264 +0000 UTC m=+909.538184845" watchObservedRunningTime="2026-02-19 18:44:50.320847873 +0000 UTC m=+909.546288444" Feb 19 18:44:50 crc kubenswrapper[4813]: I0219 18:44:50.349210 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-4c7n8" podStartSLOduration=1.415424647 podStartE2EDuration="6.349193157s" podCreationTimestamp="2026-02-19 18:44:44 +0000 UTC" firstStartedPulling="2026-02-19 18:44:44.918693501 +0000 UTC m=+904.144134042" lastFinishedPulling="2026-02-19 18:44:49.852462011 +0000 UTC m=+909.077902552" observedRunningTime="2026-02-19 18:44:50.345398896 +0000 UTC m=+909.570839447" watchObservedRunningTime="2026-02-19 18:44:50.349193157 +0000 UTC m=+909.574633698" Feb 19 18:44:59 crc kubenswrapper[4813]: I0219 18:44:59.431304 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-4c7n8" Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.153232 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s"] Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.154388 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.157256 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.158017 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.176794 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s"] Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.242168 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00cbc745-db4e-48c3-a8b1-21561917a0eb-secret-volume\") pod \"collect-profiles-29525445-cd67s\" (UID: \"00cbc745-db4e-48c3-a8b1-21561917a0eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.242219 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfkpp\" (UniqueName: \"kubernetes.io/projected/00cbc745-db4e-48c3-a8b1-21561917a0eb-kube-api-access-nfkpp\") pod \"collect-profiles-29525445-cd67s\" (UID: \"00cbc745-db4e-48c3-a8b1-21561917a0eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.242244 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00cbc745-db4e-48c3-a8b1-21561917a0eb-config-volume\") pod \"collect-profiles-29525445-cd67s\" (UID: \"00cbc745-db4e-48c3-a8b1-21561917a0eb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.344003 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00cbc745-db4e-48c3-a8b1-21561917a0eb-secret-volume\") pod \"collect-profiles-29525445-cd67s\" (UID: \"00cbc745-db4e-48c3-a8b1-21561917a0eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.344053 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfkpp\" (UniqueName: \"kubernetes.io/projected/00cbc745-db4e-48c3-a8b1-21561917a0eb-kube-api-access-nfkpp\") pod \"collect-profiles-29525445-cd67s\" (UID: \"00cbc745-db4e-48c3-a8b1-21561917a0eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.344087 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00cbc745-db4e-48c3-a8b1-21561917a0eb-config-volume\") pod \"collect-profiles-29525445-cd67s\" (UID: \"00cbc745-db4e-48c3-a8b1-21561917a0eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.345356 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00cbc745-db4e-48c3-a8b1-21561917a0eb-config-volume\") pod \"collect-profiles-29525445-cd67s\" (UID: \"00cbc745-db4e-48c3-a8b1-21561917a0eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.355244 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/00cbc745-db4e-48c3-a8b1-21561917a0eb-secret-volume\") pod \"collect-profiles-29525445-cd67s\" (UID: \"00cbc745-db4e-48c3-a8b1-21561917a0eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.376435 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfkpp\" (UniqueName: \"kubernetes.io/projected/00cbc745-db4e-48c3-a8b1-21561917a0eb-kube-api-access-nfkpp\") pod \"collect-profiles-29525445-cd67s\" (UID: \"00cbc745-db4e-48c3-a8b1-21561917a0eb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.476948 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" Feb 19 18:45:00 crc kubenswrapper[4813]: I0219 18:45:00.933222 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s"] Feb 19 18:45:01 crc kubenswrapper[4813]: I0219 18:45:01.371929 4813 generic.go:334] "Generic (PLEG): container finished" podID="00cbc745-db4e-48c3-a8b1-21561917a0eb" containerID="5d02044cdc86dae23dcd27c93a04ac77447a8420276c23f63364b126070ffe18" exitCode=0 Feb 19 18:45:01 crc kubenswrapper[4813]: I0219 18:45:01.371998 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" event={"ID":"00cbc745-db4e-48c3-a8b1-21561917a0eb","Type":"ContainerDied","Data":"5d02044cdc86dae23dcd27c93a04ac77447a8420276c23f63364b126070ffe18"} Feb 19 18:45:01 crc kubenswrapper[4813]: I0219 18:45:01.372048 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" 
event={"ID":"00cbc745-db4e-48c3-a8b1-21561917a0eb","Type":"ContainerStarted","Data":"4d35aae12ddf73fd6de3dc1851b85bd50f729f406e06ade3caacc250eb1a7214"} Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.744292 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.878164 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00cbc745-db4e-48c3-a8b1-21561917a0eb-config-volume\") pod \"00cbc745-db4e-48c3-a8b1-21561917a0eb\" (UID: \"00cbc745-db4e-48c3-a8b1-21561917a0eb\") " Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.878264 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00cbc745-db4e-48c3-a8b1-21561917a0eb-secret-volume\") pod \"00cbc745-db4e-48c3-a8b1-21561917a0eb\" (UID: \"00cbc745-db4e-48c3-a8b1-21561917a0eb\") " Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.878306 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfkpp\" (UniqueName: \"kubernetes.io/projected/00cbc745-db4e-48c3-a8b1-21561917a0eb-kube-api-access-nfkpp\") pod \"00cbc745-db4e-48c3-a8b1-21561917a0eb\" (UID: \"00cbc745-db4e-48c3-a8b1-21561917a0eb\") " Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.879040 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00cbc745-db4e-48c3-a8b1-21561917a0eb-config-volume" (OuterVolumeSpecName: "config-volume") pod "00cbc745-db4e-48c3-a8b1-21561917a0eb" (UID: "00cbc745-db4e-48c3-a8b1-21561917a0eb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.884696 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00cbc745-db4e-48c3-a8b1-21561917a0eb-kube-api-access-nfkpp" (OuterVolumeSpecName: "kube-api-access-nfkpp") pod "00cbc745-db4e-48c3-a8b1-21561917a0eb" (UID: "00cbc745-db4e-48c3-a8b1-21561917a0eb"). InnerVolumeSpecName "kube-api-access-nfkpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.885534 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00cbc745-db4e-48c3-a8b1-21561917a0eb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "00cbc745-db4e-48c3-a8b1-21561917a0eb" (UID: "00cbc745-db4e-48c3-a8b1-21561917a0eb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.948014 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-r94m6"] Feb 19 18:45:02 crc kubenswrapper[4813]: E0219 18:45:02.948570 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00cbc745-db4e-48c3-a8b1-21561917a0eb" containerName="collect-profiles" Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.948598 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="00cbc745-db4e-48c3-a8b1-21561917a0eb" containerName="collect-profiles" Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.948833 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="00cbc745-db4e-48c3-a8b1-21561917a0eb" containerName="collect-profiles" Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.949511 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-r94m6" Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.952800 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-bw4w6" Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.965611 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-r94m6"] Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.980285 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00cbc745-db4e-48c3-a8b1-21561917a0eb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.980318 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/00cbc745-db4e-48c3-a8b1-21561917a0eb-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 18:45:02 crc kubenswrapper[4813]: I0219 18:45:02.980332 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfkpp\" (UniqueName: \"kubernetes.io/projected/00cbc745-db4e-48c3-a8b1-21561917a0eb-kube-api-access-nfkpp\") on node \"crc\" DevicePath \"\"" Feb 19 18:45:03 crc kubenswrapper[4813]: I0219 18:45:03.081175 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fszxw\" (UniqueName: \"kubernetes.io/projected/5088c764-f0da-4ec1-a3a5-f1b5cee7d1e2-kube-api-access-fszxw\") pod \"cert-manager-545d4d4674-r94m6\" (UID: \"5088c764-f0da-4ec1-a3a5-f1b5cee7d1e2\") " pod="cert-manager/cert-manager-545d4d4674-r94m6" Feb 19 18:45:03 crc kubenswrapper[4813]: I0219 18:45:03.081270 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5088c764-f0da-4ec1-a3a5-f1b5cee7d1e2-bound-sa-token\") pod \"cert-manager-545d4d4674-r94m6\" (UID: 
\"5088c764-f0da-4ec1-a3a5-f1b5cee7d1e2\") " pod="cert-manager/cert-manager-545d4d4674-r94m6" Feb 19 18:45:03 crc kubenswrapper[4813]: I0219 18:45:03.182756 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fszxw\" (UniqueName: \"kubernetes.io/projected/5088c764-f0da-4ec1-a3a5-f1b5cee7d1e2-kube-api-access-fszxw\") pod \"cert-manager-545d4d4674-r94m6\" (UID: \"5088c764-f0da-4ec1-a3a5-f1b5cee7d1e2\") " pod="cert-manager/cert-manager-545d4d4674-r94m6" Feb 19 18:45:03 crc kubenswrapper[4813]: I0219 18:45:03.182864 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5088c764-f0da-4ec1-a3a5-f1b5cee7d1e2-bound-sa-token\") pod \"cert-manager-545d4d4674-r94m6\" (UID: \"5088c764-f0da-4ec1-a3a5-f1b5cee7d1e2\") " pod="cert-manager/cert-manager-545d4d4674-r94m6" Feb 19 18:45:03 crc kubenswrapper[4813]: I0219 18:45:03.207646 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fszxw\" (UniqueName: \"kubernetes.io/projected/5088c764-f0da-4ec1-a3a5-f1b5cee7d1e2-kube-api-access-fszxw\") pod \"cert-manager-545d4d4674-r94m6\" (UID: \"5088c764-f0da-4ec1-a3a5-f1b5cee7d1e2\") " pod="cert-manager/cert-manager-545d4d4674-r94m6" Feb 19 18:45:03 crc kubenswrapper[4813]: I0219 18:45:03.211895 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5088c764-f0da-4ec1-a3a5-f1b5cee7d1e2-bound-sa-token\") pod \"cert-manager-545d4d4674-r94m6\" (UID: \"5088c764-f0da-4ec1-a3a5-f1b5cee7d1e2\") " pod="cert-manager/cert-manager-545d4d4674-r94m6" Feb 19 18:45:03 crc kubenswrapper[4813]: I0219 18:45:03.268715 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-r94m6" Feb 19 18:45:03 crc kubenswrapper[4813]: I0219 18:45:03.388002 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" event={"ID":"00cbc745-db4e-48c3-a8b1-21561917a0eb","Type":"ContainerDied","Data":"4d35aae12ddf73fd6de3dc1851b85bd50f729f406e06ade3caacc250eb1a7214"} Feb 19 18:45:03 crc kubenswrapper[4813]: I0219 18:45:03.388416 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d35aae12ddf73fd6de3dc1851b85bd50f729f406e06ade3caacc250eb1a7214" Feb 19 18:45:03 crc kubenswrapper[4813]: I0219 18:45:03.388053 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s" Feb 19 18:45:03 crc kubenswrapper[4813]: E0219 18:45:03.500023 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00cbc745_db4e_48c3_a8b1_21561917a0eb.slice/crio-4d35aae12ddf73fd6de3dc1851b85bd50f729f406e06ade3caacc250eb1a7214\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00cbc745_db4e_48c3_a8b1_21561917a0eb.slice\": RecentStats: unable to find data in memory cache]" Feb 19 18:45:03 crc kubenswrapper[4813]: I0219 18:45:03.706131 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-r94m6"] Feb 19 18:45:03 crc kubenswrapper[4813]: W0219 18:45:03.720488 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5088c764_f0da_4ec1_a3a5_f1b5cee7d1e2.slice/crio-0c827e43d609bb970dc6c30bc63535e157c0f567b2e6710e5fec1db8210c01b9 WatchSource:0}: Error finding container 
0c827e43d609bb970dc6c30bc63535e157c0f567b2e6710e5fec1db8210c01b9: Status 404 returned error can't find the container with id 0c827e43d609bb970dc6c30bc63535e157c0f567b2e6710e5fec1db8210c01b9 Feb 19 18:45:04 crc kubenswrapper[4813]: I0219 18:45:04.398356 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-r94m6" event={"ID":"5088c764-f0da-4ec1-a3a5-f1b5cee7d1e2","Type":"ContainerStarted","Data":"13e2135b18e6f7ec01c01e343803d73f4bf631faa8ee2c7798d04dc919778a54"} Feb 19 18:45:04 crc kubenswrapper[4813]: I0219 18:45:04.398776 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-r94m6" event={"ID":"5088c764-f0da-4ec1-a3a5-f1b5cee7d1e2","Type":"ContainerStarted","Data":"0c827e43d609bb970dc6c30bc63535e157c0f567b2e6710e5fec1db8210c01b9"} Feb 19 18:45:04 crc kubenswrapper[4813]: I0219 18:45:04.433439 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-r94m6" podStartSLOduration=2.433412492 podStartE2EDuration="2.433412492s" podCreationTimestamp="2026-02-19 18:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:45:04.426312926 +0000 UTC m=+923.651753507" watchObservedRunningTime="2026-02-19 18:45:04.433412492 +0000 UTC m=+923.658853063" Feb 19 18:45:12 crc kubenswrapper[4813]: I0219 18:45:12.612188 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6tpgj"] Feb 19 18:45:12 crc kubenswrapper[4813]: I0219 18:45:12.614125 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6tpgj" Feb 19 18:45:12 crc kubenswrapper[4813]: I0219 18:45:12.616547 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-h7dmd" Feb 19 18:45:12 crc kubenswrapper[4813]: I0219 18:45:12.621722 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 19 18:45:12 crc kubenswrapper[4813]: I0219 18:45:12.621826 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 19 18:45:12 crc kubenswrapper[4813]: I0219 18:45:12.643268 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6tpgj"] Feb 19 18:45:12 crc kubenswrapper[4813]: I0219 18:45:12.723637 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9kkc\" (UniqueName: \"kubernetes.io/projected/eb283122-f58c-4f02-8c5f-14c449ebb68f-kube-api-access-c9kkc\") pod \"openstack-operator-index-6tpgj\" (UID: \"eb283122-f58c-4f02-8c5f-14c449ebb68f\") " pod="openstack-operators/openstack-operator-index-6tpgj" Feb 19 18:45:12 crc kubenswrapper[4813]: I0219 18:45:12.825113 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9kkc\" (UniqueName: \"kubernetes.io/projected/eb283122-f58c-4f02-8c5f-14c449ebb68f-kube-api-access-c9kkc\") pod \"openstack-operator-index-6tpgj\" (UID: \"eb283122-f58c-4f02-8c5f-14c449ebb68f\") " pod="openstack-operators/openstack-operator-index-6tpgj" Feb 19 18:45:12 crc kubenswrapper[4813]: I0219 18:45:12.849808 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9kkc\" (UniqueName: \"kubernetes.io/projected/eb283122-f58c-4f02-8c5f-14c449ebb68f-kube-api-access-c9kkc\") pod \"openstack-operator-index-6tpgj\" (UID: 
\"eb283122-f58c-4f02-8c5f-14c449ebb68f\") " pod="openstack-operators/openstack-operator-index-6tpgj" Feb 19 18:45:12 crc kubenswrapper[4813]: I0219 18:45:12.945045 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6tpgj" Feb 19 18:45:13 crc kubenswrapper[4813]: I0219 18:45:13.220291 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6tpgj"] Feb 19 18:45:13 crc kubenswrapper[4813]: I0219 18:45:13.484374 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6tpgj" event={"ID":"eb283122-f58c-4f02-8c5f-14c449ebb68f","Type":"ContainerStarted","Data":"cad8906e54888c6a558eb1e1b932b6432c81875232db1bc58997a2a528c52b8a"} Feb 19 18:45:15 crc kubenswrapper[4813]: I0219 18:45:15.489118 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6tpgj" event={"ID":"eb283122-f58c-4f02-8c5f-14c449ebb68f","Type":"ContainerStarted","Data":"7beebf0306ccd92b4f814d47aae91eb6b4f72b6d88a0d49a2caf2eb076e7ebbb"} Feb 19 18:45:15 crc kubenswrapper[4813]: I0219 18:45:15.516939 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6tpgj" podStartSLOduration=2.392367993 podStartE2EDuration="3.516914164s" podCreationTimestamp="2026-02-19 18:45:12 +0000 UTC" firstStartedPulling="2026-02-19 18:45:13.231121093 +0000 UTC m=+932.456561654" lastFinishedPulling="2026-02-19 18:45:14.355667284 +0000 UTC m=+933.581107825" observedRunningTime="2026-02-19 18:45:15.508885588 +0000 UTC m=+934.734326169" watchObservedRunningTime="2026-02-19 18:45:15.516914164 +0000 UTC m=+934.742354745" Feb 19 18:45:16 crc kubenswrapper[4813]: I0219 18:45:16.973910 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6tpgj"] Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.382723 
4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rvp4w"] Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.385691 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.389845 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rvp4w"] Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.490926 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-catalog-content\") pod \"community-operators-rvp4w\" (UID: \"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193\") " pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.490999 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-utilities\") pod \"community-operators-rvp4w\" (UID: \"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193\") " pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.491036 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nxzw\" (UniqueName: \"kubernetes.io/projected/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-kube-api-access-4nxzw\") pod \"community-operators-rvp4w\" (UID: \"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193\") " pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.502816 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-6tpgj" podUID="eb283122-f58c-4f02-8c5f-14c449ebb68f" containerName="registry-server" 
containerID="cri-o://7beebf0306ccd92b4f814d47aae91eb6b4f72b6d88a0d49a2caf2eb076e7ebbb" gracePeriod=2 Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.592924 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-utilities\") pod \"community-operators-rvp4w\" (UID: \"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193\") " pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.593002 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nxzw\" (UniqueName: \"kubernetes.io/projected/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-kube-api-access-4nxzw\") pod \"community-operators-rvp4w\" (UID: \"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193\") " pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.593731 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-utilities\") pod \"community-operators-rvp4w\" (UID: \"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193\") " pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.593945 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-catalog-content\") pod \"community-operators-rvp4w\" (UID: \"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193\") " pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.594494 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-catalog-content\") pod \"community-operators-rvp4w\" (UID: 
\"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193\") " pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.618462 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nxzw\" (UniqueName: \"kubernetes.io/projected/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-kube-api-access-4nxzw\") pod \"community-operators-rvp4w\" (UID: \"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193\") " pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.705253 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.785209 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-dzhpp"] Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.785964 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dzhpp" Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.807276 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dzhpp"] Feb 19 18:45:17 crc kubenswrapper[4813]: I0219 18:45:17.898319 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp9ft\" (UniqueName: \"kubernetes.io/projected/650c56a5-66e9-423c-8bed-c1680c1d53a8-kube-api-access-tp9ft\") pod \"openstack-operator-index-dzhpp\" (UID: \"650c56a5-66e9-423c-8bed-c1680c1d53a8\") " pod="openstack-operators/openstack-operator-index-dzhpp" Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.000426 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp9ft\" (UniqueName: \"kubernetes.io/projected/650c56a5-66e9-423c-8bed-c1680c1d53a8-kube-api-access-tp9ft\") pod \"openstack-operator-index-dzhpp\" (UID: 
\"650c56a5-66e9-423c-8bed-c1680c1d53a8\") " pod="openstack-operators/openstack-operator-index-dzhpp" Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.015636 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6tpgj" Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.027815 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rvp4w"] Feb 19 18:45:18 crc kubenswrapper[4813]: W0219 18:45:18.029483 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec0a4c5f_ffbd_465f_8b33_6f6d88c1a193.slice/crio-96965940bfaaf4d791a8a40b5c8f0e9090ab945f51669c16526896c43e07caee WatchSource:0}: Error finding container 96965940bfaaf4d791a8a40b5c8f0e9090ab945f51669c16526896c43e07caee: Status 404 returned error can't find the container with id 96965940bfaaf4d791a8a40b5c8f0e9090ab945f51669c16526896c43e07caee Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.030566 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp9ft\" (UniqueName: \"kubernetes.io/projected/650c56a5-66e9-423c-8bed-c1680c1d53a8-kube-api-access-tp9ft\") pod \"openstack-operator-index-dzhpp\" (UID: \"650c56a5-66e9-423c-8bed-c1680c1d53a8\") " pod="openstack-operators/openstack-operator-index-dzhpp" Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.105109 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-dzhpp" Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.204401 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9kkc\" (UniqueName: \"kubernetes.io/projected/eb283122-f58c-4f02-8c5f-14c449ebb68f-kube-api-access-c9kkc\") pod \"eb283122-f58c-4f02-8c5f-14c449ebb68f\" (UID: \"eb283122-f58c-4f02-8c5f-14c449ebb68f\") " Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.207157 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb283122-f58c-4f02-8c5f-14c449ebb68f-kube-api-access-c9kkc" (OuterVolumeSpecName: "kube-api-access-c9kkc") pod "eb283122-f58c-4f02-8c5f-14c449ebb68f" (UID: "eb283122-f58c-4f02-8c5f-14c449ebb68f"). InnerVolumeSpecName "kube-api-access-c9kkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.289452 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dzhpp"] Feb 19 18:45:18 crc kubenswrapper[4813]: W0219 18:45:18.293649 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod650c56a5_66e9_423c_8bed_c1680c1d53a8.slice/crio-cbc338bfe7a9df78c1454d6d3a904f623d81ecb3e9cceb513fc93fd88ea7e5f4 WatchSource:0}: Error finding container cbc338bfe7a9df78c1454d6d3a904f623d81ecb3e9cceb513fc93fd88ea7e5f4: Status 404 returned error can't find the container with id cbc338bfe7a9df78c1454d6d3a904f623d81ecb3e9cceb513fc93fd88ea7e5f4 Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.306627 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9kkc\" (UniqueName: \"kubernetes.io/projected/eb283122-f58c-4f02-8c5f-14c449ebb68f-kube-api-access-c9kkc\") on node \"crc\" DevicePath \"\"" Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.510503 4813 generic.go:334] 
"Generic (PLEG): container finished" podID="eb283122-f58c-4f02-8c5f-14c449ebb68f" containerID="7beebf0306ccd92b4f814d47aae91eb6b4f72b6d88a0d49a2caf2eb076e7ebbb" exitCode=0 Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.510611 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6tpgj" event={"ID":"eb283122-f58c-4f02-8c5f-14c449ebb68f","Type":"ContainerDied","Data":"7beebf0306ccd92b4f814d47aae91eb6b4f72b6d88a0d49a2caf2eb076e7ebbb"} Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.510648 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6tpgj" event={"ID":"eb283122-f58c-4f02-8c5f-14c449ebb68f","Type":"ContainerDied","Data":"cad8906e54888c6a558eb1e1b932b6432c81875232db1bc58997a2a528c52b8a"} Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.510687 4813 scope.go:117] "RemoveContainer" containerID="7beebf0306ccd92b4f814d47aae91eb6b4f72b6d88a0d49a2caf2eb076e7ebbb" Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.510683 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6tpgj" Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.514343 4813 generic.go:334] "Generic (PLEG): container finished" podID="ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193" containerID="973424851e41312f16c03579d99fa717da37af6ecdcc721d23d81289c0a1f1c0" exitCode=0 Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.514461 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvp4w" event={"ID":"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193","Type":"ContainerDied","Data":"973424851e41312f16c03579d99fa717da37af6ecdcc721d23d81289c0a1f1c0"} Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.514502 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvp4w" event={"ID":"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193","Type":"ContainerStarted","Data":"96965940bfaaf4d791a8a40b5c8f0e9090ab945f51669c16526896c43e07caee"} Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.515632 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dzhpp" event={"ID":"650c56a5-66e9-423c-8bed-c1680c1d53a8","Type":"ContainerStarted","Data":"cbc338bfe7a9df78c1454d6d3a904f623d81ecb3e9cceb513fc93fd88ea7e5f4"} Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.539456 4813 scope.go:117] "RemoveContainer" containerID="7beebf0306ccd92b4f814d47aae91eb6b4f72b6d88a0d49a2caf2eb076e7ebbb" Feb 19 18:45:18 crc kubenswrapper[4813]: E0219 18:45:18.540535 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7beebf0306ccd92b4f814d47aae91eb6b4f72b6d88a0d49a2caf2eb076e7ebbb\": container with ID starting with 7beebf0306ccd92b4f814d47aae91eb6b4f72b6d88a0d49a2caf2eb076e7ebbb not found: ID does not exist" containerID="7beebf0306ccd92b4f814d47aae91eb6b4f72b6d88a0d49a2caf2eb076e7ebbb" Feb 19 18:45:18 crc kubenswrapper[4813]: 
I0219 18:45:18.540740 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7beebf0306ccd92b4f814d47aae91eb6b4f72b6d88a0d49a2caf2eb076e7ebbb"} err="failed to get container status \"7beebf0306ccd92b4f814d47aae91eb6b4f72b6d88a0d49a2caf2eb076e7ebbb\": rpc error: code = NotFound desc = could not find container \"7beebf0306ccd92b4f814d47aae91eb6b4f72b6d88a0d49a2caf2eb076e7ebbb\": container with ID starting with 7beebf0306ccd92b4f814d47aae91eb6b4f72b6d88a0d49a2caf2eb076e7ebbb not found: ID does not exist" Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.556087 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-6tpgj"] Feb 19 18:45:18 crc kubenswrapper[4813]: I0219 18:45:18.562797 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-6tpgj"] Feb 19 18:45:19 crc kubenswrapper[4813]: I0219 18:45:19.483309 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb283122-f58c-4f02-8c5f-14c449ebb68f" path="/var/lib/kubelet/pods/eb283122-f58c-4f02-8c5f-14c449ebb68f/volumes" Feb 19 18:45:19 crc kubenswrapper[4813]: I0219 18:45:19.525252 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dzhpp" event={"ID":"650c56a5-66e9-423c-8bed-c1680c1d53a8","Type":"ContainerStarted","Data":"e6ee5da3bb7403f23a7531cc2e52062101d4c141cb5781bbecb1d55e6f408889"} Feb 19 18:45:19 crc kubenswrapper[4813]: I0219 18:45:19.529061 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvp4w" event={"ID":"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193","Type":"ContainerStarted","Data":"d1dac24e741a65059ed0511d957fcc21c06938c3db9adc96f70493a56b83b668"} Feb 19 18:45:19 crc kubenswrapper[4813]: I0219 18:45:19.551776 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-dzhpp" 
podStartSLOduration=2.080052756 podStartE2EDuration="2.551754155s" podCreationTimestamp="2026-02-19 18:45:17 +0000 UTC" firstStartedPulling="2026-02-19 18:45:18.297595602 +0000 UTC m=+937.523036143" lastFinishedPulling="2026-02-19 18:45:18.769296961 +0000 UTC m=+937.994737542" observedRunningTime="2026-02-19 18:45:19.546994834 +0000 UTC m=+938.772435445" watchObservedRunningTime="2026-02-19 18:45:19.551754155 +0000 UTC m=+938.777194736" Feb 19 18:45:20 crc kubenswrapper[4813]: I0219 18:45:20.539559 4813 generic.go:334] "Generic (PLEG): container finished" podID="ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193" containerID="d1dac24e741a65059ed0511d957fcc21c06938c3db9adc96f70493a56b83b668" exitCode=0 Feb 19 18:45:20 crc kubenswrapper[4813]: I0219 18:45:20.539799 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvp4w" event={"ID":"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193","Type":"ContainerDied","Data":"d1dac24e741a65059ed0511d957fcc21c06938c3db9adc96f70493a56b83b668"} Feb 19 18:45:21 crc kubenswrapper[4813]: I0219 18:45:21.548788 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvp4w" event={"ID":"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193","Type":"ContainerStarted","Data":"d377a471ded2c4dc04e18632b75ccb3fc917ba91524077dd03eacc5bbbb8ee98"} Feb 19 18:45:21 crc kubenswrapper[4813]: I0219 18:45:21.577601 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rvp4w" podStartSLOduration=2.100024544 podStartE2EDuration="4.577587299s" podCreationTimestamp="2026-02-19 18:45:17 +0000 UTC" firstStartedPulling="2026-02-19 18:45:18.516336196 +0000 UTC m=+937.741776777" lastFinishedPulling="2026-02-19 18:45:20.993898991 +0000 UTC m=+940.219339532" observedRunningTime="2026-02-19 18:45:21.577095783 +0000 UTC m=+940.802536374" watchObservedRunningTime="2026-02-19 18:45:21.577587299 +0000 UTC m=+940.803027830" Feb 19 18:45:27 crc 
kubenswrapper[4813]: I0219 18:45:27.706451 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:27 crc kubenswrapper[4813]: I0219 18:45:27.707025 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:27 crc kubenswrapper[4813]: I0219 18:45:27.765026 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:28 crc kubenswrapper[4813]: I0219 18:45:28.106151 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-dzhpp" Feb 19 18:45:28 crc kubenswrapper[4813]: I0219 18:45:28.107017 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-dzhpp" Feb 19 18:45:28 crc kubenswrapper[4813]: I0219 18:45:28.149059 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-dzhpp" Feb 19 18:45:28 crc kubenswrapper[4813]: I0219 18:45:28.638634 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-dzhpp" Feb 19 18:45:28 crc kubenswrapper[4813]: I0219 18:45:28.682161 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:30 crc kubenswrapper[4813]: I0219 18:45:30.330070 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:45:30 crc kubenswrapper[4813]: I0219 18:45:30.330145 4813 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:45:30 crc kubenswrapper[4813]: I0219 18:45:30.963818 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rvp4w"] Feb 19 18:45:30 crc kubenswrapper[4813]: I0219 18:45:30.964317 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rvp4w" podUID="ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193" containerName="registry-server" containerID="cri-o://d377a471ded2c4dc04e18632b75ccb3fc917ba91524077dd03eacc5bbbb8ee98" gracePeriod=2 Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.207258 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb"] Feb 19 18:45:31 crc kubenswrapper[4813]: E0219 18:45:31.207617 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb283122-f58c-4f02-8c5f-14c449ebb68f" containerName="registry-server" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.207636 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb283122-f58c-4f02-8c5f-14c449ebb68f" containerName="registry-server" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.207768 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb283122-f58c-4f02-8c5f-14c449ebb68f" containerName="registry-server" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.208696 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.210779 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-ft42g" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.238415 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb"] Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.388584 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb\" (UID: \"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.388789 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rctbs\" (UniqueName: \"kubernetes.io/projected/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-kube-api-access-rctbs\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb\" (UID: \"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.388834 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb\" (UID: \"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 
18:45:31.489726 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb\" (UID: \"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.489774 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rctbs\" (UniqueName: \"kubernetes.io/projected/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-kube-api-access-rctbs\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb\" (UID: \"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.489807 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb\" (UID: \"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.491084 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb\" (UID: \"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.491134 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb\" (UID: \"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.507474 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rctbs\" (UniqueName: \"kubernetes.io/projected/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-kube-api-access-rctbs\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb\" (UID: \"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.527470 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.634392 4813 generic.go:334] "Generic (PLEG): container finished" podID="ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193" containerID="d377a471ded2c4dc04e18632b75ccb3fc917ba91524077dd03eacc5bbbb8ee98" exitCode=0 Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.634426 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvp4w" event={"ID":"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193","Type":"ContainerDied","Data":"d377a471ded2c4dc04e18632b75ccb3fc917ba91524077dd03eacc5bbbb8ee98"} Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.714378 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb"] Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.859239 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.895368 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-catalog-content\") pod \"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193\" (UID: \"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193\") " Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.895406 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-utilities\") pod \"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193\" (UID: \"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193\") " Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.895456 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nxzw\" (UniqueName: \"kubernetes.io/projected/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-kube-api-access-4nxzw\") pod \"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193\" (UID: \"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193\") " Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.896771 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-utilities" (OuterVolumeSpecName: "utilities") pod "ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193" (UID: "ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.904682 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-kube-api-access-4nxzw" (OuterVolumeSpecName: "kube-api-access-4nxzw") pod "ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193" (UID: "ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193"). InnerVolumeSpecName "kube-api-access-4nxzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.953098 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193" (UID: "ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.997036 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nxzw\" (UniqueName: \"kubernetes.io/projected/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-kube-api-access-4nxzw\") on node \"crc\" DevicePath \"\"" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.997087 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:45:31 crc kubenswrapper[4813]: I0219 18:45:31.997101 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:45:32 crc kubenswrapper[4813]: I0219 18:45:32.837537 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvp4w" event={"ID":"ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193","Type":"ContainerDied","Data":"96965940bfaaf4d791a8a40b5c8f0e9090ab945f51669c16526896c43e07caee"} Feb 19 18:45:32 crc kubenswrapper[4813]: I0219 18:45:32.839062 4813 scope.go:117] "RemoveContainer" containerID="d377a471ded2c4dc04e18632b75ccb3fc917ba91524077dd03eacc5bbbb8ee98" Feb 19 18:45:32 crc kubenswrapper[4813]: I0219 18:45:32.839263 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rvp4w" Feb 19 18:45:32 crc kubenswrapper[4813]: I0219 18:45:32.846118 4813 generic.go:334] "Generic (PLEG): container finished" podID="a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f" containerID="dd54b664a0c3a8ee344174b256193483d2af3cca83e805025a5d5e863351c20b" exitCode=0 Feb 19 18:45:32 crc kubenswrapper[4813]: I0219 18:45:32.846180 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" event={"ID":"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f","Type":"ContainerDied","Data":"dd54b664a0c3a8ee344174b256193483d2af3cca83e805025a5d5e863351c20b"} Feb 19 18:45:32 crc kubenswrapper[4813]: I0219 18:45:32.846202 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" event={"ID":"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f","Type":"ContainerStarted","Data":"b3615caf0cf7ed0aded51df09876d287a23f21db7ea36b9e12cc7155f92f8736"} Feb 19 18:45:32 crc kubenswrapper[4813]: I0219 18:45:32.875535 4813 scope.go:117] "RemoveContainer" containerID="d1dac24e741a65059ed0511d957fcc21c06938c3db9adc96f70493a56b83b668" Feb 19 18:45:32 crc kubenswrapper[4813]: I0219 18:45:32.897857 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rvp4w"] Feb 19 18:45:32 crc kubenswrapper[4813]: I0219 18:45:32.903879 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rvp4w"] Feb 19 18:45:32 crc kubenswrapper[4813]: I0219 18:45:32.911868 4813 scope.go:117] "RemoveContainer" containerID="973424851e41312f16c03579d99fa717da37af6ecdcc721d23d81289c0a1f1c0" Feb 19 18:45:33 crc kubenswrapper[4813]: I0219 18:45:33.489008 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193" 
path="/var/lib/kubelet/pods/ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193/volumes" Feb 19 18:45:34 crc kubenswrapper[4813]: I0219 18:45:34.866176 4813 generic.go:334] "Generic (PLEG): container finished" podID="a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f" containerID="dda8adc3930915cc83a51ed3c22edf31a400c836c0ec67d95b3e53a4b35a1880" exitCode=0 Feb 19 18:45:34 crc kubenswrapper[4813]: I0219 18:45:34.866287 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" event={"ID":"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f","Type":"ContainerDied","Data":"dda8adc3930915cc83a51ed3c22edf31a400c836c0ec67d95b3e53a4b35a1880"} Feb 19 18:45:35 crc kubenswrapper[4813]: I0219 18:45:35.877944 4813 generic.go:334] "Generic (PLEG): container finished" podID="a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f" containerID="325cf4b263393ad7cb7fc6f2b2da6eb12266183a6c79ab348dfbf792a10490d6" exitCode=0 Feb 19 18:45:35 crc kubenswrapper[4813]: I0219 18:45:35.878013 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" event={"ID":"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f","Type":"ContainerDied","Data":"325cf4b263393ad7cb7fc6f2b2da6eb12266183a6c79ab348dfbf792a10490d6"} Feb 19 18:45:37 crc kubenswrapper[4813]: I0219 18:45:37.301205 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" Feb 19 18:45:37 crc kubenswrapper[4813]: I0219 18:45:37.490584 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rctbs\" (UniqueName: \"kubernetes.io/projected/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-kube-api-access-rctbs\") pod \"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f\" (UID: \"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f\") " Feb 19 18:45:37 crc kubenswrapper[4813]: I0219 18:45:37.490707 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-util\") pod \"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f\" (UID: \"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f\") " Feb 19 18:45:37 crc kubenswrapper[4813]: I0219 18:45:37.490803 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-bundle\") pod \"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f\" (UID: \"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f\") " Feb 19 18:45:37 crc kubenswrapper[4813]: I0219 18:45:37.491591 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-bundle" (OuterVolumeSpecName: "bundle") pod "a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f" (UID: "a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:45:37 crc kubenswrapper[4813]: I0219 18:45:37.498907 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-kube-api-access-rctbs" (OuterVolumeSpecName: "kube-api-access-rctbs") pod "a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f" (UID: "a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f"). InnerVolumeSpecName "kube-api-access-rctbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:45:37 crc kubenswrapper[4813]: I0219 18:45:37.592259 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rctbs\" (UniqueName: \"kubernetes.io/projected/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-kube-api-access-rctbs\") on node \"crc\" DevicePath \"\"" Feb 19 18:45:37 crc kubenswrapper[4813]: I0219 18:45:37.592306 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:45:37 crc kubenswrapper[4813]: I0219 18:45:37.662530 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-util" (OuterVolumeSpecName: "util") pod "a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f" (UID: "a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:45:37 crc kubenswrapper[4813]: I0219 18:45:37.693538 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f-util\") on node \"crc\" DevicePath \"\"" Feb 19 18:45:37 crc kubenswrapper[4813]: I0219 18:45:37.898308 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" event={"ID":"a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f","Type":"ContainerDied","Data":"b3615caf0cf7ed0aded51df09876d287a23f21db7ea36b9e12cc7155f92f8736"} Feb 19 18:45:37 crc kubenswrapper[4813]: I0219 18:45:37.898377 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3615caf0cf7ed0aded51df09876d287a23f21db7ea36b9e12cc7155f92f8736" Feb 19 18:45:37 crc kubenswrapper[4813]: I0219 18:45:37.898453 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.380586 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-mz5c2"] Feb 19 18:45:42 crc kubenswrapper[4813]: E0219 18:45:42.382419 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f" containerName="pull" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.382546 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f" containerName="pull" Feb 19 18:45:42 crc kubenswrapper[4813]: E0219 18:45:42.382628 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f" containerName="util" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.382697 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f" containerName="util" Feb 19 18:45:42 crc kubenswrapper[4813]: E0219 18:45:42.382769 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f" containerName="extract" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.382845 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f" containerName="extract" Feb 19 18:45:42 crc kubenswrapper[4813]: E0219 18:45:42.382968 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193" containerName="extract-utilities" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.383061 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193" containerName="extract-utilities" Feb 19 18:45:42 crc kubenswrapper[4813]: E0219 18:45:42.383145 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193" containerName="extract-content" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.383223 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193" containerName="extract-content" Feb 19 18:45:42 crc kubenswrapper[4813]: E0219 18:45:42.383305 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193" containerName="registry-server" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.383376 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193" containerName="registry-server" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.383575 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f" containerName="extract" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.383661 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec0a4c5f-ffbd-465f-8b33-6f6d88c1a193" containerName="registry-server" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.384208 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-mz5c2" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.386554 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-q6zlc" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.431857 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-mz5c2"] Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.561014 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r49gb\" (UniqueName: \"kubernetes.io/projected/ed3475d6-09d9-4dde-8d28-46876ec8862c-kube-api-access-r49gb\") pod \"openstack-operator-controller-init-6679bf9b57-mz5c2\" (UID: \"ed3475d6-09d9-4dde-8d28-46876ec8862c\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-mz5c2" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.568792 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vzn59"] Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.570220 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.577842 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzn59"] Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.662232 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r49gb\" (UniqueName: \"kubernetes.io/projected/ed3475d6-09d9-4dde-8d28-46876ec8862c-kube-api-access-r49gb\") pod \"openstack-operator-controller-init-6679bf9b57-mz5c2\" (UID: \"ed3475d6-09d9-4dde-8d28-46876ec8862c\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-mz5c2" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.681252 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r49gb\" (UniqueName: \"kubernetes.io/projected/ed3475d6-09d9-4dde-8d28-46876ec8862c-kube-api-access-r49gb\") pod \"openstack-operator-controller-init-6679bf9b57-mz5c2\" (UID: \"ed3475d6-09d9-4dde-8d28-46876ec8862c\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-mz5c2" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.706752 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-mz5c2" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.764082 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6rxq\" (UniqueName: \"kubernetes.io/projected/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-kube-api-access-s6rxq\") pod \"certified-operators-vzn59\" (UID: \"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba\") " pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.764144 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-catalog-content\") pod \"certified-operators-vzn59\" (UID: \"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba\") " pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.764183 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-utilities\") pod \"certified-operators-vzn59\" (UID: \"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba\") " pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.865189 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6rxq\" (UniqueName: \"kubernetes.io/projected/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-kube-api-access-s6rxq\") pod \"certified-operators-vzn59\" (UID: \"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba\") " pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.865247 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-catalog-content\") pod \"certified-operators-vzn59\" (UID: \"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba\") " pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.865287 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-utilities\") pod \"certified-operators-vzn59\" (UID: \"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba\") " pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.865923 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-utilities\") pod \"certified-operators-vzn59\" (UID: \"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba\") " pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.865926 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-catalog-content\") pod \"certified-operators-vzn59\" (UID: \"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba\") " pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:42 crc kubenswrapper[4813]: I0219 18:45:42.898890 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6rxq\" (UniqueName: \"kubernetes.io/projected/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-kube-api-access-s6rxq\") pod \"certified-operators-vzn59\" (UID: \"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba\") " pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:43 crc kubenswrapper[4813]: I0219 18:45:43.181620 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-mz5c2"] Feb 19 18:45:43 crc 
kubenswrapper[4813]: I0219 18:45:43.182002 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:43 crc kubenswrapper[4813]: W0219 18:45:43.186464 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded3475d6_09d9_4dde_8d28_46876ec8862c.slice/crio-89ced660a33535c42029b7c4dbebaffcfec861b65e517b7885146768e2e0fbac WatchSource:0}: Error finding container 89ced660a33535c42029b7c4dbebaffcfec861b65e517b7885146768e2e0fbac: Status 404 returned error can't find the container with id 89ced660a33535c42029b7c4dbebaffcfec861b65e517b7885146768e2e0fbac Feb 19 18:45:43 crc kubenswrapper[4813]: I0219 18:45:43.393829 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzn59"] Feb 19 18:45:43 crc kubenswrapper[4813]: W0219 18:45:43.399065 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0b9bf1_cf29_45eb_adb8_77d343bf8cba.slice/crio-368a5f65f42c13654799caa9275dc1af2ff4e7b783744d4ae2a590faedbfa6bc WatchSource:0}: Error finding container 368a5f65f42c13654799caa9275dc1af2ff4e7b783744d4ae2a590faedbfa6bc: Status 404 returned error can't find the container with id 368a5f65f42c13654799caa9275dc1af2ff4e7b783744d4ae2a590faedbfa6bc Feb 19 18:45:43 crc kubenswrapper[4813]: I0219 18:45:43.946668 4813 generic.go:334] "Generic (PLEG): container finished" podID="9f0b9bf1-cf29-45eb-adb8-77d343bf8cba" containerID="2d3efdba512c0c1457d9e2f3f2875a0071f4364a87addd11b861a1d1b48c7722" exitCode=0 Feb 19 18:45:43 crc kubenswrapper[4813]: I0219 18:45:43.946730 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzn59" 
event={"ID":"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba","Type":"ContainerDied","Data":"2d3efdba512c0c1457d9e2f3f2875a0071f4364a87addd11b861a1d1b48c7722"} Feb 19 18:45:43 crc kubenswrapper[4813]: I0219 18:45:43.947019 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzn59" event={"ID":"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba","Type":"ContainerStarted","Data":"368a5f65f42c13654799caa9275dc1af2ff4e7b783744d4ae2a590faedbfa6bc"} Feb 19 18:45:43 crc kubenswrapper[4813]: I0219 18:45:43.948778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-mz5c2" event={"ID":"ed3475d6-09d9-4dde-8d28-46876ec8862c","Type":"ContainerStarted","Data":"89ced660a33535c42029b7c4dbebaffcfec861b65e517b7885146768e2e0fbac"} Feb 19 18:45:45 crc kubenswrapper[4813]: I0219 18:45:45.963873 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzn59" event={"ID":"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba","Type":"ContainerStarted","Data":"e7a81f0c26e560eda5e89827a773a5e7417fb3470aa79e8ca6301e36a23edecf"} Feb 19 18:45:47 crc kubenswrapper[4813]: I0219 18:45:47.000581 4813 generic.go:334] "Generic (PLEG): container finished" podID="9f0b9bf1-cf29-45eb-adb8-77d343bf8cba" containerID="e7a81f0c26e560eda5e89827a773a5e7417fb3470aa79e8ca6301e36a23edecf" exitCode=0 Feb 19 18:45:47 crc kubenswrapper[4813]: I0219 18:45:47.000651 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzn59" event={"ID":"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba","Type":"ContainerDied","Data":"e7a81f0c26e560eda5e89827a773a5e7417fb3470aa79e8ca6301e36a23edecf"} Feb 19 18:45:49 crc kubenswrapper[4813]: I0219 18:45:49.015435 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzn59" 
event={"ID":"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba","Type":"ContainerStarted","Data":"43da6b0ffcd43cdfe216c11a4a6dd5dafaab6afad7db51f5cd8c774ba7f5f72a"} Feb 19 18:45:49 crc kubenswrapper[4813]: I0219 18:45:49.017026 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-mz5c2" event={"ID":"ed3475d6-09d9-4dde-8d28-46876ec8862c","Type":"ContainerStarted","Data":"da46744a73069479638787c698b2e662c8345f81f23426e4c0956ca076b84a41"} Feb 19 18:45:49 crc kubenswrapper[4813]: I0219 18:45:49.017090 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-mz5c2" Feb 19 18:45:49 crc kubenswrapper[4813]: I0219 18:45:49.033277 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vzn59" podStartSLOduration=2.396677892 podStartE2EDuration="7.033252176s" podCreationTimestamp="2026-02-19 18:45:42 +0000 UTC" firstStartedPulling="2026-02-19 18:45:43.947818493 +0000 UTC m=+963.173259024" lastFinishedPulling="2026-02-19 18:45:48.584392727 +0000 UTC m=+967.809833308" observedRunningTime="2026-02-19 18:45:49.031974976 +0000 UTC m=+968.257415537" watchObservedRunningTime="2026-02-19 18:45:49.033252176 +0000 UTC m=+968.258692737" Feb 19 18:45:49 crc kubenswrapper[4813]: I0219 18:45:49.066035 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-mz5c2" podStartSLOduration=1.986154225 podStartE2EDuration="7.066020652s" podCreationTimestamp="2026-02-19 18:45:42 +0000 UTC" firstStartedPulling="2026-02-19 18:45:43.191030026 +0000 UTC m=+962.416470567" lastFinishedPulling="2026-02-19 18:45:48.270896453 +0000 UTC m=+967.496336994" observedRunningTime="2026-02-19 18:45:49.065361331 +0000 UTC m=+968.290801892" watchObservedRunningTime="2026-02-19 18:45:49.066020652 +0000 UTC m=+968.291461193" Feb 
19 18:45:53 crc kubenswrapper[4813]: I0219 18:45:53.182467 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:53 crc kubenswrapper[4813]: I0219 18:45:53.183825 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:53 crc kubenswrapper[4813]: I0219 18:45:53.247093 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:54 crc kubenswrapper[4813]: I0219 18:45:54.094232 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:54 crc kubenswrapper[4813]: I0219 18:45:54.586004 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-62sh6"] Feb 19 18:45:54 crc kubenswrapper[4813]: I0219 18:45:54.588194 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:45:54 crc kubenswrapper[4813]: I0219 18:45:54.614318 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-62sh6"] Feb 19 18:45:54 crc kubenswrapper[4813]: I0219 18:45:54.633755 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-catalog-content\") pod \"redhat-marketplace-62sh6\" (UID: \"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028\") " pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:45:54 crc kubenswrapper[4813]: I0219 18:45:54.634144 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdhgl\" (UniqueName: \"kubernetes.io/projected/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-kube-api-access-wdhgl\") pod \"redhat-marketplace-62sh6\" (UID: \"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028\") " pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:45:54 crc kubenswrapper[4813]: I0219 18:45:54.634180 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-utilities\") pod \"redhat-marketplace-62sh6\" (UID: \"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028\") " pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:45:54 crc kubenswrapper[4813]: I0219 18:45:54.735874 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdhgl\" (UniqueName: \"kubernetes.io/projected/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-kube-api-access-wdhgl\") pod \"redhat-marketplace-62sh6\" (UID: \"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028\") " pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:45:54 crc kubenswrapper[4813]: I0219 18:45:54.736041 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-catalog-content\") pod \"redhat-marketplace-62sh6\" (UID: \"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028\") " pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:45:54 crc kubenswrapper[4813]: I0219 18:45:54.736095 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-utilities\") pod \"redhat-marketplace-62sh6\" (UID: \"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028\") " pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:45:54 crc kubenswrapper[4813]: I0219 18:45:54.737163 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-utilities\") pod \"redhat-marketplace-62sh6\" (UID: \"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028\") " pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:45:54 crc kubenswrapper[4813]: I0219 18:45:54.737199 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-catalog-content\") pod \"redhat-marketplace-62sh6\" (UID: \"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028\") " pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:45:54 crc kubenswrapper[4813]: I0219 18:45:54.772539 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdhgl\" (UniqueName: \"kubernetes.io/projected/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-kube-api-access-wdhgl\") pod \"redhat-marketplace-62sh6\" (UID: \"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028\") " pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:45:54 crc kubenswrapper[4813]: I0219 18:45:54.924340 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:45:55 crc kubenswrapper[4813]: I0219 18:45:55.395006 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-62sh6"] Feb 19 18:45:56 crc kubenswrapper[4813]: I0219 18:45:56.075166 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62sh6" event={"ID":"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028","Type":"ContainerStarted","Data":"b30821a88a9b13accaf4d1f35affe74ec72300613221ae5ec4c2ccbb4d0afc68"} Feb 19 18:45:56 crc kubenswrapper[4813]: I0219 18:45:56.772619 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vzn59"] Feb 19 18:45:56 crc kubenswrapper[4813]: I0219 18:45:56.773094 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vzn59" podUID="9f0b9bf1-cf29-45eb-adb8-77d343bf8cba" containerName="registry-server" containerID="cri-o://43da6b0ffcd43cdfe216c11a4a6dd5dafaab6afad7db51f5cd8c774ba7f5f72a" gracePeriod=2 Feb 19 18:45:57 crc kubenswrapper[4813]: I0219 18:45:57.086274 4813 generic.go:334] "Generic (PLEG): container finished" podID="5ccebe5d-8b67-4a1b-ad70-55d4f6f29028" containerID="54931dee42312854919a8e7c0ba63a48ebb5c40c3208ea97008bbaca1bc190d8" exitCode=0 Feb 19 18:45:57 crc kubenswrapper[4813]: I0219 18:45:57.086341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62sh6" event={"ID":"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028","Type":"ContainerDied","Data":"54931dee42312854919a8e7c0ba63a48ebb5c40c3208ea97008bbaca1bc190d8"} Feb 19 18:45:57 crc kubenswrapper[4813]: I0219 18:45:57.762065 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:57 crc kubenswrapper[4813]: I0219 18:45:57.881725 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6rxq\" (UniqueName: \"kubernetes.io/projected/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-kube-api-access-s6rxq\") pod \"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba\" (UID: \"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba\") " Feb 19 18:45:57 crc kubenswrapper[4813]: I0219 18:45:57.882227 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-utilities\") pod \"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba\" (UID: \"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba\") " Feb 19 18:45:57 crc kubenswrapper[4813]: I0219 18:45:57.882289 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-catalog-content\") pod \"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba\" (UID: \"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba\") " Feb 19 18:45:57 crc kubenswrapper[4813]: I0219 18:45:57.883481 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-utilities" (OuterVolumeSpecName: "utilities") pod "9f0b9bf1-cf29-45eb-adb8-77d343bf8cba" (UID: "9f0b9bf1-cf29-45eb-adb8-77d343bf8cba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:45:57 crc kubenswrapper[4813]: I0219 18:45:57.891189 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-kube-api-access-s6rxq" (OuterVolumeSpecName: "kube-api-access-s6rxq") pod "9f0b9bf1-cf29-45eb-adb8-77d343bf8cba" (UID: "9f0b9bf1-cf29-45eb-adb8-77d343bf8cba"). InnerVolumeSpecName "kube-api-access-s6rxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:45:57 crc kubenswrapper[4813]: I0219 18:45:57.975053 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f0b9bf1-cf29-45eb-adb8-77d343bf8cba" (UID: "9f0b9bf1-cf29-45eb-adb8-77d343bf8cba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:45:57 crc kubenswrapper[4813]: I0219 18:45:57.983358 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:45:57 crc kubenswrapper[4813]: I0219 18:45:57.983391 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:45:57 crc kubenswrapper[4813]: I0219 18:45:57.983410 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6rxq\" (UniqueName: \"kubernetes.io/projected/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba-kube-api-access-s6rxq\") on node \"crc\" DevicePath \"\"" Feb 19 18:45:58 crc kubenswrapper[4813]: I0219 18:45:58.098301 4813 generic.go:334] "Generic (PLEG): container finished" podID="9f0b9bf1-cf29-45eb-adb8-77d343bf8cba" containerID="43da6b0ffcd43cdfe216c11a4a6dd5dafaab6afad7db51f5cd8c774ba7f5f72a" exitCode=0 Feb 19 18:45:58 crc kubenswrapper[4813]: I0219 18:45:58.098358 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzn59" event={"ID":"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba","Type":"ContainerDied","Data":"43da6b0ffcd43cdfe216c11a4a6dd5dafaab6afad7db51f5cd8c774ba7f5f72a"} Feb 19 18:45:58 crc kubenswrapper[4813]: I0219 18:45:58.098400 4813 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-vzn59" event={"ID":"9f0b9bf1-cf29-45eb-adb8-77d343bf8cba","Type":"ContainerDied","Data":"368a5f65f42c13654799caa9275dc1af2ff4e7b783744d4ae2a590faedbfa6bc"} Feb 19 18:45:58 crc kubenswrapper[4813]: I0219 18:45:58.098412 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vzn59" Feb 19 18:45:58 crc kubenswrapper[4813]: I0219 18:45:58.098430 4813 scope.go:117] "RemoveContainer" containerID="43da6b0ffcd43cdfe216c11a4a6dd5dafaab6afad7db51f5cd8c774ba7f5f72a" Feb 19 18:45:58 crc kubenswrapper[4813]: I0219 18:45:58.119421 4813 scope.go:117] "RemoveContainer" containerID="e7a81f0c26e560eda5e89827a773a5e7417fb3470aa79e8ca6301e36a23edecf" Feb 19 18:45:58 crc kubenswrapper[4813]: I0219 18:45:58.151779 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vzn59"] Feb 19 18:45:58 crc kubenswrapper[4813]: I0219 18:45:58.155854 4813 scope.go:117] "RemoveContainer" containerID="2d3efdba512c0c1457d9e2f3f2875a0071f4364a87addd11b861a1d1b48c7722" Feb 19 18:45:58 crc kubenswrapper[4813]: I0219 18:45:58.159972 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vzn59"] Feb 19 18:45:58 crc kubenswrapper[4813]: I0219 18:45:58.182366 4813 scope.go:117] "RemoveContainer" containerID="43da6b0ffcd43cdfe216c11a4a6dd5dafaab6afad7db51f5cd8c774ba7f5f72a" Feb 19 18:45:58 crc kubenswrapper[4813]: E0219 18:45:58.183369 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43da6b0ffcd43cdfe216c11a4a6dd5dafaab6afad7db51f5cd8c774ba7f5f72a\": container with ID starting with 43da6b0ffcd43cdfe216c11a4a6dd5dafaab6afad7db51f5cd8c774ba7f5f72a not found: ID does not exist" containerID="43da6b0ffcd43cdfe216c11a4a6dd5dafaab6afad7db51f5cd8c774ba7f5f72a" Feb 19 18:45:58 crc kubenswrapper[4813]: I0219 
18:45:58.183503 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43da6b0ffcd43cdfe216c11a4a6dd5dafaab6afad7db51f5cd8c774ba7f5f72a"} err="failed to get container status \"43da6b0ffcd43cdfe216c11a4a6dd5dafaab6afad7db51f5cd8c774ba7f5f72a\": rpc error: code = NotFound desc = could not find container \"43da6b0ffcd43cdfe216c11a4a6dd5dafaab6afad7db51f5cd8c774ba7f5f72a\": container with ID starting with 43da6b0ffcd43cdfe216c11a4a6dd5dafaab6afad7db51f5cd8c774ba7f5f72a not found: ID does not exist" Feb 19 18:45:58 crc kubenswrapper[4813]: I0219 18:45:58.183633 4813 scope.go:117] "RemoveContainer" containerID="e7a81f0c26e560eda5e89827a773a5e7417fb3470aa79e8ca6301e36a23edecf" Feb 19 18:45:58 crc kubenswrapper[4813]: E0219 18:45:58.184195 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7a81f0c26e560eda5e89827a773a5e7417fb3470aa79e8ca6301e36a23edecf\": container with ID starting with e7a81f0c26e560eda5e89827a773a5e7417fb3470aa79e8ca6301e36a23edecf not found: ID does not exist" containerID="e7a81f0c26e560eda5e89827a773a5e7417fb3470aa79e8ca6301e36a23edecf" Feb 19 18:45:58 crc kubenswrapper[4813]: I0219 18:45:58.184242 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7a81f0c26e560eda5e89827a773a5e7417fb3470aa79e8ca6301e36a23edecf"} err="failed to get container status \"e7a81f0c26e560eda5e89827a773a5e7417fb3470aa79e8ca6301e36a23edecf\": rpc error: code = NotFound desc = could not find container \"e7a81f0c26e560eda5e89827a773a5e7417fb3470aa79e8ca6301e36a23edecf\": container with ID starting with e7a81f0c26e560eda5e89827a773a5e7417fb3470aa79e8ca6301e36a23edecf not found: ID does not exist" Feb 19 18:45:58 crc kubenswrapper[4813]: I0219 18:45:58.184278 4813 scope.go:117] "RemoveContainer" containerID="2d3efdba512c0c1457d9e2f3f2875a0071f4364a87addd11b861a1d1b48c7722" Feb 19 18:45:58 crc 
kubenswrapper[4813]: E0219 18:45:58.184582 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d3efdba512c0c1457d9e2f3f2875a0071f4364a87addd11b861a1d1b48c7722\": container with ID starting with 2d3efdba512c0c1457d9e2f3f2875a0071f4364a87addd11b861a1d1b48c7722 not found: ID does not exist" containerID="2d3efdba512c0c1457d9e2f3f2875a0071f4364a87addd11b861a1d1b48c7722" Feb 19 18:45:58 crc kubenswrapper[4813]: I0219 18:45:58.184610 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d3efdba512c0c1457d9e2f3f2875a0071f4364a87addd11b861a1d1b48c7722"} err="failed to get container status \"2d3efdba512c0c1457d9e2f3f2875a0071f4364a87addd11b861a1d1b48c7722\": rpc error: code = NotFound desc = could not find container \"2d3efdba512c0c1457d9e2f3f2875a0071f4364a87addd11b861a1d1b48c7722\": container with ID starting with 2d3efdba512c0c1457d9e2f3f2875a0071f4364a87addd11b861a1d1b48c7722 not found: ID does not exist" Feb 19 18:45:59 crc kubenswrapper[4813]: I0219 18:45:59.109762 4813 generic.go:334] "Generic (PLEG): container finished" podID="5ccebe5d-8b67-4a1b-ad70-55d4f6f29028" containerID="3b21e98de4f50b7b4e2b8d25d2baed156566f8c300da682c971be1a46c43ae77" exitCode=0 Feb 19 18:45:59 crc kubenswrapper[4813]: I0219 18:45:59.109804 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62sh6" event={"ID":"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028","Type":"ContainerDied","Data":"3b21e98de4f50b7b4e2b8d25d2baed156566f8c300da682c971be1a46c43ae77"} Feb 19 18:45:59 crc kubenswrapper[4813]: I0219 18:45:59.486004 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f0b9bf1-cf29-45eb-adb8-77d343bf8cba" path="/var/lib/kubelet/pods/9f0b9bf1-cf29-45eb-adb8-77d343bf8cba/volumes" Feb 19 18:46:00 crc kubenswrapper[4813]: I0219 18:46:00.122804 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-62sh6" event={"ID":"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028","Type":"ContainerStarted","Data":"1d9eb2871f4c51067cd29eec8719642a87a259edbe16134a6ab35dcc60da7221"} Feb 19 18:46:00 crc kubenswrapper[4813]: I0219 18:46:00.154697 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-62sh6" podStartSLOduration=3.644919623 podStartE2EDuration="6.154672884s" podCreationTimestamp="2026-02-19 18:45:54 +0000 UTC" firstStartedPulling="2026-02-19 18:45:57.088542886 +0000 UTC m=+976.313983457" lastFinishedPulling="2026-02-19 18:45:59.598296167 +0000 UTC m=+978.823736718" observedRunningTime="2026-02-19 18:46:00.150450664 +0000 UTC m=+979.375891205" watchObservedRunningTime="2026-02-19 18:46:00.154672884 +0000 UTC m=+979.380113465" Feb 19 18:46:00 crc kubenswrapper[4813]: I0219 18:46:00.329509 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:46:00 crc kubenswrapper[4813]: I0219 18:46:00.329574 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:46:02 crc kubenswrapper[4813]: I0219 18:46:02.710802 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-mz5c2" Feb 19 18:46:04 crc kubenswrapper[4813]: I0219 18:46:04.925056 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:46:04 
crc kubenswrapper[4813]: I0219 18:46:04.925472 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:46:04 crc kubenswrapper[4813]: I0219 18:46:04.997750 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:46:05 crc kubenswrapper[4813]: I0219 18:46:05.234446 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:46:07 crc kubenswrapper[4813]: I0219 18:46:07.398493 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-62sh6"] Feb 19 18:46:07 crc kubenswrapper[4813]: I0219 18:46:07.398997 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-62sh6" podUID="5ccebe5d-8b67-4a1b-ad70-55d4f6f29028" containerName="registry-server" containerID="cri-o://1d9eb2871f4c51067cd29eec8719642a87a259edbe16134a6ab35dcc60da7221" gracePeriod=2 Feb 19 18:46:07 crc kubenswrapper[4813]: I0219 18:46:07.796140 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:46:07 crc kubenswrapper[4813]: I0219 18:46:07.942686 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdhgl\" (UniqueName: \"kubernetes.io/projected/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-kube-api-access-wdhgl\") pod \"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028\" (UID: \"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028\") " Feb 19 18:46:07 crc kubenswrapper[4813]: I0219 18:46:07.943009 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-catalog-content\") pod \"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028\" (UID: \"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028\") " Feb 19 18:46:07 crc kubenswrapper[4813]: I0219 18:46:07.943048 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-utilities\") pod \"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028\" (UID: \"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028\") " Feb 19 18:46:07 crc kubenswrapper[4813]: I0219 18:46:07.944697 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-utilities" (OuterVolumeSpecName: "utilities") pod "5ccebe5d-8b67-4a1b-ad70-55d4f6f29028" (UID: "5ccebe5d-8b67-4a1b-ad70-55d4f6f29028"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:46:07 crc kubenswrapper[4813]: I0219 18:46:07.971231 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-kube-api-access-wdhgl" (OuterVolumeSpecName: "kube-api-access-wdhgl") pod "5ccebe5d-8b67-4a1b-ad70-55d4f6f29028" (UID: "5ccebe5d-8b67-4a1b-ad70-55d4f6f29028"). InnerVolumeSpecName "kube-api-access-wdhgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:46:07 crc kubenswrapper[4813]: I0219 18:46:07.996101 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ccebe5d-8b67-4a1b-ad70-55d4f6f29028" (UID: "5ccebe5d-8b67-4a1b-ad70-55d4f6f29028"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.044608 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdhgl\" (UniqueName: \"kubernetes.io/projected/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-kube-api-access-wdhgl\") on node \"crc\" DevicePath \"\"" Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.044644 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.044654 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.181651 4813 generic.go:334] "Generic (PLEG): container finished" podID="5ccebe5d-8b67-4a1b-ad70-55d4f6f29028" containerID="1d9eb2871f4c51067cd29eec8719642a87a259edbe16134a6ab35dcc60da7221" exitCode=0 Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.181697 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62sh6" event={"ID":"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028","Type":"ContainerDied","Data":"1d9eb2871f4c51067cd29eec8719642a87a259edbe16134a6ab35dcc60da7221"} Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.181725 4813 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-62sh6" event={"ID":"5ccebe5d-8b67-4a1b-ad70-55d4f6f29028","Type":"ContainerDied","Data":"b30821a88a9b13accaf4d1f35affe74ec72300613221ae5ec4c2ccbb4d0afc68"} Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.181746 4813 scope.go:117] "RemoveContainer" containerID="1d9eb2871f4c51067cd29eec8719642a87a259edbe16134a6ab35dcc60da7221" Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.182096 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62sh6" Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.219323 4813 scope.go:117] "RemoveContainer" containerID="3b21e98de4f50b7b4e2b8d25d2baed156566f8c300da682c971be1a46c43ae77" Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.232897 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-62sh6"] Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.238808 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-62sh6"] Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.239214 4813 scope.go:117] "RemoveContainer" containerID="54931dee42312854919a8e7c0ba63a48ebb5c40c3208ea97008bbaca1bc190d8" Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.282701 4813 scope.go:117] "RemoveContainer" containerID="1d9eb2871f4c51067cd29eec8719642a87a259edbe16134a6ab35dcc60da7221" Feb 19 18:46:08 crc kubenswrapper[4813]: E0219 18:46:08.283436 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d9eb2871f4c51067cd29eec8719642a87a259edbe16134a6ab35dcc60da7221\": container with ID starting with 1d9eb2871f4c51067cd29eec8719642a87a259edbe16134a6ab35dcc60da7221 not found: ID does not exist" containerID="1d9eb2871f4c51067cd29eec8719642a87a259edbe16134a6ab35dcc60da7221" Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.283518 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d9eb2871f4c51067cd29eec8719642a87a259edbe16134a6ab35dcc60da7221"} err="failed to get container status \"1d9eb2871f4c51067cd29eec8719642a87a259edbe16134a6ab35dcc60da7221\": rpc error: code = NotFound desc = could not find container \"1d9eb2871f4c51067cd29eec8719642a87a259edbe16134a6ab35dcc60da7221\": container with ID starting with 1d9eb2871f4c51067cd29eec8719642a87a259edbe16134a6ab35dcc60da7221 not found: ID does not exist" Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.283560 4813 scope.go:117] "RemoveContainer" containerID="3b21e98de4f50b7b4e2b8d25d2baed156566f8c300da682c971be1a46c43ae77" Feb 19 18:46:08 crc kubenswrapper[4813]: E0219 18:46:08.284701 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b21e98de4f50b7b4e2b8d25d2baed156566f8c300da682c971be1a46c43ae77\": container with ID starting with 3b21e98de4f50b7b4e2b8d25d2baed156566f8c300da682c971be1a46c43ae77 not found: ID does not exist" containerID="3b21e98de4f50b7b4e2b8d25d2baed156566f8c300da682c971be1a46c43ae77" Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.284767 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b21e98de4f50b7b4e2b8d25d2baed156566f8c300da682c971be1a46c43ae77"} err="failed to get container status \"3b21e98de4f50b7b4e2b8d25d2baed156566f8c300da682c971be1a46c43ae77\": rpc error: code = NotFound desc = could not find container \"3b21e98de4f50b7b4e2b8d25d2baed156566f8c300da682c971be1a46c43ae77\": container with ID starting with 3b21e98de4f50b7b4e2b8d25d2baed156566f8c300da682c971be1a46c43ae77 not found: ID does not exist" Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.284807 4813 scope.go:117] "RemoveContainer" containerID="54931dee42312854919a8e7c0ba63a48ebb5c40c3208ea97008bbaca1bc190d8" Feb 19 18:46:08 crc kubenswrapper[4813]: E0219 
18:46:08.285173 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54931dee42312854919a8e7c0ba63a48ebb5c40c3208ea97008bbaca1bc190d8\": container with ID starting with 54931dee42312854919a8e7c0ba63a48ebb5c40c3208ea97008bbaca1bc190d8 not found: ID does not exist" containerID="54931dee42312854919a8e7c0ba63a48ebb5c40c3208ea97008bbaca1bc190d8" Feb 19 18:46:08 crc kubenswrapper[4813]: I0219 18:46:08.285262 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54931dee42312854919a8e7c0ba63a48ebb5c40c3208ea97008bbaca1bc190d8"} err="failed to get container status \"54931dee42312854919a8e7c0ba63a48ebb5c40c3208ea97008bbaca1bc190d8\": rpc error: code = NotFound desc = could not find container \"54931dee42312854919a8e7c0ba63a48ebb5c40c3208ea97008bbaca1bc190d8\": container with ID starting with 54931dee42312854919a8e7c0ba63a48ebb5c40c3208ea97008bbaca1bc190d8 not found: ID does not exist" Feb 19 18:46:09 crc kubenswrapper[4813]: I0219 18:46:09.478889 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ccebe5d-8b67-4a1b-ad70-55d4f6f29028" path="/var/lib/kubelet/pods/5ccebe5d-8b67-4a1b-ad70-55d4f6f29028/volumes" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.780226 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-fn86j"] Feb 19 18:46:21 crc kubenswrapper[4813]: E0219 18:46:21.780906 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f0b9bf1-cf29-45eb-adb8-77d343bf8cba" containerName="registry-server" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.780918 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f0b9bf1-cf29-45eb-adb8-77d343bf8cba" containerName="registry-server" Feb 19 18:46:21 crc kubenswrapper[4813]: E0219 18:46:21.780933 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5ccebe5d-8b67-4a1b-ad70-55d4f6f29028" containerName="extract-content" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.780939 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ccebe5d-8b67-4a1b-ad70-55d4f6f29028" containerName="extract-content" Feb 19 18:46:21 crc kubenswrapper[4813]: E0219 18:46:21.780969 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ccebe5d-8b67-4a1b-ad70-55d4f6f29028" containerName="extract-utilities" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.780976 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ccebe5d-8b67-4a1b-ad70-55d4f6f29028" containerName="extract-utilities" Feb 19 18:46:21 crc kubenswrapper[4813]: E0219 18:46:21.780986 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f0b9bf1-cf29-45eb-adb8-77d343bf8cba" containerName="extract-content" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.780994 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f0b9bf1-cf29-45eb-adb8-77d343bf8cba" containerName="extract-content" Feb 19 18:46:21 crc kubenswrapper[4813]: E0219 18:46:21.781004 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f0b9bf1-cf29-45eb-adb8-77d343bf8cba" containerName="extract-utilities" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.781011 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f0b9bf1-cf29-45eb-adb8-77d343bf8cba" containerName="extract-utilities" Feb 19 18:46:21 crc kubenswrapper[4813]: E0219 18:46:21.781018 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ccebe5d-8b67-4a1b-ad70-55d4f6f29028" containerName="registry-server" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.781023 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ccebe5d-8b67-4a1b-ad70-55d4f6f29028" containerName="registry-server" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.781136 4813 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9f0b9bf1-cf29-45eb-adb8-77d343bf8cba" containerName="registry-server" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.781148 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ccebe5d-8b67-4a1b-ad70-55d4f6f29028" containerName="registry-server" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.781522 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-fn86j" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.783402 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jd997" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.786363 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-tfr46"] Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.787917 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tfr46" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.789302 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-rv6gd" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.790828 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-fn86j"] Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.814511 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-h2q2p"] Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.815372 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-h2q2p" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.820678 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-zx6bl" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.824361 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-h2q2p"] Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.845004 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-tfr46"] Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.854245 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-68vcr"] Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.893920 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-5t7t2"] Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.894520 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-5t7t2" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.894911 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-68vcr" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.898489 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cnb66"] Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.899304 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cnb66" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.899978 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-pv5d2" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.900279 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8j2tr" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.903688 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-pnspp" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.905223 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88tmm\" (UniqueName: \"kubernetes.io/projected/2a98a45c-87a7-4ecf-a0e8-2e8743b82960-kube-api-access-88tmm\") pod \"designate-operator-controller-manager-6d8bf5c495-h2q2p\" (UID: \"2a98a45c-87a7-4ecf-a0e8-2e8743b82960\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-h2q2p" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.905262 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t2nj\" (UniqueName: \"kubernetes.io/projected/1067532d-ed6f-4474-8006-a4b4a6b1c89e-kube-api-access-5t2nj\") pod \"barbican-operator-controller-manager-868647ff47-tfr46\" (UID: \"1067532d-ed6f-4474-8006-a4b4a6b1c89e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tfr46" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.905297 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hkmw\" (UniqueName: \"kubernetes.io/projected/b88eb3b7-8ae3-4e00-8a0f-a5bb7109241d-kube-api-access-7hkmw\") pod 
\"cinder-operator-controller-manager-5d946d989d-fn86j\" (UID: \"b88eb3b7-8ae3-4e00-8a0f-a5bb7109241d\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-fn86j" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.915359 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-5t7t2"] Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.932789 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-68vcr"] Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.951866 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cnb66"] Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.966350 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-g7x92"] Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.967728 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.973291 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-l49lr" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.973474 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.976756 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-8rrl4"] Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.977610 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8rrl4" Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.984877 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-g7x92"] Feb 19 18:46:21 crc kubenswrapper[4813]: I0219 18:46:21.985584 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-x5sbh" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:21.998396 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-j894w"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:21.999143 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-j894w" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.004337 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-kxc9n" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.006007 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hkmw\" (UniqueName: \"kubernetes.io/projected/b88eb3b7-8ae3-4e00-8a0f-a5bb7109241d-kube-api-access-7hkmw\") pod \"cinder-operator-controller-manager-5d946d989d-fn86j\" (UID: \"b88eb3b7-8ae3-4e00-8a0f-a5bb7109241d\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-fn86j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.006094 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dffls\" (UniqueName: \"kubernetes.io/projected/bb9127dd-a22e-4d2b-91c9-29021a547c96-kube-api-access-dffls\") pod \"horizon-operator-controller-manager-5b9b8895d5-cnb66\" (UID: \"bb9127dd-a22e-4d2b-91c9-29021a547c96\") " 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cnb66" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.006120 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gft78\" (UniqueName: \"kubernetes.io/projected/293d6d95-e878-451c-8a2f-3040ca924854-kube-api-access-gft78\") pod \"heat-operator-controller-manager-69f49c598c-68vcr\" (UID: \"293d6d95-e878-451c-8a2f-3040ca924854\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-68vcr" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.006143 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88tmm\" (UniqueName: \"kubernetes.io/projected/2a98a45c-87a7-4ecf-a0e8-2e8743b82960-kube-api-access-88tmm\") pod \"designate-operator-controller-manager-6d8bf5c495-h2q2p\" (UID: \"2a98a45c-87a7-4ecf-a0e8-2e8743b82960\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-h2q2p" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.006163 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t2nj\" (UniqueName: \"kubernetes.io/projected/1067532d-ed6f-4474-8006-a4b4a6b1c89e-kube-api-access-5t2nj\") pod \"barbican-operator-controller-manager-868647ff47-tfr46\" (UID: \"1067532d-ed6f-4474-8006-a4b4a6b1c89e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tfr46" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.006187 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrnjb\" (UniqueName: \"kubernetes.io/projected/496ee8bf-6326-4315-9219-ad7d26760349-kube-api-access-qrnjb\") pod \"glance-operator-controller-manager-77987464f4-5t7t2\" (UID: \"496ee8bf-6326-4315-9219-ad7d26760349\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-5t7t2" Feb 19 18:46:22 crc kubenswrapper[4813]: 
I0219 18:46:22.024989 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-8rrl4"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.035488 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-4pf4j"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.036641 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hkmw\" (UniqueName: \"kubernetes.io/projected/b88eb3b7-8ae3-4e00-8a0f-a5bb7109241d-kube-api-access-7hkmw\") pod \"cinder-operator-controller-manager-5d946d989d-fn86j\" (UID: \"b88eb3b7-8ae3-4e00-8a0f-a5bb7109241d\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-fn86j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.036881 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t2nj\" (UniqueName: \"kubernetes.io/projected/1067532d-ed6f-4474-8006-a4b4a6b1c89e-kube-api-access-5t2nj\") pod \"barbican-operator-controller-manager-868647ff47-tfr46\" (UID: \"1067532d-ed6f-4474-8006-a4b4a6b1c89e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tfr46" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.036981 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4pf4j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.043013 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-j894w"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.059602 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-khxs2" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.063278 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88tmm\" (UniqueName: \"kubernetes.io/projected/2a98a45c-87a7-4ecf-a0e8-2e8743b82960-kube-api-access-88tmm\") pod \"designate-operator-controller-manager-6d8bf5c495-h2q2p\" (UID: \"2a98a45c-87a7-4ecf-a0e8-2e8743b82960\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-h2q2p" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.063377 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-4pf4j"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.094011 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-c942j"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.094839 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-c942j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.101068 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-kkp48" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.101473 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-fn86j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.107642 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8llg\" (UniqueName: \"kubernetes.io/projected/ada715c6-71df-428d-97ff-72c3abe923a5-kube-api-access-f8llg\") pod \"ironic-operator-controller-manager-554564d7fc-8rrl4\" (UID: \"ada715c6-71df-428d-97ff-72c3abe923a5\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8rrl4" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.107710 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlt7h\" (UniqueName: \"kubernetes.io/projected/fd22b35b-ee39-435e-964c-d545597056b6-kube-api-access-rlt7h\") pod \"infra-operator-controller-manager-79d975b745-g7x92\" (UID: \"fd22b35b-ee39-435e-964c-d545597056b6\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.107754 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28dl8\" (UniqueName: \"kubernetes.io/projected/de341409-e6ef-4b1a-a359-a8f39fa0bc91-kube-api-access-28dl8\") pod \"keystone-operator-controller-manager-b4d948c87-j894w\" (UID: \"de341409-e6ef-4b1a-a359-a8f39fa0bc91\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-j894w" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.107783 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8jcx\" (UniqueName: \"kubernetes.io/projected/da53be97-4a7d-4c8a-bcfa-25e1ea34c7c6-kube-api-access-v8jcx\") pod \"manila-operator-controller-manager-54f6768c69-4pf4j\" (UID: \"da53be97-4a7d-4c8a-bcfa-25e1ea34c7c6\") " 
pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4pf4j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.107825 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dffls\" (UniqueName: \"kubernetes.io/projected/bb9127dd-a22e-4d2b-91c9-29021a547c96-kube-api-access-dffls\") pod \"horizon-operator-controller-manager-5b9b8895d5-cnb66\" (UID: \"bb9127dd-a22e-4d2b-91c9-29021a547c96\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cnb66" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.107850 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert\") pod \"infra-operator-controller-manager-79d975b745-g7x92\" (UID: \"fd22b35b-ee39-435e-964c-d545597056b6\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.107871 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gft78\" (UniqueName: \"kubernetes.io/projected/293d6d95-e878-451c-8a2f-3040ca924854-kube-api-access-gft78\") pod \"heat-operator-controller-manager-69f49c598c-68vcr\" (UID: \"293d6d95-e878-451c-8a2f-3040ca924854\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-68vcr" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.107918 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrnjb\" (UniqueName: \"kubernetes.io/projected/496ee8bf-6326-4315-9219-ad7d26760349-kube-api-access-qrnjb\") pod \"glance-operator-controller-manager-77987464f4-5t7t2\" (UID: \"496ee8bf-6326-4315-9219-ad7d26760349\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-5t7t2" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.121035 4813 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nmgx"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.121863 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nmgx" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.128305 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-59lwj" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.133474 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrnjb\" (UniqueName: \"kubernetes.io/projected/496ee8bf-6326-4315-9219-ad7d26760349-kube-api-access-qrnjb\") pod \"glance-operator-controller-manager-77987464f4-5t7t2\" (UID: \"496ee8bf-6326-4315-9219-ad7d26760349\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-5t7t2" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.133666 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-c942j"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.135903 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tfr46" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.147014 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nmgx"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.148237 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-h2q2p" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.162538 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dffls\" (UniqueName: \"kubernetes.io/projected/bb9127dd-a22e-4d2b-91c9-29021a547c96-kube-api-access-dffls\") pod \"horizon-operator-controller-manager-5b9b8895d5-cnb66\" (UID: \"bb9127dd-a22e-4d2b-91c9-29021a547c96\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cnb66" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.176028 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-scptn"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.177284 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-scptn" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.179723 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gft78\" (UniqueName: \"kubernetes.io/projected/293d6d95-e878-451c-8a2f-3040ca924854-kube-api-access-gft78\") pod \"heat-operator-controller-manager-69f49c598c-68vcr\" (UID: \"293d6d95-e878-451c-8a2f-3040ca924854\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-68vcr" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.184057 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-qxd7c" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.197451 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-scptn"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.209994 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8jcx\" (UniqueName: 
\"kubernetes.io/projected/da53be97-4a7d-4c8a-bcfa-25e1ea34c7c6-kube-api-access-v8jcx\") pod \"manila-operator-controller-manager-54f6768c69-4pf4j\" (UID: \"da53be97-4a7d-4c8a-bcfa-25e1ea34c7c6\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4pf4j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.210083 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert\") pod \"infra-operator-controller-manager-79d975b745-g7x92\" (UID: \"fd22b35b-ee39-435e-964c-d545597056b6\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.210169 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w75p\" (UniqueName: \"kubernetes.io/projected/5eac30f8-7b86-4d18-bf6f-e6dbf42a0625-kube-api-access-6w75p\") pod \"mariadb-operator-controller-manager-6994f66f48-c942j\" (UID: \"5eac30f8-7b86-4d18-bf6f-e6dbf42a0625\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-c942j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.210231 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8llg\" (UniqueName: \"kubernetes.io/projected/ada715c6-71df-428d-97ff-72c3abe923a5-kube-api-access-f8llg\") pod \"ironic-operator-controller-manager-554564d7fc-8rrl4\" (UID: \"ada715c6-71df-428d-97ff-72c3abe923a5\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8rrl4" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.210254 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlt7h\" (UniqueName: \"kubernetes.io/projected/fd22b35b-ee39-435e-964c-d545597056b6-kube-api-access-rlt7h\") pod \"infra-operator-controller-manager-79d975b745-g7x92\" (UID: 
\"fd22b35b-ee39-435e-964c-d545597056b6\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.210276 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28dl8\" (UniqueName: \"kubernetes.io/projected/de341409-e6ef-4b1a-a359-a8f39fa0bc91-kube-api-access-28dl8\") pod \"keystone-operator-controller-manager-b4d948c87-j894w\" (UID: \"de341409-e6ef-4b1a-a359-a8f39fa0bc91\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-j894w" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.210329 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w6jq\" (UniqueName: \"kubernetes.io/projected/0794f801-e064-425c-ab9d-c00719fb3f86-kube-api-access-4w6jq\") pod \"neutron-operator-controller-manager-64ddbf8bb-6nmgx\" (UID: \"0794f801-e064-425c-ab9d-c00719fb3f86\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nmgx" Feb 19 18:46:22 crc kubenswrapper[4813]: E0219 18:46:22.210906 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 18:46:22 crc kubenswrapper[4813]: E0219 18:46:22.210985 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert podName:fd22b35b-ee39-435e-964c-d545597056b6 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:22.710967821 +0000 UTC m=+1001.936408362 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert") pod "infra-operator-controller-manager-79d975b745-g7x92" (UID: "fd22b35b-ee39-435e-964c-d545597056b6") : secret "infra-operator-webhook-server-cert" not found Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.212767 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-5t7t2" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.214654 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-zqs6j"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.217486 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zqs6j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.222332 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-4bg96" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.224024 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-68vcr" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.234338 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cnb66" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.240584 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlt7h\" (UniqueName: \"kubernetes.io/projected/fd22b35b-ee39-435e-964c-d545597056b6-kube-api-access-rlt7h\") pod \"infra-operator-controller-manager-79d975b745-g7x92\" (UID: \"fd22b35b-ee39-435e-964c-d545597056b6\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.245085 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8jcx\" (UniqueName: \"kubernetes.io/projected/da53be97-4a7d-4c8a-bcfa-25e1ea34c7c6-kube-api-access-v8jcx\") pod \"manila-operator-controller-manager-54f6768c69-4pf4j\" (UID: \"da53be97-4a7d-4c8a-bcfa-25e1ea34c7c6\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4pf4j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.255004 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-zqs6j"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.255749 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8llg\" (UniqueName: \"kubernetes.io/projected/ada715c6-71df-428d-97ff-72c3abe923a5-kube-api-access-f8llg\") pod \"ironic-operator-controller-manager-554564d7fc-8rrl4\" (UID: \"ada715c6-71df-428d-97ff-72c3abe923a5\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8rrl4" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.274820 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28dl8\" (UniqueName: \"kubernetes.io/projected/de341409-e6ef-4b1a-a359-a8f39fa0bc91-kube-api-access-28dl8\") pod \"keystone-operator-controller-manager-b4d948c87-j894w\" (UID: 
\"de341409-e6ef-4b1a-a359-a8f39fa0bc91\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-j894w" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.311007 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-m8fzh"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.311805 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-m8fzh" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.313295 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zb49\" (UniqueName: \"kubernetes.io/projected/aa2a1869-c465-4840-8879-e882d8b996b5-kube-api-access-6zb49\") pod \"nova-operator-controller-manager-567668f5cf-scptn\" (UID: \"aa2a1869-c465-4840-8879-e882d8b996b5\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-scptn" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.313327 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w6jq\" (UniqueName: \"kubernetes.io/projected/0794f801-e064-425c-ab9d-c00719fb3f86-kube-api-access-4w6jq\") pod \"neutron-operator-controller-manager-64ddbf8bb-6nmgx\" (UID: \"0794f801-e064-425c-ab9d-c00719fb3f86\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nmgx" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.313394 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lj47\" (UniqueName: \"kubernetes.io/projected/1ca28ba3-eb1b-427d-a9b9-f0dcf0cc7dd9-kube-api-access-7lj47\") pod \"octavia-operator-controller-manager-69f8888797-zqs6j\" (UID: \"1ca28ba3-eb1b-427d-a9b9-f0dcf0cc7dd9\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zqs6j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 
18:46:22.313423 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w75p\" (UniqueName: \"kubernetes.io/projected/5eac30f8-7b86-4d18-bf6f-e6dbf42a0625-kube-api-access-6w75p\") pod \"mariadb-operator-controller-manager-6994f66f48-c942j\" (UID: \"5eac30f8-7b86-4d18-bf6f-e6dbf42a0625\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-c942j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.316382 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2nh4g" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.317197 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8rrl4" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.354752 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w6jq\" (UniqueName: \"kubernetes.io/projected/0794f801-e064-425c-ab9d-c00719fb3f86-kube-api-access-4w6jq\") pod \"neutron-operator-controller-manager-64ddbf8bb-6nmgx\" (UID: \"0794f801-e064-425c-ab9d-c00719fb3f86\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nmgx" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.363251 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-j894w" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.364196 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w75p\" (UniqueName: \"kubernetes.io/projected/5eac30f8-7b86-4d18-bf6f-e6dbf42a0625-kube-api-access-6w75p\") pod \"mariadb-operator-controller-manager-6994f66f48-c942j\" (UID: \"5eac30f8-7b86-4d18-bf6f-e6dbf42a0625\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-c942j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.377228 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.380597 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.397732 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.397772 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-95vt7" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.412847 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-m8fzh"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.414748 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zb49\" (UniqueName: \"kubernetes.io/projected/aa2a1869-c465-4840-8879-e882d8b996b5-kube-api-access-6zb49\") pod \"nova-operator-controller-manager-567668f5cf-scptn\" (UID: \"aa2a1869-c465-4840-8879-e882d8b996b5\") " 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-scptn" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.414863 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lj47\" (UniqueName: \"kubernetes.io/projected/1ca28ba3-eb1b-427d-a9b9-f0dcf0cc7dd9-kube-api-access-7lj47\") pod \"octavia-operator-controller-manager-69f8888797-zqs6j\" (UID: \"1ca28ba3-eb1b-427d-a9b9-f0dcf0cc7dd9\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zqs6j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.414897 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nzn2\" (UniqueName: \"kubernetes.io/projected/60dc84e0-9a88-4edc-9aac-ffbc1baa4cc8-kube-api-access-4nzn2\") pod \"ovn-operator-controller-manager-d44cf6b75-m8fzh\" (UID: \"60dc84e0-9a88-4edc-9aac-ffbc1baa4cc8\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-m8fzh" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.434073 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lj47\" (UniqueName: \"kubernetes.io/projected/1ca28ba3-eb1b-427d-a9b9-f0dcf0cc7dd9-kube-api-access-7lj47\") pod \"octavia-operator-controller-manager-69f8888797-zqs6j\" (UID: \"1ca28ba3-eb1b-427d-a9b9-f0dcf0cc7dd9\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zqs6j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.435666 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.452020 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zb49\" (UniqueName: \"kubernetes.io/projected/aa2a1869-c465-4840-8879-e882d8b996b5-kube-api-access-6zb49\") pod \"nova-operator-controller-manager-567668f5cf-scptn\" (UID: 
\"aa2a1869-c465-4840-8879-e882d8b996b5\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-scptn" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.457083 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.457185 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.463098 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.463303 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-cdhsk" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.498579 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4pf4j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.503752 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.504579 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.522983 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-8jjsg" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.524078 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nzn2\" (UniqueName: \"kubernetes.io/projected/60dc84e0-9a88-4edc-9aac-ffbc1baa4cc8-kube-api-access-4nzn2\") pod \"ovn-operator-controller-manager-d44cf6b75-m8fzh\" (UID: \"60dc84e0-9a88-4edc-9aac-ffbc1baa4cc8\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-m8fzh" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.524130 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jswt5\" (UniqueName: \"kubernetes.io/projected/317c820b-5be9-49c1-b291-e0d62982fce8-kube-api-access-jswt5\") pod \"placement-operator-controller-manager-8497b45c89-rgvk6\" (UID: \"317c820b-5be9-49c1-b291-e0d62982fce8\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.524156 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk7zg\" (UniqueName: \"kubernetes.io/projected/c250b55d-1d6b-4f28-8a0c-833736ac564b-kube-api-access-tk7zg\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx\" (UID: \"c250b55d-1d6b-4f28-8a0c-833736ac564b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.524180 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert\") pod 
\"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx\" (UID: \"c250b55d-1d6b-4f28-8a0c-833736ac564b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.547274 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-c942j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.548648 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nmgx" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.561346 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.566365 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-scptn" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.568854 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nzn2\" (UniqueName: \"kubernetes.io/projected/60dc84e0-9a88-4edc-9aac-ffbc1baa4cc8-kube-api-access-4nzn2\") pod \"ovn-operator-controller-manager-d44cf6b75-m8fzh\" (UID: \"60dc84e0-9a88-4edc-9aac-ffbc1baa4cc8\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-m8fzh" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.582282 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zqs6j" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.602426 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-gtm5k"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.603281 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-gtm5k" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.616686 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-b9n6x" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.643756 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9h2z\" (UniqueName: \"kubernetes.io/projected/52c59bd3-a386-46a0-b528-1a658f9a64a1-kube-api-access-n9h2z\") pod \"swift-operator-controller-manager-68f46476f-k6dm9\" (UID: \"52c59bd3-a386-46a0-b528-1a658f9a64a1\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.643806 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jswt5\" (UniqueName: \"kubernetes.io/projected/317c820b-5be9-49c1-b291-e0d62982fce8-kube-api-access-jswt5\") pod \"placement-operator-controller-manager-8497b45c89-rgvk6\" (UID: \"317c820b-5be9-49c1-b291-e0d62982fce8\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.643832 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk7zg\" (UniqueName: \"kubernetes.io/projected/c250b55d-1d6b-4f28-8a0c-833736ac564b-kube-api-access-tk7zg\") pod 
\"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx\" (UID: \"c250b55d-1d6b-4f28-8a0c-833736ac564b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.643861 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx\" (UID: \"c250b55d-1d6b-4f28-8a0c-833736ac564b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:46:22 crc kubenswrapper[4813]: E0219 18:46:22.644155 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 18:46:22 crc kubenswrapper[4813]: E0219 18:46:22.644211 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert podName:c250b55d-1d6b-4f28-8a0c-833736ac564b nodeName:}" failed. No retries permitted until 2026-02-19 18:46:23.144197624 +0000 UTC m=+1002.369638165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" (UID: "c250b55d-1d6b-4f28-8a0c-833736ac564b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.650207 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-gtm5k"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.665981 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-m8fzh" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.671092 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jswt5\" (UniqueName: \"kubernetes.io/projected/317c820b-5be9-49c1-b291-e0d62982fce8-kube-api-access-jswt5\") pod \"placement-operator-controller-manager-8497b45c89-rgvk6\" (UID: \"317c820b-5be9-49c1-b291-e0d62982fce8\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.679115 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk7zg\" (UniqueName: \"kubernetes.io/projected/c250b55d-1d6b-4f28-8a0c-833736ac564b-kube-api-access-tk7zg\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx\" (UID: \"c250b55d-1d6b-4f28-8a0c-833736ac564b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.682084 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-glgns"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.682893 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-glgns" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.686806 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-wj5hh" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.720743 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-glgns"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.745016 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.745916 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.749751 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9h2z\" (UniqueName: \"kubernetes.io/projected/52c59bd3-a386-46a0-b528-1a658f9a64a1-kube-api-access-n9h2z\") pod \"swift-operator-controller-manager-68f46476f-k6dm9\" (UID: \"52c59bd3-a386-46a0-b528-1a658f9a64a1\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.750117 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5zdq\" (UniqueName: \"kubernetes.io/projected/6d9cb4d7-d624-47bf-ad6f-977968f5c074-kube-api-access-d5zdq\") pod \"telemetry-operator-controller-manager-7f45b4ff68-gtm5k\" (UID: \"6d9cb4d7-d624-47bf-ad6f-977968f5c074\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-gtm5k" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.750213 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert\") pod \"infra-operator-controller-manager-79d975b745-g7x92\" (UID: \"fd22b35b-ee39-435e-964c-d545597056b6\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:46:22 crc kubenswrapper[4813]: E0219 18:46:22.750361 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 18:46:22 crc kubenswrapper[4813]: E0219 18:46:22.751153 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert podName:fd22b35b-ee39-435e-964c-d545597056b6 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:23.75113154 +0000 UTC m=+1002.976572081 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert") pod "infra-operator-controller-manager-79d975b745-g7x92" (UID: "fd22b35b-ee39-435e-964c-d545597056b6") : secret "infra-operator-webhook-server-cert" not found Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.753342 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.766721 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9qvxq" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.787627 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.788384 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9h2z\" (UniqueName: \"kubernetes.io/projected/52c59bd3-a386-46a0-b528-1a658f9a64a1-kube-api-access-n9h2z\") pod \"swift-operator-controller-manager-68f46476f-k6dm9\" (UID: \"52c59bd3-a386-46a0-b528-1a658f9a64a1\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.797005 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.798029 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.801404 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-7dk2k" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.801724 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.802296 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.812161 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.824091 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlcvr"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.824987 4813 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlcvr" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.832472 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlcvr"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.833921 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-dnhc6" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.851475 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5zdq\" (UniqueName: \"kubernetes.io/projected/6d9cb4d7-d624-47bf-ad6f-977968f5c074-kube-api-access-d5zdq\") pod \"telemetry-operator-controller-manager-7f45b4ff68-gtm5k\" (UID: \"6d9cb4d7-d624-47bf-ad6f-977968f5c074\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-gtm5k" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.851525 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw57m\" (UniqueName: \"kubernetes.io/projected/40a9fa53-e23a-4506-be2f-25a76446db8f-kube-api-access-gw57m\") pod \"watcher-operator-controller-manager-5db88f68c-xq5wn\" (UID: \"40a9fa53-e23a-4506-be2f-25a76446db8f\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.851599 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssnk8\" (UniqueName: \"kubernetes.io/projected/78220d1d-a875-4e0d-af2d-8a0668017340-kube-api-access-ssnk8\") pod \"test-operator-controller-manager-7866795846-glgns\" (UID: \"78220d1d-a875-4e0d-af2d-8a0668017340\") " pod="openstack-operators/test-operator-controller-manager-7866795846-glgns" Feb 19 18:46:22 crc 
kubenswrapper[4813]: I0219 18:46:22.872700 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5zdq\" (UniqueName: \"kubernetes.io/projected/6d9cb4d7-d624-47bf-ad6f-977968f5c074-kube-api-access-d5zdq\") pod \"telemetry-operator-controller-manager-7f45b4ff68-gtm5k\" (UID: \"6d9cb4d7-d624-47bf-ad6f-977968f5c074\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-gtm5k" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.882741 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.933825 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-fn86j"] Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.952583 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdx98\" (UniqueName: \"kubernetes.io/projected/ae13095c-48b2-4236-8fef-528d9f0ad712-kube-api-access-zdx98\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.952624 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.952661 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssnk8\" (UniqueName: 
\"kubernetes.io/projected/78220d1d-a875-4e0d-af2d-8a0668017340-kube-api-access-ssnk8\") pod \"test-operator-controller-manager-7866795846-glgns\" (UID: \"78220d1d-a875-4e0d-af2d-8a0668017340\") " pod="openstack-operators/test-operator-controller-manager-7866795846-glgns" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.952759 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.952864 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n88f\" (UniqueName: \"kubernetes.io/projected/139aa386-db56-4988-8ca6-2ea715cf9630-kube-api-access-4n88f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qlcvr\" (UID: \"139aa386-db56-4988-8ca6-2ea715cf9630\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlcvr" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.953049 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw57m\" (UniqueName: \"kubernetes.io/projected/40a9fa53-e23a-4506-be2f-25a76446db8f-kube-api-access-gw57m\") pod \"watcher-operator-controller-manager-5db88f68c-xq5wn\" (UID: \"40a9fa53-e23a-4506-be2f-25a76446db8f\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.974582 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw57m\" (UniqueName: \"kubernetes.io/projected/40a9fa53-e23a-4506-be2f-25a76446db8f-kube-api-access-gw57m\") pod \"watcher-operator-controller-manager-5db88f68c-xq5wn\" 
(UID: \"40a9fa53-e23a-4506-be2f-25a76446db8f\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn" Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.982522 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssnk8\" (UniqueName: \"kubernetes.io/projected/78220d1d-a875-4e0d-af2d-8a0668017340-kube-api-access-ssnk8\") pod \"test-operator-controller-manager-7866795846-glgns\" (UID: \"78220d1d-a875-4e0d-af2d-8a0668017340\") " pod="openstack-operators/test-operator-controller-manager-7866795846-glgns" Feb 19 18:46:22 crc kubenswrapper[4813]: W0219 18:46:22.986634 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb88eb3b7_8ae3_4e00_8a0f_a5bb7109241d.slice/crio-4ed4ae249ed64ec04c4ebb6bf8310e3c01ffa95cdad0473d4942c638af1a39e6 WatchSource:0}: Error finding container 4ed4ae249ed64ec04c4ebb6bf8310e3c01ffa95cdad0473d4942c638af1a39e6: Status 404 returned error can't find the container with id 4ed4ae249ed64ec04c4ebb6bf8310e3c01ffa95cdad0473d4942c638af1a39e6 Feb 19 18:46:22 crc kubenswrapper[4813]: I0219 18:46:22.991366 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-gtm5k" Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.009398 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.018123 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-glgns" Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.024327 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-tfr46"] Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.058770 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n88f\" (UniqueName: \"kubernetes.io/projected/139aa386-db56-4988-8ca6-2ea715cf9630-kube-api-access-4n88f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qlcvr\" (UID: \"139aa386-db56-4988-8ca6-2ea715cf9630\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlcvr" Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.059017 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdx98\" (UniqueName: \"kubernetes.io/projected/ae13095c-48b2-4236-8fef-528d9f0ad712-kube-api-access-zdx98\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.063015 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.068278 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.068339 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs podName:ae13095c-48b2-4236-8fef-528d9f0ad712 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:23.568325035 +0000 UTC m=+1002.793765576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-qfpxs" (UID: "ae13095c-48b2-4236-8fef-528d9f0ad712") : secret "webhook-server-cert" not found Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.070410 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.070646 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.070711 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs podName:ae13095c-48b2-4236-8fef-528d9f0ad712 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:23.570693518 +0000 UTC m=+1002.796134059 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-qfpxs" (UID: "ae13095c-48b2-4236-8fef-528d9f0ad712") : secret "metrics-server-cert" not found Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.084310 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn" Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.085401 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdx98\" (UniqueName: \"kubernetes.io/projected/ae13095c-48b2-4236-8fef-528d9f0ad712-kube-api-access-zdx98\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.086101 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n88f\" (UniqueName: \"kubernetes.io/projected/139aa386-db56-4988-8ca6-2ea715cf9630-kube-api-access-4n88f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qlcvr\" (UID: \"139aa386-db56-4988-8ca6-2ea715cf9630\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlcvr" Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.158116 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlcvr" Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.172171 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx\" (UID: \"c250b55d-1d6b-4f28-8a0c-833736ac564b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.172325 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.172376 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert podName:c250b55d-1d6b-4f28-8a0c-833736ac564b nodeName:}" failed. No retries permitted until 2026-02-19 18:46:24.172357151 +0000 UTC m=+1003.397797692 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" (UID: "c250b55d-1d6b-4f28-8a0c-833736ac564b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.307686 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tfr46" event={"ID":"1067532d-ed6f-4474-8006-a4b4a6b1c89e","Type":"ContainerStarted","Data":"65b4b6e901c6f12c956bfbbfbae077e2c9af12c63c45deeb16bb48c7bba4429a"} Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.308967 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-fn86j" event={"ID":"b88eb3b7-8ae3-4e00-8a0f-a5bb7109241d","Type":"ContainerStarted","Data":"4ed4ae249ed64ec04c4ebb6bf8310e3c01ffa95cdad0473d4942c638af1a39e6"} Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.399546 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-68vcr"] Feb 19 18:46:23 crc kubenswrapper[4813]: W0219 18:46:23.413176 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod293d6d95_e878_451c_8a2f_3040ca924854.slice/crio-ae3bef8b806a502d59551c803f198fae8a4afb4c74f59edcdf2a8540366752cd WatchSource:0}: Error finding container ae3bef8b806a502d59551c803f198fae8a4afb4c74f59edcdf2a8540366752cd: Status 404 returned error can't find the container with id ae3bef8b806a502d59551c803f198fae8a4afb4c74f59edcdf2a8540366752cd Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.431594 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-h2q2p"] Feb 19 18:46:23 crc kubenswrapper[4813]: W0219 
18:46:23.437140 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb9127dd_a22e_4d2b_91c9_29021a547c96.slice/crio-827fcc476f609b3f9798d4ac4584f593e6bb523640a64d86c23bb3fd2a67c9a9 WatchSource:0}: Error finding container 827fcc476f609b3f9798d4ac4584f593e6bb523640a64d86c23bb3fd2a67c9a9: Status 404 returned error can't find the container with id 827fcc476f609b3f9798d4ac4584f593e6bb523640a64d86c23bb3fd2a67c9a9 Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.439906 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cnb66"] Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.448490 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-8rrl4"] Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.453292 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-j894w"] Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.583706 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.583748 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:23 crc 
kubenswrapper[4813]: E0219 18:46:23.583863 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.583907 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs podName:ae13095c-48b2-4236-8fef-528d9f0ad712 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:24.583894055 +0000 UTC m=+1003.809334596 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-qfpxs" (UID: "ae13095c-48b2-4236-8fef-528d9f0ad712") : secret "metrics-server-cert" not found Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.584254 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.584297 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs podName:ae13095c-48b2-4236-8fef-528d9f0ad712 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:24.584285837 +0000 UTC m=+1003.809726368 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-qfpxs" (UID: "ae13095c-48b2-4236-8fef-528d9f0ad712") : secret "webhook-server-cert" not found Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.668362 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nmgx"] Feb 19 18:46:23 crc kubenswrapper[4813]: W0219 18:46:23.676251 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0794f801_e064_425c_ab9d_c00719fb3f86.slice/crio-4b07ea0d255f942a2ac666a3dac58c8d2b1d5455cbbfae2bf7f6bf5ced4be966 WatchSource:0}: Error finding container 4b07ea0d255f942a2ac666a3dac58c8d2b1d5455cbbfae2bf7f6bf5ced4be966: Status 404 returned error can't find the container with id 4b07ea0d255f942a2ac666a3dac58c8d2b1d5455cbbfae2bf7f6bf5ced4be966 Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.690759 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-5t7t2"] Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.713423 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-c942j"] Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.715897 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-4pf4j"] Feb 19 18:46:23 crc kubenswrapper[4813]: W0219 18:46:23.735692 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda53be97_4a7d_4c8a_bcfa_25e1ea34c7c6.slice/crio-735520b0be44a29e22056f79fdb923783ba06c5ba26e45fc6b9c0dcc33d25996 WatchSource:0}: Error finding container 
735520b0be44a29e22056f79fdb923783ba06c5ba26e45fc6b9c0dcc33d25996: Status 404 returned error can't find the container with id 735520b0be44a29e22056f79fdb923783ba06c5ba26e45fc6b9c0dcc33d25996 Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.761339 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-m8fzh"] Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.766428 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-zqs6j"] Feb 19 18:46:23 crc kubenswrapper[4813]: W0219 18:46:23.769876 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ca28ba3_eb1b_427d_a9b9_f0dcf0cc7dd9.slice/crio-668fec31510f5917e4573869ff303a5a33b09d987ca57c24405b033ff816aeb2 WatchSource:0}: Error finding container 668fec31510f5917e4573869ff303a5a33b09d987ca57c24405b033ff816aeb2: Status 404 returned error can't find the container with id 668fec31510f5917e4573869ff303a5a33b09d987ca57c24405b033ff816aeb2 Feb 19 18:46:23 crc kubenswrapper[4813]: W0219 18:46:23.773842 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod317c820b_5be9_49c1_b291_e0d62982fce8.slice/crio-b107d2478971ff6299ead879601c81592ec685f01e5ddb065fa29cf94a63fd64 WatchSource:0}: Error finding container b107d2478971ff6299ead879601c81592ec685f01e5ddb065fa29cf94a63fd64: Status 404 returned error can't find the container with id b107d2478971ff6299ead879601c81592ec685f01e5ddb065fa29cf94a63fd64 Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.774268 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-scptn"] Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.775424 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jswt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-rgvk6_openstack-operators(317c820b-5be9-49c1-b291-e0d62982fce8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.776808 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6" podUID="317c820b-5be9-49c1-b291-e0d62982fce8" Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.790858 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert\") pod \"infra-operator-controller-manager-79d975b745-g7x92\" (UID: \"fd22b35b-ee39-435e-964c-d545597056b6\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.791065 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 
18:46:23.791105 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert podName:fd22b35b-ee39-435e-964c-d545597056b6 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:25.791091141 +0000 UTC m=+1005.016531682 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert") pod "infra-operator-controller-manager-79d975b745-g7x92" (UID: "fd22b35b-ee39-435e-964c-d545597056b6") : secret "infra-operator-webhook-server-cert" not found Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.811505 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6"] Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.819356 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-gtm5k"] Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.824062 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ssnk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-glgns_openstack-operators(78220d1d-a875-4e0d-af2d-8a0668017340): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.825154 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-glgns" podUID="78220d1d-a875-4e0d-af2d-8a0668017340" Feb 19 18:46:23 crc 
kubenswrapper[4813]: E0219 18:46:23.825409 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6zb49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-scptn_openstack-operators(aa2a1869-c465-4840-8879-e882d8b996b5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.826146 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-glgns"] Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.826580 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-scptn" podUID="aa2a1869-c465-4840-8879-e882d8b996b5" Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.892388 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9"] Feb 19 18:46:23 crc kubenswrapper[4813]: W0219 18:46:23.901116 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52c59bd3_a386_46a0_b528_1a658f9a64a1.slice/crio-76cf97dcdc101e4255fce633bdfda1eb6ea07fdc51f5398326f75a529f9ba948 WatchSource:0}: Error finding container 76cf97dcdc101e4255fce633bdfda1eb6ea07fdc51f5398326f75a529f9ba948: Status 404 returned error can't find the container with id 76cf97dcdc101e4255fce633bdfda1eb6ea07fdc51f5398326f75a529f9ba948 Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.902556 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9h2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-k6dm9_openstack-operators(52c59bd3-a386-46a0-b528-1a658f9a64a1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.903692 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9" podUID="52c59bd3-a386-46a0-b528-1a658f9a64a1" Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.916790 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn"] Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.919171 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4n88f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-qlcvr_openstack-operators(139aa386-db56-4988-8ca6-2ea715cf9630): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 18:46:23 crc kubenswrapper[4813]: W0219 18:46:23.921598 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40a9fa53_e23a_4506_be2f_25a76446db8f.slice/crio-5ee0ac8872cdaf0f8f24633e26b5e3426b684e4ece9c53453f46ff34a6c70b09 WatchSource:0}: Error finding container 5ee0ac8872cdaf0f8f24633e26b5e3426b684e4ece9c53453f46ff34a6c70b09: Status 404 returned error can't find the container with id 5ee0ac8872cdaf0f8f24633e26b5e3426b684e4ece9c53453f46ff34a6c70b09 Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.921634 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlcvr" podUID="139aa386-db56-4988-8ca6-2ea715cf9630" Feb 19 18:46:23 crc kubenswrapper[4813]: I0219 18:46:23.923890 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlcvr"] Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.925244 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 
10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gw57m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-xq5wn_openstack-operators(40a9fa53-e23a-4506-be2f-25a76446db8f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 18:46:23 crc kubenswrapper[4813]: E0219 18:46:23.926370 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn" 
podUID="40a9fa53-e23a-4506-be2f-25a76446db8f" Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.196743 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx\" (UID: \"c250b55d-1d6b-4f28-8a0c-833736ac564b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:46:24 crc kubenswrapper[4813]: E0219 18:46:24.197854 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 18:46:24 crc kubenswrapper[4813]: E0219 18:46:24.198030 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert podName:c250b55d-1d6b-4f28-8a0c-833736ac564b nodeName:}" failed. No retries permitted until 2026-02-19 18:46:26.198008512 +0000 UTC m=+1005.423449053 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" (UID: "c250b55d-1d6b-4f28-8a0c-833736ac564b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.314660 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6" event={"ID":"317c820b-5be9-49c1-b291-e0d62982fce8","Type":"ContainerStarted","Data":"b107d2478971ff6299ead879601c81592ec685f01e5ddb065fa29cf94a63fd64"} Feb 19 18:46:24 crc kubenswrapper[4813]: E0219 18:46:24.317483 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6" podUID="317c820b-5be9-49c1-b291-e0d62982fce8" Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.318944 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nmgx" event={"ID":"0794f801-e064-425c-ab9d-c00719fb3f86","Type":"ContainerStarted","Data":"4b07ea0d255f942a2ac666a3dac58c8d2b1d5455cbbfae2bf7f6bf5ced4be966"} Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.320126 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9" event={"ID":"52c59bd3-a386-46a0-b528-1a658f9a64a1","Type":"ContainerStarted","Data":"76cf97dcdc101e4255fce633bdfda1eb6ea07fdc51f5398326f75a529f9ba948"} Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.322331 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-h2q2p" event={"ID":"2a98a45c-87a7-4ecf-a0e8-2e8743b82960","Type":"ContainerStarted","Data":"d9ea1f1d5db90744bc75e0886081860bf5ffbdd259c61981e0db84d89931c36c"} Feb 19 18:46:24 crc kubenswrapper[4813]: E0219 18:46:24.323087 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9" podUID="52c59bd3-a386-46a0-b528-1a658f9a64a1" Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.323921 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8rrl4" event={"ID":"ada715c6-71df-428d-97ff-72c3abe923a5","Type":"ContainerStarted","Data":"256a93c69ff207b511dbe092685907792e6e61b896256112da41398a9bdc2e18"} Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.329692 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-5t7t2" event={"ID":"496ee8bf-6326-4315-9219-ad7d26760349","Type":"ContainerStarted","Data":"727ba4305e30096557723f8f16c56fc25cab36fd3597ecff6fd12247e5adc03b"} Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.331377 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-m8fzh" event={"ID":"60dc84e0-9a88-4edc-9aac-ffbc1baa4cc8","Type":"ContainerStarted","Data":"f35f35209a22a8b95d360ca40fe8e5eade586ee7c8376c82274b1949fdfbd9aa"} Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.338660 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-c942j" 
event={"ID":"5eac30f8-7b86-4d18-bf6f-e6dbf42a0625","Type":"ContainerStarted","Data":"7a604b868a2238c9e590283ba2b8453287e2f2f2f6f131af62cf217e296b2935"} Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.340385 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4pf4j" event={"ID":"da53be97-4a7d-4c8a-bcfa-25e1ea34c7c6","Type":"ContainerStarted","Data":"735520b0be44a29e22056f79fdb923783ba06c5ba26e45fc6b9c0dcc33d25996"} Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.342042 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-gtm5k" event={"ID":"6d9cb4d7-d624-47bf-ad6f-977968f5c074","Type":"ContainerStarted","Data":"286ae3a259a75178880deac04b69feb95d228ad703058a85140545fe9c2ef657"} Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.344746 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn" event={"ID":"40a9fa53-e23a-4506-be2f-25a76446db8f","Type":"ContainerStarted","Data":"5ee0ac8872cdaf0f8f24633e26b5e3426b684e4ece9c53453f46ff34a6c70b09"} Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.346558 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zqs6j" event={"ID":"1ca28ba3-eb1b-427d-a9b9-f0dcf0cc7dd9","Type":"ContainerStarted","Data":"668fec31510f5917e4573869ff303a5a33b09d987ca57c24405b033ff816aeb2"} Feb 19 18:46:24 crc kubenswrapper[4813]: E0219 18:46:24.346933 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn" 
podUID="40a9fa53-e23a-4506-be2f-25a76446db8f" Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.348397 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-j894w" event={"ID":"de341409-e6ef-4b1a-a359-a8f39fa0bc91","Type":"ContainerStarted","Data":"ff8041a906a8901cf1e497c5a5ecdb8ede9c546f58e3778904eeafb5a07ed9ab"} Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.352511 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlcvr" event={"ID":"139aa386-db56-4988-8ca6-2ea715cf9630","Type":"ContainerStarted","Data":"d588f2799899623be10c0e8a855e9db2553c0654f2449c6daf9600eb09a8fb94"} Feb 19 18:46:24 crc kubenswrapper[4813]: E0219 18:46:24.354672 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlcvr" podUID="139aa386-db56-4988-8ca6-2ea715cf9630" Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.357102 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-68vcr" event={"ID":"293d6d95-e878-451c-8a2f-3040ca924854","Type":"ContainerStarted","Data":"ae3bef8b806a502d59551c803f198fae8a4afb4c74f59edcdf2a8540366752cd"} Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.360745 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-glgns" event={"ID":"78220d1d-a875-4e0d-af2d-8a0668017340","Type":"ContainerStarted","Data":"676eade6457f83a22084577006df593486e9fed43b36118889c5f7e235938195"} Feb 19 18:46:24 crc kubenswrapper[4813]: E0219 18:46:24.362356 4813 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-glgns" podUID="78220d1d-a875-4e0d-af2d-8a0668017340" Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.362806 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-scptn" event={"ID":"aa2a1869-c465-4840-8879-e882d8b996b5","Type":"ContainerStarted","Data":"c9c878630c272577589db37af9fd645660f20538267ef70e056244a2840b53c7"} Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.367300 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cnb66" event={"ID":"bb9127dd-a22e-4d2b-91c9-29021a547c96","Type":"ContainerStarted","Data":"827fcc476f609b3f9798d4ac4584f593e6bb523640a64d86c23bb3fd2a67c9a9"} Feb 19 18:46:24 crc kubenswrapper[4813]: E0219 18:46:24.367336 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-scptn" podUID="aa2a1869-c465-4840-8879-e882d8b996b5" Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.607897 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " 
pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:24 crc kubenswrapper[4813]: I0219 18:46:24.608010 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:24 crc kubenswrapper[4813]: E0219 18:46:24.608121 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 18:46:24 crc kubenswrapper[4813]: E0219 18:46:24.608230 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 18:46:24 crc kubenswrapper[4813]: E0219 18:46:24.608234 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs podName:ae13095c-48b2-4236-8fef-528d9f0ad712 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:26.608196684 +0000 UTC m=+1005.833637225 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-qfpxs" (UID: "ae13095c-48b2-4236-8fef-528d9f0ad712") : secret "webhook-server-cert" not found Feb 19 18:46:24 crc kubenswrapper[4813]: E0219 18:46:24.608791 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs podName:ae13095c-48b2-4236-8fef-528d9f0ad712 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:26.608775042 +0000 UTC m=+1005.834215583 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-qfpxs" (UID: "ae13095c-48b2-4236-8fef-528d9f0ad712") : secret "metrics-server-cert" not found Feb 19 18:46:25 crc kubenswrapper[4813]: E0219 18:46:25.376999 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-glgns" podUID="78220d1d-a875-4e0d-af2d-8a0668017340" Feb 19 18:46:25 crc kubenswrapper[4813]: E0219 18:46:25.376996 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-scptn" podUID="aa2a1869-c465-4840-8879-e882d8b996b5" Feb 19 18:46:25 crc kubenswrapper[4813]: E0219 18:46:25.377320 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn" podUID="40a9fa53-e23a-4506-be2f-25a76446db8f" Feb 19 18:46:25 crc kubenswrapper[4813]: E0219 18:46:25.377348 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9" podUID="52c59bd3-a386-46a0-b528-1a658f9a64a1" Feb 19 18:46:25 crc kubenswrapper[4813]: E0219 18:46:25.378114 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6" podUID="317c820b-5be9-49c1-b291-e0d62982fce8" Feb 19 18:46:25 crc kubenswrapper[4813]: E0219 18:46:25.378222 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlcvr" podUID="139aa386-db56-4988-8ca6-2ea715cf9630" Feb 19 18:46:25 crc kubenswrapper[4813]: I0219 18:46:25.825488 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert\") pod \"infra-operator-controller-manager-79d975b745-g7x92\" (UID: \"fd22b35b-ee39-435e-964c-d545597056b6\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:46:25 crc kubenswrapper[4813]: E0219 18:46:25.825726 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 18:46:25 crc kubenswrapper[4813]: E0219 18:46:25.825808 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert podName:fd22b35b-ee39-435e-964c-d545597056b6 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:29.82579065 +0000 UTC m=+1009.051231181 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert") pod "infra-operator-controller-manager-79d975b745-g7x92" (UID: "fd22b35b-ee39-435e-964c-d545597056b6") : secret "infra-operator-webhook-server-cert" not found Feb 19 18:46:26 crc kubenswrapper[4813]: I0219 18:46:26.231382 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx\" (UID: \"c250b55d-1d6b-4f28-8a0c-833736ac564b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:46:26 crc kubenswrapper[4813]: E0219 18:46:26.231536 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 18:46:26 crc kubenswrapper[4813]: E0219 18:46:26.231612 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert podName:c250b55d-1d6b-4f28-8a0c-833736ac564b nodeName:}" failed. No retries permitted until 2026-02-19 18:46:30.231595387 +0000 UTC m=+1009.457035928 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" (UID: "c250b55d-1d6b-4f28-8a0c-833736ac564b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 18:46:26 crc kubenswrapper[4813]: I0219 18:46:26.636351 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:26 crc kubenswrapper[4813]: I0219 18:46:26.636404 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:26 crc kubenswrapper[4813]: E0219 18:46:26.636500 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 18:46:26 crc kubenswrapper[4813]: E0219 18:46:26.636545 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 18:46:26 crc kubenswrapper[4813]: E0219 18:46:26.636574 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs podName:ae13095c-48b2-4236-8fef-528d9f0ad712 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:30.636555359 +0000 UTC m=+1009.861995900 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-qfpxs" (UID: "ae13095c-48b2-4236-8fef-528d9f0ad712") : secret "metrics-server-cert" not found Feb 19 18:46:26 crc kubenswrapper[4813]: E0219 18:46:26.636624 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs podName:ae13095c-48b2-4236-8fef-528d9f0ad712 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:30.63660777 +0000 UTC m=+1009.862048311 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-qfpxs" (UID: "ae13095c-48b2-4236-8fef-528d9f0ad712") : secret "webhook-server-cert" not found Feb 19 18:46:29 crc kubenswrapper[4813]: I0219 18:46:29.899380 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert\") pod \"infra-operator-controller-manager-79d975b745-g7x92\" (UID: \"fd22b35b-ee39-435e-964c-d545597056b6\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:46:29 crc kubenswrapper[4813]: E0219 18:46:29.899549 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 18:46:29 crc kubenswrapper[4813]: E0219 18:46:29.899792 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert podName:fd22b35b-ee39-435e-964c-d545597056b6 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:37.899778711 +0000 UTC m=+1017.125219252 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert") pod "infra-operator-controller-manager-79d975b745-g7x92" (UID: "fd22b35b-ee39-435e-964c-d545597056b6") : secret "infra-operator-webhook-server-cert" not found Feb 19 18:46:30 crc kubenswrapper[4813]: I0219 18:46:30.309082 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx\" (UID: \"c250b55d-1d6b-4f28-8a0c-833736ac564b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:46:30 crc kubenswrapper[4813]: E0219 18:46:30.309245 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 18:46:30 crc kubenswrapper[4813]: E0219 18:46:30.309308 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert podName:c250b55d-1d6b-4f28-8a0c-833736ac564b nodeName:}" failed. No retries permitted until 2026-02-19 18:46:38.309290783 +0000 UTC m=+1017.534731324 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" (UID: "c250b55d-1d6b-4f28-8a0c-833736ac564b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 18:46:30 crc kubenswrapper[4813]: I0219 18:46:30.330580 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:46:30 crc kubenswrapper[4813]: I0219 18:46:30.330705 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:46:30 crc kubenswrapper[4813]: I0219 18:46:30.330798 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:46:30 crc kubenswrapper[4813]: I0219 18:46:30.332240 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c3855002c151cf8b5b2cf61ec6f6d7135091880565c3fee08603596d3342c68"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 18:46:30 crc kubenswrapper[4813]: I0219 18:46:30.332364 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" 
containerID="cri-o://0c3855002c151cf8b5b2cf61ec6f6d7135091880565c3fee08603596d3342c68" gracePeriod=600 Feb 19 18:46:30 crc kubenswrapper[4813]: I0219 18:46:30.712871 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:30 crc kubenswrapper[4813]: E0219 18:46:30.713159 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 18:46:30 crc kubenswrapper[4813]: E0219 18:46:30.713452 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs podName:ae13095c-48b2-4236-8fef-528d9f0ad712 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:38.713424838 +0000 UTC m=+1017.938865419 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-qfpxs" (UID: "ae13095c-48b2-4236-8fef-528d9f0ad712") : secret "metrics-server-cert" not found Feb 19 18:46:30 crc kubenswrapper[4813]: I0219 18:46:30.713748 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:30 crc kubenswrapper[4813]: E0219 18:46:30.713876 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 18:46:30 crc kubenswrapper[4813]: E0219 18:46:30.718941 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs podName:ae13095c-48b2-4236-8fef-528d9f0ad712 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:38.713915233 +0000 UTC m=+1017.939355784 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-qfpxs" (UID: "ae13095c-48b2-4236-8fef-528d9f0ad712") : secret "webhook-server-cert" not found Feb 19 18:46:31 crc kubenswrapper[4813]: I0219 18:46:31.446047 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="0c3855002c151cf8b5b2cf61ec6f6d7135091880565c3fee08603596d3342c68" exitCode=0 Feb 19 18:46:31 crc kubenswrapper[4813]: I0219 18:46:31.446092 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"0c3855002c151cf8b5b2cf61ec6f6d7135091880565c3fee08603596d3342c68"} Feb 19 18:46:31 crc kubenswrapper[4813]: I0219 18:46:31.446131 4813 scope.go:117] "RemoveContainer" containerID="d2ec5832c886721bb1adcc6f9f4d730f479f32a26f58ced42b5b32c757c1d3c3" Feb 19 18:46:36 crc kubenswrapper[4813]: E0219 18:46:36.180176 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 19 18:46:36 crc kubenswrapper[4813]: E0219 18:46:36.180857 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5t2nj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-tfr46_openstack-operators(1067532d-ed6f-4474-8006-a4b4a6b1c89e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 18:46:36 crc kubenswrapper[4813]: E0219 18:46:36.182016 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tfr46" podUID="1067532d-ed6f-4474-8006-a4b4a6b1c89e" Feb 19 18:46:36 crc kubenswrapper[4813]: E0219 18:46:36.480393 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tfr46" podUID="1067532d-ed6f-4474-8006-a4b4a6b1c89e" Feb 19 18:46:36 crc kubenswrapper[4813]: E0219 18:46:36.658418 4813 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 19 18:46:36 crc kubenswrapper[4813]: E0219 18:46:36.658627 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7lj47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-zqs6j_openstack-operators(1ca28ba3-eb1b-427d-a9b9-f0dcf0cc7dd9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 18:46:36 crc kubenswrapper[4813]: E0219 18:46:36.659915 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zqs6j" podUID="1ca28ba3-eb1b-427d-a9b9-f0dcf0cc7dd9" Feb 19 18:46:37 crc kubenswrapper[4813]: E0219 18:46:37.235224 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 19 18:46:37 crc kubenswrapper[4813]: E0219 18:46:37.235403 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4w6jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-6nmgx_openstack-operators(0794f801-e064-425c-ab9d-c00719fb3f86): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 18:46:37 crc kubenswrapper[4813]: E0219 18:46:37.236599 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nmgx" podUID="0794f801-e064-425c-ab9d-c00719fb3f86" Feb 19 18:46:37 crc kubenswrapper[4813]: E0219 18:46:37.491225 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nmgx" podUID="0794f801-e064-425c-ab9d-c00719fb3f86" Feb 19 18:46:37 crc kubenswrapper[4813]: E0219 18:46:37.494236 4813 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zqs6j" podUID="1ca28ba3-eb1b-427d-a9b9-f0dcf0cc7dd9" Feb 19 18:46:37 crc kubenswrapper[4813]: E0219 18:46:37.896072 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 19 18:46:37 crc kubenswrapper[4813]: E0219 18:46:37.896233 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gft78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-68vcr_openstack-operators(293d6d95-e878-451c-8a2f-3040ca924854): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 18:46:37 crc kubenswrapper[4813]: E0219 18:46:37.897349 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-68vcr" podUID="293d6d95-e878-451c-8a2f-3040ca924854" Feb 19 18:46:37 crc kubenswrapper[4813]: I0219 18:46:37.937021 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert\") pod 
\"infra-operator-controller-manager-79d975b745-g7x92\" (UID: \"fd22b35b-ee39-435e-964c-d545597056b6\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:46:37 crc kubenswrapper[4813]: E0219 18:46:37.937180 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 18:46:37 crc kubenswrapper[4813]: E0219 18:46:37.937246 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert podName:fd22b35b-ee39-435e-964c-d545597056b6 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:53.937227485 +0000 UTC m=+1033.162668026 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert") pod "infra-operator-controller-manager-79d975b745-g7x92" (UID: "fd22b35b-ee39-435e-964c-d545597056b6") : secret "infra-operator-webhook-server-cert" not found Feb 19 18:46:38 crc kubenswrapper[4813]: I0219 18:46:38.341850 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx\" (UID: \"c250b55d-1d6b-4f28-8a0c-833736ac564b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:46:38 crc kubenswrapper[4813]: E0219 18:46:38.342095 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 18:46:38 crc kubenswrapper[4813]: E0219 18:46:38.342205 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert podName:c250b55d-1d6b-4f28-8a0c-833736ac564b nodeName:}" failed. 
No retries permitted until 2026-02-19 18:46:54.342187896 +0000 UTC m=+1033.567628437 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" (UID: "c250b55d-1d6b-4f28-8a0c-833736ac564b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 18:46:38 crc kubenswrapper[4813]: E0219 18:46:38.496460 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-68vcr" podUID="293d6d95-e878-451c-8a2f-3040ca924854" Feb 19 18:46:38 crc kubenswrapper[4813]: E0219 18:46:38.579510 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 19 18:46:38 crc kubenswrapper[4813]: E0219 18:46:38.579783 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-88tmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-h2q2p_openstack-operators(2a98a45c-87a7-4ecf-a0e8-2e8743b82960): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 18:46:38 crc kubenswrapper[4813]: E0219 18:46:38.581820 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-h2q2p" podUID="2a98a45c-87a7-4ecf-a0e8-2e8743b82960" Feb 19 18:46:38 crc kubenswrapper[4813]: I0219 18:46:38.747116 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:38 crc kubenswrapper[4813]: I0219 18:46:38.747291 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:38 crc kubenswrapper[4813]: E0219 18:46:38.747434 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 18:46:38 crc kubenswrapper[4813]: E0219 18:46:38.747500 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs podName:ae13095c-48b2-4236-8fef-528d9f0ad712 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:54.747480848 +0000 UTC m=+1033.972921389 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-qfpxs" (UID: "ae13095c-48b2-4236-8fef-528d9f0ad712") : secret "webhook-server-cert" not found Feb 19 18:46:38 crc kubenswrapper[4813]: E0219 18:46:38.747907 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 18:46:38 crc kubenswrapper[4813]: E0219 18:46:38.747948 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs podName:ae13095c-48b2-4236-8fef-528d9f0ad712 nodeName:}" failed. No retries permitted until 2026-02-19 18:46:54.747937512 +0000 UTC m=+1033.973378053 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-qfpxs" (UID: "ae13095c-48b2-4236-8fef-528d9f0ad712") : secret "metrics-server-cert" not found Feb 19 18:46:39 crc kubenswrapper[4813]: E0219 18:46:39.308712 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 19 18:46:39 crc kubenswrapper[4813]: E0219 18:46:39.308873 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dffls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-cnb66_openstack-operators(bb9127dd-a22e-4d2b-91c9-29021a547c96): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 18:46:39 crc kubenswrapper[4813]: E0219 18:46:39.310038 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cnb66" podUID="bb9127dd-a22e-4d2b-91c9-29021a547c96" Feb 19 18:46:39 crc kubenswrapper[4813]: E0219 18:46:39.534068 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cnb66" podUID="bb9127dd-a22e-4d2b-91c9-29021a547c96" Feb 19 18:46:39 crc kubenswrapper[4813]: E0219 18:46:39.534279 4813 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-h2q2p" podUID="2a98a45c-87a7-4ecf-a0e8-2e8743b82960" Feb 19 18:46:40 crc kubenswrapper[4813]: E0219 18:46:40.120662 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 19 18:46:40 crc kubenswrapper[4813]: E0219 18:46:40.120813 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6w75p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-c942j_openstack-operators(5eac30f8-7b86-4d18-bf6f-e6dbf42a0625): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 18:46:40 crc kubenswrapper[4813]: E0219 18:46:40.122144 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-c942j" podUID="5eac30f8-7b86-4d18-bf6f-e6dbf42a0625" Feb 19 18:46:40 crc kubenswrapper[4813]: E0219 18:46:40.535655 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-c942j" podUID="5eac30f8-7b86-4d18-bf6f-e6dbf42a0625" Feb 19 18:46:40 crc kubenswrapper[4813]: E0219 18:46:40.694569 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 19 18:46:40 crc kubenswrapper[4813]: E0219 18:46:40.694792 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-28dl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-j894w_openstack-operators(de341409-e6ef-4b1a-a359-a8f39fa0bc91): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 18:46:40 crc kubenswrapper[4813]: E0219 18:46:40.697236 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-j894w" podUID="de341409-e6ef-4b1a-a359-a8f39fa0bc91" Feb 19 18:46:41 crc kubenswrapper[4813]: E0219 18:46:41.544378 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-j894w" podUID="de341409-e6ef-4b1a-a359-a8f39fa0bc91" Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 18:46:43.559862 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-fn86j" event={"ID":"b88eb3b7-8ae3-4e00-8a0f-a5bb7109241d","Type":"ContainerStarted","Data":"b53290864b3855013a81dc32199c47197e54fe4d2588b9a6f2229da5f50021ad"} Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 18:46:43.560566 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-fn86j" Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 18:46:43.566491 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4pf4j" event={"ID":"da53be97-4a7d-4c8a-bcfa-25e1ea34c7c6","Type":"ContainerStarted","Data":"e23ef362a594364f64ab699ff1e868f3a024599633e547c99b43be31fcba0086"} Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 18:46:43.567154 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4pf4j" Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 18:46:43.568537 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6" event={"ID":"317c820b-5be9-49c1-b291-e0d62982fce8","Type":"ContainerStarted","Data":"2019a5c49cae5daa786f66788f659b4b91a532aa5a3df012dbb869b7b6e465c3"} Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 18:46:43.568864 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6" Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 
18:46:43.574185 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"94eee4c6af3220d0f9daafe2c95225cd4a99afe2b2f03d1a34bc8f27e9e13151"} Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 18:46:43.584140 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-fn86j" podStartSLOduration=4.9147365910000005 podStartE2EDuration="22.584121383s" podCreationTimestamp="2026-02-19 18:46:21 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.009141121 +0000 UTC m=+1002.234581662" lastFinishedPulling="2026-02-19 18:46:40.678525903 +0000 UTC m=+1019.903966454" observedRunningTime="2026-02-19 18:46:43.581399659 +0000 UTC m=+1022.806840200" watchObservedRunningTime="2026-02-19 18:46:43.584121383 +0000 UTC m=+1022.809561934" Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 18:46:43.584311 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-m8fzh" event={"ID":"60dc84e0-9a88-4edc-9aac-ffbc1baa4cc8","Type":"ContainerStarted","Data":"cc0b113b1a157d2dd3a528ba7cb5a3278a483e5796ef9e03fc7b0fb6ca90665a"} Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 18:46:43.585311 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-m8fzh" Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 18:46:43.591384 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8rrl4" event={"ID":"ada715c6-71df-428d-97ff-72c3abe923a5","Type":"ContainerStarted","Data":"c26d5d816ea6f0f2c28d1005f5f761c1363e091059012863b3840cc32dbe52b3"} Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 18:46:43.591521 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8rrl4" Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 18:46:43.629307 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6" podStartSLOduration=2.219227208 podStartE2EDuration="21.629289506s" podCreationTimestamp="2026-02-19 18:46:22 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.775315485 +0000 UTC m=+1003.000756026" lastFinishedPulling="2026-02-19 18:46:43.185377783 +0000 UTC m=+1022.410818324" observedRunningTime="2026-02-19 18:46:43.622659031 +0000 UTC m=+1022.848099572" watchObservedRunningTime="2026-02-19 18:46:43.629289506 +0000 UTC m=+1022.854730047" Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 18:46:43.648080 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4pf4j" podStartSLOduration=5.169493244 podStartE2EDuration="22.648061464s" podCreationTimestamp="2026-02-19 18:46:21 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.742328738 +0000 UTC m=+1002.967769279" lastFinishedPulling="2026-02-19 18:46:41.220896958 +0000 UTC m=+1020.446337499" observedRunningTime="2026-02-19 18:46:43.639779419 +0000 UTC m=+1022.865219970" watchObservedRunningTime="2026-02-19 18:46:43.648061464 +0000 UTC m=+1022.873502005" Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 18:46:43.662010 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-m8fzh" podStartSLOduration=5.2153853980000004 podStartE2EDuration="22.661994603s" podCreationTimestamp="2026-02-19 18:46:21 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.773930832 +0000 UTC m=+1002.999371373" lastFinishedPulling="2026-02-19 18:46:41.220540037 +0000 UTC m=+1020.445980578" observedRunningTime="2026-02-19 18:46:43.654352298 +0000 UTC m=+1022.879792839" 
watchObservedRunningTime="2026-02-19 18:46:43.661994603 +0000 UTC m=+1022.887435144" Feb 19 18:46:43 crc kubenswrapper[4813]: I0219 18:46:43.676033 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8rrl4" podStartSLOduration=4.935821151 podStartE2EDuration="22.676017895s" podCreationTimestamp="2026-02-19 18:46:21 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.480266641 +0000 UTC m=+1002.705707192" lastFinishedPulling="2026-02-19 18:46:41.220463395 +0000 UTC m=+1020.445903936" observedRunningTime="2026-02-19 18:46:43.67261832 +0000 UTC m=+1022.898058861" watchObservedRunningTime="2026-02-19 18:46:43.676017895 +0000 UTC m=+1022.901458436" Feb 19 18:46:44 crc kubenswrapper[4813]: I0219 18:46:44.598917 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-scptn" event={"ID":"aa2a1869-c465-4840-8879-e882d8b996b5","Type":"ContainerStarted","Data":"6615b54bfe4f96b20eb155cd321ce80e60c1e57c86144cddbf2df4f31741f99f"} Feb 19 18:46:44 crc kubenswrapper[4813]: I0219 18:46:44.599684 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-scptn" Feb 19 18:46:44 crc kubenswrapper[4813]: I0219 18:46:44.601139 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-5t7t2" event={"ID":"496ee8bf-6326-4315-9219-ad7d26760349","Type":"ContainerStarted","Data":"bd5c74ebc068e07a115d2521b9dd9277d7c51f1cb52905ad91ecd84d89bbcec1"} Feb 19 18:46:44 crc kubenswrapper[4813]: I0219 18:46:44.601278 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-5t7t2" Feb 19 18:46:44 crc kubenswrapper[4813]: I0219 18:46:44.613338 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-scptn" podStartSLOduration=4.239387677 podStartE2EDuration="23.613323543s" podCreationTimestamp="2026-02-19 18:46:21 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.825230653 +0000 UTC m=+1003.050671194" lastFinishedPulling="2026-02-19 18:46:43.199166519 +0000 UTC m=+1022.424607060" observedRunningTime="2026-02-19 18:46:44.611516598 +0000 UTC m=+1023.836957139" watchObservedRunningTime="2026-02-19 18:46:44.613323543 +0000 UTC m=+1023.838764094" Feb 19 18:46:44 crc kubenswrapper[4813]: I0219 18:46:44.614577 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-glgns" event={"ID":"78220d1d-a875-4e0d-af2d-8a0668017340","Type":"ContainerStarted","Data":"5d24580007afa72065fc3b5d94f738cfcefeb34ddc022627b99c288bf411d32f"} Feb 19 18:46:44 crc kubenswrapper[4813]: I0219 18:46:44.614795 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-glgns" Feb 19 18:46:44 crc kubenswrapper[4813]: I0219 18:46:44.616721 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-gtm5k" event={"ID":"6d9cb4d7-d624-47bf-ad6f-977968f5c074","Type":"ContainerStarted","Data":"ce5b97d82654d92bcb139d44e97717f9ed463ff71283c54113e4167275841bb5"} Feb 19 18:46:44 crc kubenswrapper[4813]: I0219 18:46:44.616869 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-gtm5k" Feb 19 18:46:44 crc kubenswrapper[4813]: I0219 18:46:44.619131 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn" event={"ID":"40a9fa53-e23a-4506-be2f-25a76446db8f","Type":"ContainerStarted","Data":"f4f9ddab4981456b3484efbdbe6d1dba9ae6bb89b1ae9287df8f52488e6416a0"} Feb 19 
18:46:44 crc kubenswrapper[4813]: I0219 18:46:44.634880 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-5t7t2" podStartSLOduration=6.11386329 podStartE2EDuration="23.634858678s" podCreationTimestamp="2026-02-19 18:46:21 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.699499278 +0000 UTC m=+1002.924939819" lastFinishedPulling="2026-02-19 18:46:41.220494666 +0000 UTC m=+1020.445935207" observedRunningTime="2026-02-19 18:46:44.629454271 +0000 UTC m=+1023.854894842" watchObservedRunningTime="2026-02-19 18:46:44.634858678 +0000 UTC m=+1023.860299219" Feb 19 18:46:44 crc kubenswrapper[4813]: I0219 18:46:44.653197 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-glgns" podStartSLOduration=3.290530074 podStartE2EDuration="22.653180542s" podCreationTimestamp="2026-02-19 18:46:22 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.823909272 +0000 UTC m=+1003.049349813" lastFinishedPulling="2026-02-19 18:46:43.18655974 +0000 UTC m=+1022.412000281" observedRunningTime="2026-02-19 18:46:44.648048634 +0000 UTC m=+1023.873489175" watchObservedRunningTime="2026-02-19 18:46:44.653180542 +0000 UTC m=+1023.878621083" Feb 19 18:46:44 crc kubenswrapper[4813]: I0219 18:46:44.667527 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn" podStartSLOduration=3.392312911 podStartE2EDuration="22.667510864s" podCreationTimestamp="2026-02-19 18:46:22 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.925139432 +0000 UTC m=+1003.150579973" lastFinishedPulling="2026-02-19 18:46:43.200337385 +0000 UTC m=+1022.425777926" observedRunningTime="2026-02-19 18:46:44.667178013 +0000 UTC m=+1023.892618554" watchObservedRunningTime="2026-02-19 18:46:44.667510864 +0000 UTC m=+1023.892951405" Feb 19 18:46:44 crc 
kubenswrapper[4813]: I0219 18:46:44.678923 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-gtm5k" podStartSLOduration=5.280756583 podStartE2EDuration="22.678909255s" podCreationTimestamp="2026-02-19 18:46:22 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.822311633 +0000 UTC m=+1003.047752174" lastFinishedPulling="2026-02-19 18:46:41.220464305 +0000 UTC m=+1020.445904846" observedRunningTime="2026-02-19 18:46:44.678194613 +0000 UTC m=+1023.903635154" watchObservedRunningTime="2026-02-19 18:46:44.678909255 +0000 UTC m=+1023.904349796" Feb 19 18:46:46 crc kubenswrapper[4813]: I0219 18:46:46.637658 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9" event={"ID":"52c59bd3-a386-46a0-b528-1a658f9a64a1","Type":"ContainerStarted","Data":"4d72f739a41c88449b190b09c7936bf1766863a5651788db56214dc9ac98f5c8"} Feb 19 18:46:46 crc kubenswrapper[4813]: I0219 18:46:46.638321 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9" Feb 19 18:46:46 crc kubenswrapper[4813]: I0219 18:46:46.641801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlcvr" event={"ID":"139aa386-db56-4988-8ca6-2ea715cf9630","Type":"ContainerStarted","Data":"1d2b292c993eac1b164d30a2e2d66176a5ded56f68476a4f7b24aac3f28d004a"} Feb 19 18:46:46 crc kubenswrapper[4813]: I0219 18:46:46.672354 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9" podStartSLOduration=2.196521886 podStartE2EDuration="24.672327062s" podCreationTimestamp="2026-02-19 18:46:22 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.902426322 +0000 UTC m=+1003.127866863" lastFinishedPulling="2026-02-19 
18:46:46.378231498 +0000 UTC m=+1025.603672039" observedRunningTime="2026-02-19 18:46:46.657470144 +0000 UTC m=+1025.882910685" watchObservedRunningTime="2026-02-19 18:46:46.672327062 +0000 UTC m=+1025.897767643" Feb 19 18:46:46 crc kubenswrapper[4813]: I0219 18:46:46.681358 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlcvr" podStartSLOduration=2.236043554 podStartE2EDuration="24.68133049s" podCreationTimestamp="2026-02-19 18:46:22 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.919054764 +0000 UTC m=+1003.144495295" lastFinishedPulling="2026-02-19 18:46:46.36434169 +0000 UTC m=+1025.589782231" observedRunningTime="2026-02-19 18:46:46.681300989 +0000 UTC m=+1025.906741540" watchObservedRunningTime="2026-02-19 18:46:46.68133049 +0000 UTC m=+1025.906771071" Feb 19 18:46:49 crc kubenswrapper[4813]: I0219 18:46:49.681537 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zqs6j" event={"ID":"1ca28ba3-eb1b-427d-a9b9-f0dcf0cc7dd9","Type":"ContainerStarted","Data":"45338c1381b2b6edb9a6677890ea26ea892964536f7f57e079588db57a789c2b"} Feb 19 18:46:49 crc kubenswrapper[4813]: I0219 18:46:49.682301 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zqs6j" Feb 19 18:46:49 crc kubenswrapper[4813]: I0219 18:46:49.714737 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zqs6j" podStartSLOduration=3.4651900270000002 podStartE2EDuration="28.714714559s" podCreationTimestamp="2026-02-19 18:46:21 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.775028496 +0000 UTC m=+1003.000469037" lastFinishedPulling="2026-02-19 18:46:49.024552988 +0000 UTC m=+1028.249993569" observedRunningTime="2026-02-19 18:46:49.708742885 +0000 UTC 
m=+1028.934183466" watchObservedRunningTime="2026-02-19 18:46:49.714714559 +0000 UTC m=+1028.940155130" Feb 19 18:46:51 crc kubenswrapper[4813]: I0219 18:46:51.703339 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tfr46" event={"ID":"1067532d-ed6f-4474-8006-a4b4a6b1c89e","Type":"ContainerStarted","Data":"ec87a62a61d4be3495356a4aa104ba03b1d51b83a0cf9d4b28ab325695f4e68e"} Feb 19 18:46:51 crc kubenswrapper[4813]: I0219 18:46:51.704130 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tfr46" Feb 19 18:46:51 crc kubenswrapper[4813]: I0219 18:46:51.709667 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-68vcr" event={"ID":"293d6d95-e878-451c-8a2f-3040ca924854","Type":"ContainerStarted","Data":"142fe9914c3d0c79e7b4c437abbed5d49bde64b509b766ed842de0df9b3fc1e6"} Feb 19 18:46:51 crc kubenswrapper[4813]: I0219 18:46:51.710006 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-68vcr" Feb 19 18:46:51 crc kubenswrapper[4813]: I0219 18:46:51.759315 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tfr46" podStartSLOduration=2.887471512 podStartE2EDuration="30.759291703s" podCreationTimestamp="2026-02-19 18:46:21 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.110854346 +0000 UTC m=+1002.336294887" lastFinishedPulling="2026-02-19 18:46:50.982674537 +0000 UTC m=+1030.208115078" observedRunningTime="2026-02-19 18:46:51.730760604 +0000 UTC m=+1030.956201155" watchObservedRunningTime="2026-02-19 18:46:51.759291703 +0000 UTC m=+1030.984732254" Feb 19 18:46:51 crc kubenswrapper[4813]: I0219 18:46:51.762693 4813 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-68vcr" podStartSLOduration=3.201121118 podStartE2EDuration="30.762680048s" podCreationTimestamp="2026-02-19 18:46:21 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.420099827 +0000 UTC m=+1002.645540368" lastFinishedPulling="2026-02-19 18:46:50.981658717 +0000 UTC m=+1030.207099298" observedRunningTime="2026-02-19 18:46:51.755210027 +0000 UTC m=+1030.980650578" watchObservedRunningTime="2026-02-19 18:46:51.762680048 +0000 UTC m=+1030.988120609" Feb 19 18:46:52 crc kubenswrapper[4813]: I0219 18:46:52.104513 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-fn86j" Feb 19 18:46:52 crc kubenswrapper[4813]: I0219 18:46:52.216938 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-5t7t2" Feb 19 18:46:52 crc kubenswrapper[4813]: I0219 18:46:52.320270 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8rrl4" Feb 19 18:46:52 crc kubenswrapper[4813]: I0219 18:46:52.501798 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4pf4j" Feb 19 18:46:52 crc kubenswrapper[4813]: I0219 18:46:52.569609 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-scptn" Feb 19 18:46:52 crc kubenswrapper[4813]: I0219 18:46:52.668927 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-m8fzh" Feb 19 18:46:52 crc kubenswrapper[4813]: I0219 18:46:52.717463 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cnb66" event={"ID":"bb9127dd-a22e-4d2b-91c9-29021a547c96","Type":"ContainerStarted","Data":"b4be9b5d5c962a9ebfce82d6efe98500179ac1b87c62251e54995002ebf46364"} Feb 19 18:46:52 crc kubenswrapper[4813]: I0219 18:46:52.717651 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cnb66" Feb 19 18:46:52 crc kubenswrapper[4813]: I0219 18:46:52.718645 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-h2q2p" event={"ID":"2a98a45c-87a7-4ecf-a0e8-2e8743b82960","Type":"ContainerStarted","Data":"b7fb9b1b19e138eac372e97c7db4f8d27bd300a638d4c7bd00ad63b830f1b422"} Feb 19 18:46:52 crc kubenswrapper[4813]: I0219 18:46:52.719003 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-h2q2p" Feb 19 18:46:52 crc kubenswrapper[4813]: I0219 18:46:52.740189 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cnb66" podStartSLOduration=3.162078735 podStartE2EDuration="31.740170564s" podCreationTimestamp="2026-02-19 18:46:21 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.438471113 +0000 UTC m=+1002.663911654" lastFinishedPulling="2026-02-19 18:46:52.016562942 +0000 UTC m=+1031.242003483" observedRunningTime="2026-02-19 18:46:52.733628913 +0000 UTC m=+1031.959069454" watchObservedRunningTime="2026-02-19 18:46:52.740170564 +0000 UTC m=+1031.965611105" Feb 19 18:46:52 crc kubenswrapper[4813]: I0219 18:46:52.753311 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-h2q2p" podStartSLOduration=3.250978415 podStartE2EDuration="31.753291349s" podCreationTimestamp="2026-02-19 18:46:21 +0000 
UTC" firstStartedPulling="2026-02-19 18:46:23.432088806 +0000 UTC m=+1002.657529347" lastFinishedPulling="2026-02-19 18:46:51.93440172 +0000 UTC m=+1031.159842281" observedRunningTime="2026-02-19 18:46:52.751118002 +0000 UTC m=+1031.976558553" watchObservedRunningTime="2026-02-19 18:46:52.753291349 +0000 UTC m=+1031.978731900" Feb 19 18:46:52 crc kubenswrapper[4813]: I0219 18:46:52.791355 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rgvk6" Feb 19 18:46:52 crc kubenswrapper[4813]: I0219 18:46:52.885321 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-k6dm9" Feb 19 18:46:52 crc kubenswrapper[4813]: I0219 18:46:52.993689 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-gtm5k" Feb 19 18:46:53 crc kubenswrapper[4813]: I0219 18:46:53.021986 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-glgns" Feb 19 18:46:53 crc kubenswrapper[4813]: I0219 18:46:53.084694 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn" Feb 19 18:46:53 crc kubenswrapper[4813]: I0219 18:46:53.086355 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xq5wn" Feb 19 18:46:53 crc kubenswrapper[4813]: I0219 18:46:53.728520 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nmgx" event={"ID":"0794f801-e064-425c-ab9d-c00719fb3f86","Type":"ContainerStarted","Data":"80502f1d966d71a27aed3f1977469485ae592436d8b8bb3c72c0184ef8b189d2"} Feb 19 18:46:53 crc 
kubenswrapper[4813]: I0219 18:46:53.729347 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nmgx" Feb 19 18:46:53 crc kubenswrapper[4813]: I0219 18:46:53.760883 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nmgx" podStartSLOduration=3.466279901 podStartE2EDuration="32.760862172s" podCreationTimestamp="2026-02-19 18:46:21 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.679522513 +0000 UTC m=+1002.904963054" lastFinishedPulling="2026-02-19 18:46:52.974104784 +0000 UTC m=+1032.199545325" observedRunningTime="2026-02-19 18:46:53.753077582 +0000 UTC m=+1032.978518183" watchObservedRunningTime="2026-02-19 18:46:53.760862172 +0000 UTC m=+1032.986302723" Feb 19 18:46:53 crc kubenswrapper[4813]: I0219 18:46:53.992729 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert\") pod \"infra-operator-controller-manager-79d975b745-g7x92\" (UID: \"fd22b35b-ee39-435e-964c-d545597056b6\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.002196 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd22b35b-ee39-435e-964c-d545597056b6-cert\") pod \"infra-operator-controller-manager-79d975b745-g7x92\" (UID: \"fd22b35b-ee39-435e-964c-d545597056b6\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.098668 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-l49lr" Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.106564 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.399589 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx\" (UID: \"c250b55d-1d6b-4f28-8a0c-833736ac564b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.405674 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c250b55d-1d6b-4f28-8a0c-833736ac564b-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx\" (UID: \"c250b55d-1d6b-4f28-8a0c-833736ac564b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.514657 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-95vt7" Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.523304 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.582735 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-g7x92"] Feb 19 18:46:54 crc kubenswrapper[4813]: W0219 18:46:54.594437 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd22b35b_ee39_435e_964c_d545597056b6.slice/crio-d51159e03f956d4fe0d6cb22c585a0d9e9e6f70d8b1f358b7ea3a5f1ef2460be WatchSource:0}: Error finding container d51159e03f956d4fe0d6cb22c585a0d9e9e6f70d8b1f358b7ea3a5f1ef2460be: Status 404 returned error can't find the container with id d51159e03f956d4fe0d6cb22c585a0d9e9e6f70d8b1f358b7ea3a5f1ef2460be Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.735105 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" event={"ID":"fd22b35b-ee39-435e-964c-d545597056b6","Type":"ContainerStarted","Data":"d51159e03f956d4fe0d6cb22c585a0d9e9e6f70d8b1f358b7ea3a5f1ef2460be"} Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.736111 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-c942j" event={"ID":"5eac30f8-7b86-4d18-bf6f-e6dbf42a0625","Type":"ContainerStarted","Data":"226a5508f90167bb24f0a75d88c8e555bc27016b84d393a9a201e522f0c5cdee"} Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.736447 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-c942j" Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.772265 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-c942j" 
podStartSLOduration=3.493618614 podStartE2EDuration="33.772242503s" podCreationTimestamp="2026-02-19 18:46:21 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.716575685 +0000 UTC m=+1002.942016226" lastFinishedPulling="2026-02-19 18:46:53.995199544 +0000 UTC m=+1033.220640115" observedRunningTime="2026-02-19 18:46:54.762217523 +0000 UTC m=+1033.987658074" watchObservedRunningTime="2026-02-19 18:46:54.772242503 +0000 UTC m=+1033.997683114" Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.805307 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.805419 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.809226 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.809431 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/ae13095c-48b2-4236-8fef-528d9f0ad712-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-qfpxs\" (UID: \"ae13095c-48b2-4236-8fef-528d9f0ad712\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.937940 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-7dk2k" Feb 19 18:46:54 crc kubenswrapper[4813]: I0219 18:46:54.946730 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:55 crc kubenswrapper[4813]: I0219 18:46:55.049810 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx"] Feb 19 18:46:55 crc kubenswrapper[4813]: W0219 18:46:55.059389 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc250b55d_1d6b_4f28_8a0c_833736ac564b.slice/crio-2c564e87436bff84746e1ebab540a10fa981ac7767e547ac9da58dcee237d9d4 WatchSource:0}: Error finding container 2c564e87436bff84746e1ebab540a10fa981ac7767e547ac9da58dcee237d9d4: Status 404 returned error can't find the container with id 2c564e87436bff84746e1ebab540a10fa981ac7767e547ac9da58dcee237d9d4 Feb 19 18:46:55 crc kubenswrapper[4813]: I0219 18:46:55.281582 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs"] Feb 19 18:46:55 crc kubenswrapper[4813]: W0219 18:46:55.292881 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae13095c_48b2_4236_8fef_528d9f0ad712.slice/crio-8278b1da2d50f44b6928d597df126360bfaeb26896abe81db54112ae5a7c165e WatchSource:0}: Error finding container 
8278b1da2d50f44b6928d597df126360bfaeb26896abe81db54112ae5a7c165e: Status 404 returned error can't find the container with id 8278b1da2d50f44b6928d597df126360bfaeb26896abe81db54112ae5a7c165e Feb 19 18:46:55 crc kubenswrapper[4813]: I0219 18:46:55.744871 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" event={"ID":"c250b55d-1d6b-4f28-8a0c-833736ac564b","Type":"ContainerStarted","Data":"2c564e87436bff84746e1ebab540a10fa981ac7767e547ac9da58dcee237d9d4"} Feb 19 18:46:55 crc kubenswrapper[4813]: I0219 18:46:55.746975 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" event={"ID":"ae13095c-48b2-4236-8fef-528d9f0ad712","Type":"ContainerStarted","Data":"cd0462a58e508001f61520ea1b02a265c520483daff20fbd71a967dfd6282720"} Feb 19 18:46:55 crc kubenswrapper[4813]: I0219 18:46:55.747022 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" event={"ID":"ae13095c-48b2-4236-8fef-528d9f0ad712","Type":"ContainerStarted","Data":"8278b1da2d50f44b6928d597df126360bfaeb26896abe81db54112ae5a7c165e"} Feb 19 18:46:55 crc kubenswrapper[4813]: I0219 18:46:55.747152 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:46:55 crc kubenswrapper[4813]: I0219 18:46:55.788202 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" podStartSLOduration=33.788175984 podStartE2EDuration="33.788175984s" podCreationTimestamp="2026-02-19 18:46:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:46:55.780350343 +0000 UTC m=+1035.005790904" 
watchObservedRunningTime="2026-02-19 18:46:55.788175984 +0000 UTC m=+1035.013616555" Feb 19 18:46:59 crc kubenswrapper[4813]: I0219 18:46:59.791618 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" event={"ID":"fd22b35b-ee39-435e-964c-d545597056b6","Type":"ContainerStarted","Data":"b3cfa89b03824ba8401fe52a1c9eb2b390283eb49435edd502fca1adfbdd4f98"} Feb 19 18:46:59 crc kubenswrapper[4813]: I0219 18:46:59.792230 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:46:59 crc kubenswrapper[4813]: I0219 18:46:59.794256 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-j894w" event={"ID":"de341409-e6ef-4b1a-a359-a8f39fa0bc91","Type":"ContainerStarted","Data":"23d8fdb9bb66ab5ed34dc2d7b5c4c24bb9bb72291a68c9681830dd9aa9822cb8"} Feb 19 18:46:59 crc kubenswrapper[4813]: I0219 18:46:59.794782 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-j894w" Feb 19 18:46:59 crc kubenswrapper[4813]: I0219 18:46:59.796460 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" event={"ID":"c250b55d-1d6b-4f28-8a0c-833736ac564b","Type":"ContainerStarted","Data":"72a5de942b81d15e89829b3f5eb5bac9d9f0f1242cec4aad37780d9069006e7c"} Feb 19 18:46:59 crc kubenswrapper[4813]: I0219 18:46:59.796660 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:46:59 crc kubenswrapper[4813]: I0219 18:46:59.823510 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" 
podStartSLOduration=34.861530445 podStartE2EDuration="38.82347977s" podCreationTimestamp="2026-02-19 18:46:21 +0000 UTC" firstStartedPulling="2026-02-19 18:46:54.597658472 +0000 UTC m=+1033.823099063" lastFinishedPulling="2026-02-19 18:46:58.559607807 +0000 UTC m=+1037.785048388" observedRunningTime="2026-02-19 18:46:59.821837119 +0000 UTC m=+1039.047277690" watchObservedRunningTime="2026-02-19 18:46:59.82347977 +0000 UTC m=+1039.048920351" Feb 19 18:46:59 crc kubenswrapper[4813]: I0219 18:46:59.859245 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-j894w" podStartSLOduration=3.668595532 podStartE2EDuration="38.859223684s" podCreationTimestamp="2026-02-19 18:46:21 +0000 UTC" firstStartedPulling="2026-02-19 18:46:23.47957594 +0000 UTC m=+1002.705016481" lastFinishedPulling="2026-02-19 18:46:58.670204072 +0000 UTC m=+1037.895644633" observedRunningTime="2026-02-19 18:46:59.850998388 +0000 UTC m=+1039.076439019" watchObservedRunningTime="2026-02-19 18:46:59.859223684 +0000 UTC m=+1039.084664255" Feb 19 18:46:59 crc kubenswrapper[4813]: I0219 18:46:59.897763 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" podStartSLOduration=35.406306538 podStartE2EDuration="38.897736254s" podCreationTimestamp="2026-02-19 18:46:21 +0000 UTC" firstStartedPulling="2026-02-19 18:46:55.074653873 +0000 UTC m=+1034.300094454" lastFinishedPulling="2026-02-19 18:46:58.566083599 +0000 UTC m=+1037.791524170" observedRunningTime="2026-02-19 18:46:59.891677125 +0000 UTC m=+1039.117117706" watchObservedRunningTime="2026-02-19 18:46:59.897736254 +0000 UTC m=+1039.123176845" Feb 19 18:47:02 crc kubenswrapper[4813]: I0219 18:47:02.139141 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-tfr46" Feb 
19 18:47:02 crc kubenswrapper[4813]: I0219 18:47:02.151001 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-h2q2p" Feb 19 18:47:02 crc kubenswrapper[4813]: I0219 18:47:02.226531 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-68vcr" Feb 19 18:47:02 crc kubenswrapper[4813]: I0219 18:47:02.245046 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cnb66" Feb 19 18:47:02 crc kubenswrapper[4813]: I0219 18:47:02.551612 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-c942j" Feb 19 18:47:02 crc kubenswrapper[4813]: I0219 18:47:02.552209 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nmgx" Feb 19 18:47:02 crc kubenswrapper[4813]: I0219 18:47:02.594306 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-zqs6j" Feb 19 18:47:04 crc kubenswrapper[4813]: I0219 18:47:04.117811 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-g7x92" Feb 19 18:47:04 crc kubenswrapper[4813]: I0219 18:47:04.531262 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx" Feb 19 18:47:04 crc kubenswrapper[4813]: I0219 18:47:04.957683 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-qfpxs" Feb 19 18:47:12 crc kubenswrapper[4813]: I0219 
18:47:12.366786 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-j894w" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.799268 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-57z69"] Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.800797 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-57z69" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.805172 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.805229 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.805464 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.810647 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-pm6sl" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.816651 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-57z69"] Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.875199 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-h8tkx"] Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.876451 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.878150 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.887623 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-h8tkx"] Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.894735 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-dns-svc\") pod \"dnsmasq-dns-6fcf94d689-h8tkx\" (UID: \"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7\") " pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.894787 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0b113de-4a90-4929-8365-53c93c5aaee7-config\") pod \"dnsmasq-dns-855cbc58c5-57z69\" (UID: \"b0b113de-4a90-4929-8365-53c93c5aaee7\") " pod="openstack/dnsmasq-dns-855cbc58c5-57z69" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.894808 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9wjz\" (UniqueName: \"kubernetes.io/projected/b0b113de-4a90-4929-8365-53c93c5aaee7-kube-api-access-c9wjz\") pod \"dnsmasq-dns-855cbc58c5-57z69\" (UID: \"b0b113de-4a90-4929-8365-53c93c5aaee7\") " pod="openstack/dnsmasq-dns-855cbc58c5-57z69" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.894867 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-config\") pod \"dnsmasq-dns-6fcf94d689-h8tkx\" (UID: \"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7\") " pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx" Feb 19 
18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.894898 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6l62\" (UniqueName: \"kubernetes.io/projected/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-kube-api-access-n6l62\") pod \"dnsmasq-dns-6fcf94d689-h8tkx\" (UID: \"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7\") " pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.995198 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-dns-svc\") pod \"dnsmasq-dns-6fcf94d689-h8tkx\" (UID: \"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7\") " pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.995259 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0b113de-4a90-4929-8365-53c93c5aaee7-config\") pod \"dnsmasq-dns-855cbc58c5-57z69\" (UID: \"b0b113de-4a90-4929-8365-53c93c5aaee7\") " pod="openstack/dnsmasq-dns-855cbc58c5-57z69" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.995280 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9wjz\" (UniqueName: \"kubernetes.io/projected/b0b113de-4a90-4929-8365-53c93c5aaee7-kube-api-access-c9wjz\") pod \"dnsmasq-dns-855cbc58c5-57z69\" (UID: \"b0b113de-4a90-4929-8365-53c93c5aaee7\") " pod="openstack/dnsmasq-dns-855cbc58c5-57z69" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.995316 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-config\") pod \"dnsmasq-dns-6fcf94d689-h8tkx\" (UID: \"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7\") " pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 
18:47:32.995345 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6l62\" (UniqueName: \"kubernetes.io/projected/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-kube-api-access-n6l62\") pod \"dnsmasq-dns-6fcf94d689-h8tkx\" (UID: \"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7\") " pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.996405 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-dns-svc\") pod \"dnsmasq-dns-6fcf94d689-h8tkx\" (UID: \"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7\") " pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.996510 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-config\") pod \"dnsmasq-dns-6fcf94d689-h8tkx\" (UID: \"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7\") " pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx" Feb 19 18:47:32 crc kubenswrapper[4813]: I0219 18:47:32.996804 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0b113de-4a90-4929-8365-53c93c5aaee7-config\") pod \"dnsmasq-dns-855cbc58c5-57z69\" (UID: \"b0b113de-4a90-4929-8365-53c93c5aaee7\") " pod="openstack/dnsmasq-dns-855cbc58c5-57z69" Feb 19 18:47:33 crc kubenswrapper[4813]: I0219 18:47:33.017905 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6l62\" (UniqueName: \"kubernetes.io/projected/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-kube-api-access-n6l62\") pod \"dnsmasq-dns-6fcf94d689-h8tkx\" (UID: \"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7\") " pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx" Feb 19 18:47:33 crc kubenswrapper[4813]: I0219 18:47:33.022691 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c9wjz\" (UniqueName: \"kubernetes.io/projected/b0b113de-4a90-4929-8365-53c93c5aaee7-kube-api-access-c9wjz\") pod \"dnsmasq-dns-855cbc58c5-57z69\" (UID: \"b0b113de-4a90-4929-8365-53c93c5aaee7\") " pod="openstack/dnsmasq-dns-855cbc58c5-57z69" Feb 19 18:47:33 crc kubenswrapper[4813]: I0219 18:47:33.132641 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-57z69" Feb 19 18:47:33 crc kubenswrapper[4813]: I0219 18:47:33.203149 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx" Feb 19 18:47:33 crc kubenswrapper[4813]: I0219 18:47:33.484242 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-h8tkx"] Feb 19 18:47:33 crc kubenswrapper[4813]: I0219 18:47:33.639697 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-57z69"] Feb 19 18:47:33 crc kubenswrapper[4813]: I0219 18:47:33.800512 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-h8tkx"] Feb 19 18:47:33 crc kubenswrapper[4813]: I0219 18:47:33.819430 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-9d5c2"] Feb 19 18:47:33 crc kubenswrapper[4813]: I0219 18:47:33.820700 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" Feb 19 18:47:33 crc kubenswrapper[4813]: I0219 18:47:33.836312 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-9d5c2"] Feb 19 18:47:33 crc kubenswrapper[4813]: I0219 18:47:33.907459 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2db349ca-5a17-4b1e-b8fa-da6112ee5843-config\") pod \"dnsmasq-dns-f54874ffc-9d5c2\" (UID: \"2db349ca-5a17-4b1e-b8fa-da6112ee5843\") " pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" Feb 19 18:47:33 crc kubenswrapper[4813]: I0219 18:47:33.907507 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57kh5\" (UniqueName: \"kubernetes.io/projected/2db349ca-5a17-4b1e-b8fa-da6112ee5843-kube-api-access-57kh5\") pod \"dnsmasq-dns-f54874ffc-9d5c2\" (UID: \"2db349ca-5a17-4b1e-b8fa-da6112ee5843\") " pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" Feb 19 18:47:33 crc kubenswrapper[4813]: I0219 18:47:33.907585 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2db349ca-5a17-4b1e-b8fa-da6112ee5843-dns-svc\") pod \"dnsmasq-dns-f54874ffc-9d5c2\" (UID: \"2db349ca-5a17-4b1e-b8fa-da6112ee5843\") " pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.008937 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2db349ca-5a17-4b1e-b8fa-da6112ee5843-config\") pod \"dnsmasq-dns-f54874ffc-9d5c2\" (UID: \"2db349ca-5a17-4b1e-b8fa-da6112ee5843\") " pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.009005 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57kh5\" (UniqueName: 
\"kubernetes.io/projected/2db349ca-5a17-4b1e-b8fa-da6112ee5843-kube-api-access-57kh5\") pod \"dnsmasq-dns-f54874ffc-9d5c2\" (UID: \"2db349ca-5a17-4b1e-b8fa-da6112ee5843\") " pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.009058 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2db349ca-5a17-4b1e-b8fa-da6112ee5843-dns-svc\") pod \"dnsmasq-dns-f54874ffc-9d5c2\" (UID: \"2db349ca-5a17-4b1e-b8fa-da6112ee5843\") " pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.010154 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2db349ca-5a17-4b1e-b8fa-da6112ee5843-dns-svc\") pod \"dnsmasq-dns-f54874ffc-9d5c2\" (UID: \"2db349ca-5a17-4b1e-b8fa-da6112ee5843\") " pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.010184 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2db349ca-5a17-4b1e-b8fa-da6112ee5843-config\") pod \"dnsmasq-dns-f54874ffc-9d5c2\" (UID: \"2db349ca-5a17-4b1e-b8fa-da6112ee5843\") " pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.026140 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57kh5\" (UniqueName: \"kubernetes.io/projected/2db349ca-5a17-4b1e-b8fa-da6112ee5843-kube-api-access-57kh5\") pod \"dnsmasq-dns-f54874ffc-9d5c2\" (UID: \"2db349ca-5a17-4b1e-b8fa-da6112ee5843\") " pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.140227 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.237138 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx" event={"ID":"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7","Type":"ContainerStarted","Data":"02e694555ec228db71f42422251bf8834d953c12dbe90f8622f44b7e1d074adf"} Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.239540 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cbc58c5-57z69" event={"ID":"b0b113de-4a90-4929-8365-53c93c5aaee7","Type":"ContainerStarted","Data":"2d3879518cb609fed68e79c21411ed64d823abed697ddf108ea144622f247793"} Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.468322 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-57z69"] Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.489051 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-c88qq"] Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.490529 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-c88qq" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.515383 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-c88qq"] Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.615470 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-9d5c2"] Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.620823 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-config\") pod \"dnsmasq-dns-67ff45466c-c88qq\" (UID: \"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5\") " pod="openstack/dnsmasq-dns-67ff45466c-c88qq" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.620871 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nprcb\" (UniqueName: \"kubernetes.io/projected/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-kube-api-access-nprcb\") pod \"dnsmasq-dns-67ff45466c-c88qq\" (UID: \"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5\") " pod="openstack/dnsmasq-dns-67ff45466c-c88qq" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.620890 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-dns-svc\") pod \"dnsmasq-dns-67ff45466c-c88qq\" (UID: \"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5\") " pod="openstack/dnsmasq-dns-67ff45466c-c88qq" Feb 19 18:47:34 crc kubenswrapper[4813]: W0219 18:47:34.638797 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2db349ca_5a17_4b1e_b8fa_da6112ee5843.slice/crio-b1a1d6b2cf89b5b2e09c714bdfb13d96548e39ffe37e59aa62904d97b2443a15 WatchSource:0}: Error finding container 
b1a1d6b2cf89b5b2e09c714bdfb13d96548e39ffe37e59aa62904d97b2443a15: Status 404 returned error can't find the container with id b1a1d6b2cf89b5b2e09c714bdfb13d96548e39ffe37e59aa62904d97b2443a15 Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.722590 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-config\") pod \"dnsmasq-dns-67ff45466c-c88qq\" (UID: \"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5\") " pod="openstack/dnsmasq-dns-67ff45466c-c88qq" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.722637 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nprcb\" (UniqueName: \"kubernetes.io/projected/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-kube-api-access-nprcb\") pod \"dnsmasq-dns-67ff45466c-c88qq\" (UID: \"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5\") " pod="openstack/dnsmasq-dns-67ff45466c-c88qq" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.722654 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-dns-svc\") pod \"dnsmasq-dns-67ff45466c-c88qq\" (UID: \"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5\") " pod="openstack/dnsmasq-dns-67ff45466c-c88qq" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.723832 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-dns-svc\") pod \"dnsmasq-dns-67ff45466c-c88qq\" (UID: \"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5\") " pod="openstack/dnsmasq-dns-67ff45466c-c88qq" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.724002 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-config\") pod \"dnsmasq-dns-67ff45466c-c88qq\" (UID: 
\"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5\") " pod="openstack/dnsmasq-dns-67ff45466c-c88qq" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.740896 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nprcb\" (UniqueName: \"kubernetes.io/projected/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-kube-api-access-nprcb\") pod \"dnsmasq-dns-67ff45466c-c88qq\" (UID: \"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5\") " pod="openstack/dnsmasq-dns-67ff45466c-c88qq" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.812735 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-c88qq" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.961206 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.962604 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.966267 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.966378 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.966627 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.966761 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.967257 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.968321 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 
18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.977096 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 18:47:34 crc kubenswrapper[4813]: I0219 18:47:34.980369 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-m6xcl" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.082411 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-c88qq"] Feb 19 18:47:35 crc kubenswrapper[4813]: W0219 18:47:35.099770 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf1b69cc_9b96_474a_bc5d_a2e00a99d8b5.slice/crio-974730f15e9ad142f1deffb1bbfc83dd0f2224d8164e93803b64e73fc5746b50 WatchSource:0}: Error finding container 974730f15e9ad142f1deffb1bbfc83dd0f2224d8164e93803b64e73fc5746b50: Status 404 returned error can't find the container with id 974730f15e9ad142f1deffb1bbfc83dd0f2224d8164e93803b64e73fc5746b50 Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.133427 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.133476 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.133510 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.133528 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.133558 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.133577 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slk9l\" (UniqueName: \"kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-kube-api-access-slk9l\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.133601 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.133631 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.133647 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c69ff3db-8806-451a-9df0-c6289c327579-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.133667 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.133681 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c69ff3db-8806-451a-9df0-c6289c327579-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.234750 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.234819 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-plugins-conf\") pod \"rabbitmq-server-0\" 
(UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.234844 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c69ff3db-8806-451a-9df0-c6289c327579-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.234872 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.234894 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c69ff3db-8806-451a-9df0-c6289c327579-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.234930 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.234980 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0" Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.235415 4813 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.235439 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.235754 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.235794 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.235836 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.235863 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slk9l\" (UniqueName: \"kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-kube-api-access-slk9l\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.236139 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.236311 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.239917 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.240252 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.240664 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c69ff3db-8806-451a-9df0-c6289c327579-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.250104 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-c88qq" event={"ID":"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5","Type":"ContainerStarted","Data":"974730f15e9ad142f1deffb1bbfc83dd0f2224d8164e93803b64e73fc5746b50"}
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.252580 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slk9l\" (UniqueName: \"kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-kube-api-access-slk9l\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.254086 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" event={"ID":"2db349ca-5a17-4b1e-b8fa-da6112ee5843","Type":"ContainerStarted","Data":"b1a1d6b2cf89b5b2e09c714bdfb13d96548e39ffe37e59aa62904d97b2443a15"}
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.266309 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.270387 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c69ff3db-8806-451a-9df0-c6289c327579-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.280640 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.282079 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") " pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.297636 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.597995 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.600550 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.603897 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.606501 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.606743 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tslbb"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.607177 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.607525 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.607789 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.607912 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.614685 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.741327 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.741371 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db22a584-f05a-41ba-ad23-387b4100a9e1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.741400 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db22a584-f05a-41ba-ad23-387b4100a9e1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.741439 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.741469 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.741488 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.741665 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fmkv\" (UniqueName: \"kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-kube-api-access-9fmkv\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.741981 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.742109 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.742185 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.742221 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.742437 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 18:47:35 crc kubenswrapper[4813]: W0219 18:47:35.750666 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69ff3db_8806_451a_9df0_c6289c327579.slice/crio-a9a04dd3dcc3db19e8091009cb9d7bd2558fee7cdea0765483552bb6b90cd4e9 WatchSource:0}: Error finding container a9a04dd3dcc3db19e8091009cb9d7bd2558fee7cdea0765483552bb6b90cd4e9: Status 404 returned error can't find the container with id a9a04dd3dcc3db19e8091009cb9d7bd2558fee7cdea0765483552bb6b90cd4e9
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.843366 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.843415 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.843439 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.843468 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.843486 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db22a584-f05a-41ba-ad23-387b4100a9e1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.843505 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db22a584-f05a-41ba-ad23-387b4100a9e1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.843536 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.843558 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.843577 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.843599 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fmkv\" (UniqueName: \"kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-kube-api-access-9fmkv\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.843615 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.844678 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.844905 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.847393 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.847611 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.848660 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.849062 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db22a584-f05a-41ba-ad23-387b4100a9e1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.851597 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db22a584-f05a-41ba-ad23-387b4100a9e1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.851939 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.854014 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.868559 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fmkv\" (UniqueName: \"kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-kube-api-access-9fmkv\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.869230 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.875724 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:35 crc kubenswrapper[4813]: I0219 18:47:35.925255 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:47:36 crc kubenswrapper[4813]: I0219 18:47:36.263154 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c69ff3db-8806-451a-9df0-c6289c327579","Type":"ContainerStarted","Data":"a9a04dd3dcc3db19e8091009cb9d7bd2558fee7cdea0765483552bb6b90cd4e9"}
Feb 19 18:47:36 crc kubenswrapper[4813]: W0219 18:47:36.458068 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb22a584_f05a_41ba_ad23_387b4100a9e1.slice/crio-1ab17441071d8491a23819db4c3d6733cf81fffae60d675644fa1b3c73f438e0 WatchSource:0}: Error finding container 1ab17441071d8491a23819db4c3d6733cf81fffae60d675644fa1b3c73f438e0: Status 404 returned error can't find the container with id 1ab17441071d8491a23819db4c3d6733cf81fffae60d675644fa1b3c73f438e0
Feb 19 18:47:36 crc kubenswrapper[4813]: I0219 18:47:36.464476 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.147819 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.149683 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.152218 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.152266 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.152827 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.152933 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6zpfq"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.160210 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.163709 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.263671 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-config-data-default\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.263745 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.263822 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8101a7-841e-4fa7-b98a-030b82e66c94-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.263855 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-kolla-config\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.263891 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.263911 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f8101a7-841e-4fa7-b98a-030b82e66c94-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.263967 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4l5k\" (UniqueName: \"kubernetes.io/projected/2f8101a7-841e-4fa7-b98a-030b82e66c94-kube-api-access-g4l5k\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.264006 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2f8101a7-841e-4fa7-b98a-030b82e66c94-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.293482 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db22a584-f05a-41ba-ad23-387b4100a9e1","Type":"ContainerStarted","Data":"1ab17441071d8491a23819db4c3d6733cf81fffae60d675644fa1b3c73f438e0"}
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.365883 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8101a7-841e-4fa7-b98a-030b82e66c94-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.365923 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-kolla-config\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.365975 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.365992 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f8101a7-841e-4fa7-b98a-030b82e66c94-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.366021 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4l5k\" (UniqueName: \"kubernetes.io/projected/2f8101a7-841e-4fa7-b98a-030b82e66c94-kube-api-access-g4l5k\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.368391 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2f8101a7-841e-4fa7-b98a-030b82e66c94-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.368419 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-config-data-default\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.368450 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.368728 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.369296 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-config-data-default\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.369606 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2f8101a7-841e-4fa7-b98a-030b82e66c94-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.369612 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.370096 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-kolla-config\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.375605 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f8101a7-841e-4fa7-b98a-030b82e66c94-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.392054 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8101a7-841e-4fa7-b98a-030b82e66c94-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.397576 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4l5k\" (UniqueName: \"kubernetes.io/projected/2f8101a7-841e-4fa7-b98a-030b82e66c94-kube-api-access-g4l5k\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.405092 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " pod="openstack/openstack-galera-0"
Feb 19 18:47:37 crc kubenswrapper[4813]: I0219 18:47:37.475806 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.041003 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 19 18:47:38 crc kubenswrapper[4813]: W0219 18:47:38.068563 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f8101a7_841e_4fa7_b98a_030b82e66c94.slice/crio-8ac56b41c2104ffffc323b297d5c7075f4d2d757f588d13ebe56da37cac0331a WatchSource:0}: Error finding container 8ac56b41c2104ffffc323b297d5c7075f4d2d757f588d13ebe56da37cac0331a: Status 404 returned error can't find the container with id 8ac56b41c2104ffffc323b297d5c7075f4d2d757f588d13ebe56da37cac0331a
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.300463 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2f8101a7-841e-4fa7-b98a-030b82e66c94","Type":"ContainerStarted","Data":"8ac56b41c2104ffffc323b297d5c7075f4d2d757f588d13ebe56da37cac0331a"}
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.482698 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.483808 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.486376 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-ljklq"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.486554 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.486653 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.486912 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.492575 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.587700 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.587781 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a7385c55-b36b-486d-add0-958b8cece7de-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.587826 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7385c55-b36b-486d-add0-958b8cece7de-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.587849 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.587864 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6b86\" (UniqueName: \"kubernetes.io/projected/a7385c55-b36b-486d-add0-958b8cece7de-kube-api-access-w6b86\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.587886 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.587902 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7385c55-b36b-486d-add0-958b8cece7de-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.587932 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.689313 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.689374 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.689420 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a7385c55-b36b-486d-add0-958b8cece7de-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.689461 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7385c55-b36b-486d-add0-958b8cece7de-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0"
Feb 19 18:47:38 crc
kubenswrapper[4813]: I0219 18:47:38.689482 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.689500 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6b86\" (UniqueName: \"kubernetes.io/projected/a7385c55-b36b-486d-add0-958b8cece7de-kube-api-access-w6b86\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.689523 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.689543 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7385c55-b36b-486d-add0-958b8cece7de-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.689798 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a7385c55-b36b-486d-add0-958b8cece7de-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.689922 4813 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.690589 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.690991 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.692425 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.701765 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7385c55-b36b-486d-add0-958b8cece7de-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.703526 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a7385c55-b36b-486d-add0-958b8cece7de-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.707613 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6b86\" (UniqueName: \"kubernetes.io/projected/a7385c55-b36b-486d-add0-958b8cece7de-kube-api-access-w6b86\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.734093 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " pod="openstack/openstack-cell1-galera-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.843819 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.845292 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.846253 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.854491 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.855168 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-swcn8" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.855396 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.855580 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.892013 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " pod="openstack/memcached-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.892068 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-kolla-config\") pod \"memcached-0\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " pod="openstack/memcached-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.892124 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-config-data\") pod \"memcached-0\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " pod="openstack/memcached-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.892146 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " pod="openstack/memcached-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.892173 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-986ln\" (UniqueName: \"kubernetes.io/projected/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-kube-api-access-986ln\") pod \"memcached-0\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " pod="openstack/memcached-0" Feb 19 18:47:38 crc kubenswrapper[4813]: I0219 18:47:38.993747 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " pod="openstack/memcached-0" Feb 19 18:47:39 crc kubenswrapper[4813]: I0219 18:47:38.997856 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-kolla-config\") pod \"memcached-0\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " pod="openstack/memcached-0" Feb 19 18:47:39 crc kubenswrapper[4813]: I0219 18:47:38.997930 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-config-data\") pod \"memcached-0\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " pod="openstack/memcached-0" Feb 19 18:47:39 crc kubenswrapper[4813]: I0219 18:47:38.998187 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " 
pod="openstack/memcached-0" Feb 19 18:47:39 crc kubenswrapper[4813]: I0219 18:47:38.998211 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-986ln\" (UniqueName: \"kubernetes.io/projected/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-kube-api-access-986ln\") pod \"memcached-0\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " pod="openstack/memcached-0" Feb 19 18:47:39 crc kubenswrapper[4813]: I0219 18:47:39.001858 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-kolla-config\") pod \"memcached-0\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " pod="openstack/memcached-0" Feb 19 18:47:39 crc kubenswrapper[4813]: I0219 18:47:39.002657 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-config-data\") pod \"memcached-0\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " pod="openstack/memcached-0" Feb 19 18:47:39 crc kubenswrapper[4813]: I0219 18:47:39.023025 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " pod="openstack/memcached-0" Feb 19 18:47:39 crc kubenswrapper[4813]: I0219 18:47:39.040895 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " pod="openstack/memcached-0" Feb 19 18:47:39 crc kubenswrapper[4813]: I0219 18:47:39.068072 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-986ln\" (UniqueName: 
\"kubernetes.io/projected/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-kube-api-access-986ln\") pod \"memcached-0\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " pod="openstack/memcached-0" Feb 19 18:47:39 crc kubenswrapper[4813]: I0219 18:47:39.172556 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 18:47:40 crc kubenswrapper[4813]: I0219 18:47:40.868474 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 18:47:40 crc kubenswrapper[4813]: I0219 18:47:40.869384 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 18:47:40 crc kubenswrapper[4813]: I0219 18:47:40.872294 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-9rdnk" Feb 19 18:47:40 crc kubenswrapper[4813]: I0219 18:47:40.890523 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 18:47:40 crc kubenswrapper[4813]: I0219 18:47:40.934264 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m8gq\" (UniqueName: \"kubernetes.io/projected/42255dff-2745-4bb9-a1fa-9c6f39327204-kube-api-access-9m8gq\") pod \"kube-state-metrics-0\" (UID: \"42255dff-2745-4bb9-a1fa-9c6f39327204\") " pod="openstack/kube-state-metrics-0" Feb 19 18:47:41 crc kubenswrapper[4813]: I0219 18:47:41.036144 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m8gq\" (UniqueName: \"kubernetes.io/projected/42255dff-2745-4bb9-a1fa-9c6f39327204-kube-api-access-9m8gq\") pod \"kube-state-metrics-0\" (UID: \"42255dff-2745-4bb9-a1fa-9c6f39327204\") " pod="openstack/kube-state-metrics-0" Feb 19 18:47:41 crc kubenswrapper[4813]: I0219 18:47:41.056274 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m8gq\" (UniqueName: 
\"kubernetes.io/projected/42255dff-2745-4bb9-a1fa-9c6f39327204-kube-api-access-9m8gq\") pod \"kube-state-metrics-0\" (UID: \"42255dff-2745-4bb9-a1fa-9c6f39327204\") " pod="openstack/kube-state-metrics-0" Feb 19 18:47:41 crc kubenswrapper[4813]: I0219 18:47:41.191465 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.615023 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vr4rs"] Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.616273 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.620850 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-w7s6z"] Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.623364 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.625841 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.626016 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-lqknw" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.626187 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.631385 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vr4rs"] Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.636736 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-w7s6z"] Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.706122 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-log\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.706161 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-run-ovn\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.706182 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abaee778-ea35-4887-90c8-2834d3eef00d-scripts\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.706202 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4zc7\" (UniqueName: \"kubernetes.io/projected/096a88db-91ee-4ac3-b5ae-ba4bca838436-kube-api-access-p4zc7\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.706298 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096a88db-91ee-4ac3-b5ae-ba4bca838436-scripts\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.706345 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaee778-ea35-4887-90c8-2834d3eef00d-ovn-controller-tls-certs\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.706369 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-etc-ovs\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.706388 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-run\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.706453 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-run\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.706524 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abaee778-ea35-4887-90c8-2834d3eef00d-combined-ca-bundle\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.706603 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-log-ovn\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.706736 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pgpj\" (UniqueName: \"kubernetes.io/projected/abaee778-ea35-4887-90c8-2834d3eef00d-kube-api-access-8pgpj\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.706794 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-lib\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.809377 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-log-ovn\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.809691 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pgpj\" (UniqueName: \"kubernetes.io/projected/abaee778-ea35-4887-90c8-2834d3eef00d-kube-api-access-8pgpj\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.809729 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-lib\") pod 
\"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.809790 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-log\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.809809 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-run-ovn\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.809826 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abaee778-ea35-4887-90c8-2834d3eef00d-scripts\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.809846 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4zc7\" (UniqueName: \"kubernetes.io/projected/096a88db-91ee-4ac3-b5ae-ba4bca838436-kube-api-access-p4zc7\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.809865 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096a88db-91ee-4ac3-b5ae-ba4bca838436-scripts\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc 
kubenswrapper[4813]: I0219 18:47:44.809863 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-log-ovn\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.809887 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaee778-ea35-4887-90c8-2834d3eef00d-ovn-controller-tls-certs\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.811062 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-etc-ovs\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.811129 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-run\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.810191 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-lib\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.810158 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-run-ovn\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.810284 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-log\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.811364 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-etc-ovs\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.811821 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-run\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.812034 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-run\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.812065 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abaee778-ea35-4887-90c8-2834d3eef00d-combined-ca-bundle\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " 
pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.812145 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-run\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.812510 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096a88db-91ee-4ac3-b5ae-ba4bca838436-scripts\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.813245 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abaee778-ea35-4887-90c8-2834d3eef00d-scripts\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.819797 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abaee778-ea35-4887-90c8-2834d3eef00d-combined-ca-bundle\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.821399 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaee778-ea35-4887-90c8-2834d3eef00d-ovn-controller-tls-certs\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.832262 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8pgpj\" (UniqueName: \"kubernetes.io/projected/abaee778-ea35-4887-90c8-2834d3eef00d-kube-api-access-8pgpj\") pod \"ovn-controller-vr4rs\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.833868 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4zc7\" (UniqueName: \"kubernetes.io/projected/096a88db-91ee-4ac3-b5ae-ba4bca838436-kube-api-access-p4zc7\") pod \"ovn-controller-ovs-w7s6z\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.938536 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vr4rs" Feb 19 18:47:44 crc kubenswrapper[4813]: I0219 18:47:44.960551 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.505451 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.516508 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.523256 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.524086 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.524473 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-2hkzz" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.525919 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.526249 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.536346 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.634440 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3304ea7-bdca-4b4c-b290-07eacbc6a646-config\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.634505 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.634543 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.634565 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3304ea7-bdca-4b4c-b290-07eacbc6a646-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.634592 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.634621 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.634674 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttzcq\" (UniqueName: \"kubernetes.io/projected/b3304ea7-bdca-4b4c-b290-07eacbc6a646-kube-api-access-ttzcq\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.634803 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b3304ea7-bdca-4b4c-b290-07eacbc6a646-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.736124 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3304ea7-bdca-4b4c-b290-07eacbc6a646-config\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.736195 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.736226 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3304ea7-bdca-4b4c-b290-07eacbc6a646-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.736247 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.736272 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 
crc kubenswrapper[4813]: I0219 18:47:45.736300 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.736557 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttzcq\" (UniqueName: \"kubernetes.io/projected/b3304ea7-bdca-4b4c-b290-07eacbc6a646-kube-api-access-ttzcq\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.736594 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3304ea7-bdca-4b4c-b290-07eacbc6a646-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.736800 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.737157 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3304ea7-bdca-4b4c-b290-07eacbc6a646-config\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.737481 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/b3304ea7-bdca-4b4c-b290-07eacbc6a646-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.738461 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3304ea7-bdca-4b4c-b290-07eacbc6a646-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.743104 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.751153 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.752108 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.767236 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttzcq\" (UniqueName: \"kubernetes.io/projected/b3304ea7-bdca-4b4c-b290-07eacbc6a646-kube-api-access-ttzcq\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " 
pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.802180 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:45 crc kubenswrapper[4813]: I0219 18:47:45.858132 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.291913 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.293859 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.296287 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mpd6b" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.297009 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.297152 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.297207 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.309067 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.388875 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.388935 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/25674cd0-03bb-481f-b039-b3b1db4ea1d4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.389003 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.389026 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25674cd0-03bb-481f-b039-b3b1db4ea1d4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.389047 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmnsh\" (UniqueName: \"kubernetes.io/projected/25674cd0-03bb-481f-b039-b3b1db4ea1d4-kube-api-access-xmnsh\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.389075 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " 
pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.389097 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25674cd0-03bb-481f-b039-b3b1db4ea1d4-config\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.389123 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.492304 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.492378 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25674cd0-03bb-481f-b039-b3b1db4ea1d4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.493683 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25674cd0-03bb-481f-b039-b3b1db4ea1d4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.492412 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xmnsh\" (UniqueName: \"kubernetes.io/projected/25674cd0-03bb-481f-b039-b3b1db4ea1d4-kube-api-access-xmnsh\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.493882 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.493998 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25674cd0-03bb-481f-b039-b3b1db4ea1d4-config\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.495184 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25674cd0-03bb-481f-b039-b3b1db4ea1d4-config\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.494041 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.495361 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.495407 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/25674cd0-03bb-481f-b039-b3b1db4ea1d4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.495843 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/25674cd0-03bb-481f-b039-b3b1db4ea1d4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.496125 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.507253 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.507415 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.507289 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.510481 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmnsh\" (UniqueName: \"kubernetes.io/projected/25674cd0-03bb-481f-b039-b3b1db4ea1d4-kube-api-access-xmnsh\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.523247 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:48 crc kubenswrapper[4813]: I0219 18:47:48.633103 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 18:47:52 crc kubenswrapper[4813]: E0219 18:47:52.620506 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" Feb 19 18:47:52 crc kubenswrapper[4813]: E0219 18:47:52.621337 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-slk9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(c69ff3db-8806-451a-9df0-c6289c327579): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 18:47:52 crc 
kubenswrapper[4813]: E0219 18:47:52.624211 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="c69ff3db-8806-451a-9df0-c6289c327579" Feb 19 18:47:52 crc kubenswrapper[4813]: E0219 18:47:52.627310 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" Feb 19 18:47:52 crc kubenswrapper[4813]: E0219 18:47:52.627644 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fmkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(db22a584-f05a-41ba-ad23-387b4100a9e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 18:47:52 crc 
kubenswrapper[4813]: E0219 18:47:52.628774 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="db22a584-f05a-41ba-ad23-387b4100a9e1"
Feb 19 18:47:53 crc kubenswrapper[4813]: E0219 18:47:53.419642 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="db22a584-f05a-41ba-ad23-387b4100a9e1"
Feb 19 18:47:53 crc kubenswrapper[4813]: E0219 18:47:53.421360 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76\\\"\"" pod="openstack/rabbitmq-server-0" podUID="c69ff3db-8806-451a-9df0-c6289c327579"
Feb 19 18:47:53 crc kubenswrapper[4813]: E0219 18:47:53.437191 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2"
Feb 19 18:47:53 crc kubenswrapper[4813]: E0219 18:47:53.437349 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9wjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-855cbc58c5-57z69_openstack(b0b113de-4a90-4929-8365-53c93c5aaee7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 18:47:53 crc kubenswrapper[4813]: E0219 18:47:53.438566 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-855cbc58c5-57z69" podUID="b0b113de-4a90-4929-8365-53c93c5aaee7"
Feb 19 18:47:53 crc kubenswrapper[4813]: E0219 18:47:53.497494 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2"
Feb 19 18:47:53 crc kubenswrapper[4813]: E0219 18:47:53.497724 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6l62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6fcf94d689-h8tkx_openstack(e1dd69ab-c6bb-4474-acb7-e72f469dd6c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 18:47:53 crc kubenswrapper[4813]: E0219 18:47:53.499063 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx" podUID="e1dd69ab-c6bb-4474-acb7-e72f469dd6c7"
Feb 19 18:47:55 crc kubenswrapper[4813]: E0219 18:47:55.430190 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2"
Feb 19 18:47:55 crc kubenswrapper[4813]: E0219 18:47:55.431400 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nprcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-67ff45466c-c88qq_openstack(df1b69cc-9b96-474a-bc5d-a2e00a99d8b5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 18:47:55 crc kubenswrapper[4813]: E0219 18:47:55.432860 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-67ff45466c-c88qq" podUID="df1b69cc-9b96-474a-bc5d-a2e00a99d8b5"
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.441595 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-57z69"
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.446397 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx"
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.446613 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx" event={"ID":"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7","Type":"ContainerDied","Data":"02e694555ec228db71f42422251bf8834d953c12dbe90f8622f44b7e1d074adf"}
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.450447 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cbc58c5-57z69" event={"ID":"b0b113de-4a90-4929-8365-53c93c5aaee7","Type":"ContainerDied","Data":"2d3879518cb609fed68e79c21411ed64d823abed697ddf108ea144622f247793"}
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.450618 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cbc58c5-57z69"
Feb 19 18:47:55 crc kubenswrapper[4813]: E0219 18:47:55.481008 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2"
Feb 19 18:47:55 crc kubenswrapper[4813]: E0219 18:47:55.481197 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57kh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-f54874ffc-9d5c2_openstack(2db349ca-5a17-4b1e-b8fa-da6112ee5843): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 18:47:55 crc kubenswrapper[4813]: E0219 18:47:55.482677 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" podUID="2db349ca-5a17-4b1e-b8fa-da6112ee5843"
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.512906 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0b113de-4a90-4929-8365-53c93c5aaee7-config\") pod \"b0b113de-4a90-4929-8365-53c93c5aaee7\" (UID: \"b0b113de-4a90-4929-8365-53c93c5aaee7\") "
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.512990 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-config\") pod \"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7\" (UID: \"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7\") "
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.513173 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6l62\" (UniqueName: \"kubernetes.io/projected/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-kube-api-access-n6l62\") pod \"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7\" (UID: \"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7\") "
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.513240 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9wjz\" (UniqueName: \"kubernetes.io/projected/b0b113de-4a90-4929-8365-53c93c5aaee7-kube-api-access-c9wjz\") pod \"b0b113de-4a90-4929-8365-53c93c5aaee7\" (UID: \"b0b113de-4a90-4929-8365-53c93c5aaee7\") "
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.513269 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-dns-svc\") pod \"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7\" (UID: \"e1dd69ab-c6bb-4474-acb7-e72f469dd6c7\") "
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.513604 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0b113de-4a90-4929-8365-53c93c5aaee7-config" (OuterVolumeSpecName: "config") pod "b0b113de-4a90-4929-8365-53c93c5aaee7" (UID: "b0b113de-4a90-4929-8365-53c93c5aaee7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.513628 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-config" (OuterVolumeSpecName: "config") pod "e1dd69ab-c6bb-4474-acb7-e72f469dd6c7" (UID: "e1dd69ab-c6bb-4474-acb7-e72f469dd6c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.514060 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1dd69ab-c6bb-4474-acb7-e72f469dd6c7" (UID: "e1dd69ab-c6bb-4474-acb7-e72f469dd6c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.517050 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-kube-api-access-n6l62" (OuterVolumeSpecName: "kube-api-access-n6l62") pod "e1dd69ab-c6bb-4474-acb7-e72f469dd6c7" (UID: "e1dd69ab-c6bb-4474-acb7-e72f469dd6c7"). InnerVolumeSpecName "kube-api-access-n6l62". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.519615 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b113de-4a90-4929-8365-53c93c5aaee7-kube-api-access-c9wjz" (OuterVolumeSpecName: "kube-api-access-c9wjz") pod "b0b113de-4a90-4929-8365-53c93c5aaee7" (UID: "b0b113de-4a90-4929-8365-53c93c5aaee7"). InnerVolumeSpecName "kube-api-access-c9wjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.615340 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0b113de-4a90-4929-8365-53c93c5aaee7-config\") on node \"crc\" DevicePath \"\""
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.615376 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-config\") on node \"crc\" DevicePath \"\""
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.615390 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6l62\" (UniqueName: \"kubernetes.io/projected/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-kube-api-access-n6l62\") on node \"crc\" DevicePath \"\""
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.615401 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9wjz\" (UniqueName: \"kubernetes.io/projected/b0b113de-4a90-4929-8365-53c93c5aaee7-kube-api-access-c9wjz\") on node \"crc\" DevicePath \"\""
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.615414 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.793773 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-57z69"]
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.804846 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-855cbc58c5-57z69"]
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.890972 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 19 18:47:55 crc kubenswrapper[4813]: W0219 18:47:55.904200 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7385c55_b36b_486d_add0_958b8cece7de.slice/crio-cda3228c661fa9cd58b1d7183c55a935f35b40a8be1e3bf115d78858f34014ab WatchSource:0}: Error finding container cda3228c661fa9cd58b1d7183c55a935f35b40a8be1e3bf115d78858f34014ab: Status 404 returned error can't find the container with id cda3228c661fa9cd58b1d7183c55a935f35b40a8be1e3bf115d78858f34014ab
Feb 19 18:47:55 crc kubenswrapper[4813]: I0219 18:47:55.943942 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.030407 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.066032 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vr4rs"]
Feb 19 18:47:56 crc kubenswrapper[4813]: W0219 18:47:56.066480 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabaee778_ea35_4887_90c8_2834d3eef00d.slice/crio-6d4b04f2970a6182aea06067a80275c0f0b2525f2c70ee58181f3c99e9596a40 WatchSource:0}: Error finding container 6d4b04f2970a6182aea06067a80275c0f0b2525f2c70ee58181f3c99e9596a40: Status 404 returned error can't find the container with id 6d4b04f2970a6182aea06067a80275c0f0b2525f2c70ee58181f3c99e9596a40
Feb 19 18:47:56 crc kubenswrapper[4813]: W0219 18:47:56.067934 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d4e6cd2_75bd_43bd_9f0e_fd35002ab607.slice/crio-2be29964735caaad96a2560105738c00636594a7e4862836c9069c549854bb92 WatchSource:0}: Error finding container 2be29964735caaad96a2560105738c00636594a7e4862836c9069c549854bb92: Status 404 returned error can't find the container with id 2be29964735caaad96a2560105738c00636594a7e4862836c9069c549854bb92
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.074178 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.187436 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-w7s6z"]
Feb 19 18:47:56 crc kubenswrapper[4813]: W0219 18:47:56.189219 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod096a88db_91ee_4ac3_b5ae_ba4bca838436.slice/crio-aa5128dd31514943a1a0eadd5beb7c9c3b15220b65329c8a7044df2499706f31 WatchSource:0}: Error finding container aa5128dd31514943a1a0eadd5beb7c9c3b15220b65329c8a7044df2499706f31: Status 404 returned error can't find the container with id aa5128dd31514943a1a0eadd5beb7c9c3b15220b65329c8a7044df2499706f31
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.459037 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a7385c55-b36b-486d-add0-958b8cece7de","Type":"ContainerStarted","Data":"35531a89b45b21d436f51039dee2872292441946fc76cbc19142c9b37efc56d4"}
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.459084 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a7385c55-b36b-486d-add0-958b8cece7de","Type":"ContainerStarted","Data":"cda3228c661fa9cd58b1d7183c55a935f35b40a8be1e3bf115d78858f34014ab"}
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.461085 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2f8101a7-841e-4fa7-b98a-030b82e66c94","Type":"ContainerStarted","Data":"0a7c6d6177ee6ab11116c691850a745c7aca97ccf8e6afee4e58b11e0a94661d"}
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.462696 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vr4rs" event={"ID":"abaee778-ea35-4887-90c8-2834d3eef00d","Type":"ContainerStarted","Data":"6d4b04f2970a6182aea06067a80275c0f0b2525f2c70ee58181f3c99e9596a40"}
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.464083 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"25674cd0-03bb-481f-b039-b3b1db4ea1d4","Type":"ContainerStarted","Data":"6a39da59ed0e23a81845636fb58a1ca21c1684421cacbd9b7f1c2dd2547c7f37"}
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.465946 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607","Type":"ContainerStarted","Data":"2be29964735caaad96a2560105738c00636594a7e4862836c9069c549854bb92"}
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.467313 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42255dff-2745-4bb9-a1fa-9c6f39327204","Type":"ContainerStarted","Data":"00d30e9d61135a1f95e640b8f6f5f43de02fa70ca088ef4846071dd7e2087afa"}
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.468692 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w7s6z" event={"ID":"096a88db-91ee-4ac3-b5ae-ba4bca838436","Type":"ContainerStarted","Data":"aa5128dd31514943a1a0eadd5beb7c9c3b15220b65329c8a7044df2499706f31"}
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.468698 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf94d689-h8tkx"
Feb 19 18:47:56 crc kubenswrapper[4813]: E0219 18:47:56.486019 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2\\\"\"" pod="openstack/dnsmasq-dns-67ff45466c-c88qq" podUID="df1b69cc-9b96-474a-bc5d-a2e00a99d8b5"
Feb 19 18:47:56 crc kubenswrapper[4813]: E0219 18:47:56.488797 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2\\\"\"" pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" podUID="2db349ca-5a17-4b1e-b8fa-da6112ee5843"
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.636004 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-h8tkx"]
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.647719 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fcf94d689-h8tkx"]
Feb 19 18:47:56 crc kubenswrapper[4813]: I0219 18:47:56.682746 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 19 18:47:57 crc kubenswrapper[4813]: I0219 18:47:57.478736 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0b113de-4a90-4929-8365-53c93c5aaee7" path="/var/lib/kubelet/pods/b0b113de-4a90-4929-8365-53c93c5aaee7/volumes"
Feb 19 18:47:57 crc kubenswrapper[4813]: I0219 18:47:57.479105 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1dd69ab-c6bb-4474-acb7-e72f469dd6c7" path="/var/lib/kubelet/pods/e1dd69ab-c6bb-4474-acb7-e72f469dd6c7/volumes"
Feb 19 18:47:57 crc kubenswrapper[4813]: I0219 18:47:57.479403 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b3304ea7-bdca-4b4c-b290-07eacbc6a646","Type":"ContainerStarted","Data":"dfa2737b381569a8492c0f0e2b897493498f4eb64e965ab3b0460263bc7ae632"}
Feb 19 18:47:59 crc kubenswrapper[4813]: I0219 18:47:59.499730 4813 generic.go:334] "Generic (PLEG): container finished" podID="2f8101a7-841e-4fa7-b98a-030b82e66c94" containerID="0a7c6d6177ee6ab11116c691850a745c7aca97ccf8e6afee4e58b11e0a94661d" exitCode=0
Feb 19 18:47:59 crc kubenswrapper[4813]: I0219 18:47:59.500017 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2f8101a7-841e-4fa7-b98a-030b82e66c94","Type":"ContainerDied","Data":"0a7c6d6177ee6ab11116c691850a745c7aca97ccf8e6afee4e58b11e0a94661d"}
Feb 19 18:48:00 crc kubenswrapper[4813]: I0219 18:48:00.508761 4813 generic.go:334] "Generic (PLEG): container finished" podID="a7385c55-b36b-486d-add0-958b8cece7de" containerID="35531a89b45b21d436f51039dee2872292441946fc76cbc19142c9b37efc56d4" exitCode=0
Feb 19 18:48:00 crc kubenswrapper[4813]: I0219 18:48:00.508858 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a7385c55-b36b-486d-add0-958b8cece7de","Type":"ContainerDied","Data":"35531a89b45b21d436f51039dee2872292441946fc76cbc19142c9b37efc56d4"}
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.538821 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607","Type":"ContainerStarted","Data":"18eaf8dca70d4fb335560e1bef6b8d6431fc9112b942b9be1509d59f63ef5a18"}
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.539324 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.542459 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42255dff-2745-4bb9-a1fa-9c6f39327204","Type":"ContainerStarted","Data":"d4cd053903955edd8e43f97062778d55b27af38ad61137ea07d21f50890b3833"}
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.542585 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.545005 4813 generic.go:334] "Generic (PLEG): container finished" podID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerID="3ba0919aa460d3ab19ef502460953e6f6c2685b69618fe7479aaba3aaff111c3" exitCode=0
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.545100 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w7s6z" event={"ID":"096a88db-91ee-4ac3-b5ae-ba4bca838436","Type":"ContainerDied","Data":"3ba0919aa460d3ab19ef502460953e6f6c2685b69618fe7479aaba3aaff111c3"}
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.550430 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a7385c55-b36b-486d-add0-958b8cece7de","Type":"ContainerStarted","Data":"009f0408ab4ffcc4868aed437d6e30405baf42c34d352c5760db13ab83d1b55f"}
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.562225 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.782008087 podStartE2EDuration="24.562208142s" podCreationTimestamp="2026-02-19 18:47:38 +0000 UTC" firstStartedPulling="2026-02-19 18:47:56.072068728 +0000 UTC m=+1095.297509269" lastFinishedPulling="2026-02-19 18:48:00.852268783 +0000 UTC m=+1100.077709324" observedRunningTime="2026-02-19 18:48:02.555303388 +0000 UTC m=+1101.780743969" watchObservedRunningTime="2026-02-19 18:48:02.562208142 +0000 UTC m=+1101.787648683"
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.571166 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2f8101a7-841e-4fa7-b98a-030b82e66c94","Type":"ContainerStarted","Data":"bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5"}
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.573414 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vr4rs" event={"ID":"abaee778-ea35-4887-90c8-2834d3eef00d","Type":"ContainerStarted","Data":"0f7e482e975d5a408c6ad81d42d1de1741994d6d368e1feef74405613c2006a7"}
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.573524 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-vr4rs"
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.576149 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"25674cd0-03bb-481f-b039-b3b1db4ea1d4","Type":"ContainerStarted","Data":"8502c4b79b14dd3ab247b7d4c9c00c1dcc9265447761cd275f6e5165983c1b3d"}
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.585520 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b3304ea7-bdca-4b4c-b290-07eacbc6a646","Type":"ContainerStarted","Data":"d7b8203f3013dcbfda0d8e5f52cf6970deeae02a6a776d49e0ee9660ef66cb59"}
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.586697 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.586677135 podStartE2EDuration="25.586677135s" podCreationTimestamp="2026-02-19 18:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:48:02.58492906 +0000 UTC m=+1101.810369611" watchObservedRunningTime="2026-02-19 18:48:02.586677135 +0000 UTC m=+1101.812117686"
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.628043 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=16.899866306 podStartE2EDuration="22.628021863s" podCreationTimestamp="2026-02-19 18:47:40 +0000 UTC" firstStartedPulling="2026-02-19 18:47:55.951619185 +0000 UTC m=+1095.177059726" lastFinishedPulling="2026-02-19 18:48:01.679774742 +0000 UTC m=+1100.905215283" observedRunningTime="2026-02-19 18:48:02.625280358 +0000 UTC m=+1101.850720889" watchObservedRunningTime="2026-02-19 18:48:02.628021863 +0000 UTC m=+1101.853462404"
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.679643 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vr4rs" podStartSLOduration=13.325674929 podStartE2EDuration="18.6796278s" podCreationTimestamp="2026-02-19 18:47:44 +0000 UTC" firstStartedPulling="2026-02-19 18:47:56.070427326 +0000 UTC m=+1095.295867867" lastFinishedPulling="2026-02-19 18:48:01.424380167 +0000 UTC m=+1100.649820738" observedRunningTime="2026-02-19 18:48:02.650295217 +0000 UTC m=+1101.875735778" watchObservedRunningTime="2026-02-19 18:48:02.6796278 +0000 UTC m=+1101.905068341"
Feb 19 18:48:02 crc kubenswrapper[4813]: I0219 18:48:02.685534 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.268829096 podStartE2EDuration="26.685518954s" podCreationTimestamp="2026-02-19 18:47:36 +0000 UTC" firstStartedPulling="2026-02-19 18:47:38.077556179 +0000 UTC m=+1077.302996720" lastFinishedPulling="2026-02-19 18:47:55.494246037 +0000 UTC m=+1094.719686578" observedRunningTime="2026-02-19 18:48:02.678181286 +0000 UTC m=+1101.903621847" watchObservedRunningTime="2026-02-19 18:48:02.685518954 +0000 UTC m=+1101.910959495"
Feb 19 18:48:03 crc kubenswrapper[4813]: I0219 18:48:03.600560 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w7s6z" event={"ID":"096a88db-91ee-4ac3-b5ae-ba4bca838436","Type":"ContainerStarted","Data":"e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514"}
Feb 19 18:48:03 crc kubenswrapper[4813]: I0219 18:48:03.601154 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w7s6z" event={"ID":"096a88db-91ee-4ac3-b5ae-ba4bca838436","Type":"ContainerStarted","Data":"fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f"}
Feb 19 18:48:03 crc kubenswrapper[4813]: I0219 18:48:03.601216 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-w7s6z"
Feb 19 18:48:03 crc kubenswrapper[4813]: I0219 18:48:03.601254 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-w7s6z"
Feb 19 18:48:03 crc kubenswrapper[4813]: I0219 18:48:03.621876 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-w7s6z" podStartSLOduration=14.390270957 podStartE2EDuration="19.621858644s" podCreationTimestamp="2026-02-19 18:47:44 +0000 UTC" firstStartedPulling="2026-02-19 18:47:56.191356364 +0000 UTC m=+1095.416796905" lastFinishedPulling="2026-02-19 18:48:01.422944011 +0000 UTC m=+1100.648384592" observedRunningTime="2026-02-19 18:48:03.620557023 +0000 UTC m=+1102.845997574" watchObservedRunningTime="2026-02-19 18:48:03.621858644 +0000 UTC m=+1102.847299185"
Feb 19 18:48:04 crc kubenswrapper[4813]: I0219 18:48:04.612099 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"25674cd0-03bb-481f-b039-b3b1db4ea1d4","Type":"ContainerStarted","Data":"0d9e225c56d6e81840554a63919cdbe4f5142d75bbdc351a40de2802bf43b99e"}
Feb 19 18:48:04 crc kubenswrapper[4813]: I0219 18:48:04.616935 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b3304ea7-bdca-4b4c-b290-07eacbc6a646","Type":"ContainerStarted","Data":"fdb0030cda574aea48f90a9d627c54cbe19111ce3ca5470320cb6b9f671e12fe"}
Feb 19 18:48:04 crc kubenswrapper[4813]: I0219 18:48:04.647429 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.039149644 podStartE2EDuration="17.647408292s" podCreationTimestamp="2026-02-19 18:47:47 +0000 UTC" firstStartedPulling="2026-02-19 18:47:56.039132691 +0000 UTC m=+1095.264573232" lastFinishedPulling="2026-02-19 18:48:03.647391339 +0000 UTC m=+1102.872831880" observedRunningTime="2026-02-19 18:48:04.641196279 +0000 UTC m=+1103.866636860" watchObservedRunningTime="2026-02-19 18:48:04.647408292 +0000 UTC m=+1103.872848863"
Feb 19 18:48:04 crc kubenswrapper[4813]: I0219 18:48:04.683043 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.740640409000001 podStartE2EDuration="20.683017552s" podCreationTimestamp="2026-02-19 18:47:44 +0000 UTC" firstStartedPulling="2026-02-19 18:47:56.697992137 +0000 UTC m=+1095.923432668" lastFinishedPulling="2026-02-19 18:48:03.64036927 +0000 UTC m=+1102.865809811" observedRunningTime="2026-02-19 18:48:04.675148107 +0000 UTC m=+1103.900588688" watchObservedRunningTime="2026-02-19 18:48:04.683017552 +0000 UTC m=+1103.908458123"
Feb 19 18:48:05 crc kubenswrapper[4813]: I0219 18:48:05.858320 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 19 18:48:06 crc kubenswrapper[4813]: I0219 18:48:06.635140 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 19 18:48:06 crc kubenswrapper[4813]: I0219 18:48:06.635258 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db22a584-f05a-41ba-ad23-387b4100a9e1","Type":"ContainerStarted","Data":"0b467ef85f9a437d89647927595333d8dc397b82f76409cebc5dee43012b081e"}
Feb 19 18:48:06 crc kubenswrapper[4813]: I0219 18:48:06.695877 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 19 18:48:06 crc kubenswrapper[4813]: I0219 18:48:06.859200
4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 19 18:48:06 crc kubenswrapper[4813]: I0219 18:48:06.926120 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 19 18:48:06 crc kubenswrapper[4813]: E0219 18:48:06.938682 4813 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.69:53598->38.102.83.69:38045: write tcp 38.102.83.69:53598->38.102.83.69:38045: write: broken pipe Feb 19 18:48:07 crc kubenswrapper[4813]: I0219 18:48:07.486678 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 19 18:48:07 crc kubenswrapper[4813]: I0219 18:48:07.486720 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 19 18:48:07 crc kubenswrapper[4813]: I0219 18:48:07.575797 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 19 18:48:07 crc kubenswrapper[4813]: I0219 18:48:07.641908 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 19 18:48:07 crc kubenswrapper[4813]: I0219 18:48:07.679192 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 19 18:48:07 crc kubenswrapper[4813]: I0219 18:48:07.701062 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 19 18:48:07 crc kubenswrapper[4813]: I0219 18:48:07.745974 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 19 18:48:07 crc kubenswrapper[4813]: I0219 18:48:07.925402 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-9d5c2"] Feb 19 18:48:07 crc kubenswrapper[4813]: I0219 18:48:07.967040 4813 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-57bdd75c-fdt2d"] Feb 19 18:48:07 crc kubenswrapper[4813]: I0219 18:48:07.972055 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:07 crc kubenswrapper[4813]: I0219 18:48:07.974211 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 19 18:48:07 crc kubenswrapper[4813]: I0219 18:48:07.984919 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-fdt2d"] Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.040343 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-dns-svc\") pod \"dnsmasq-dns-57bdd75c-fdt2d\" (UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.040428 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z98d7\" (UniqueName: \"kubernetes.io/projected/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-kube-api-access-z98d7\") pod \"dnsmasq-dns-57bdd75c-fdt2d\" (UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.040458 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-ovsdbserver-nb\") pod \"dnsmasq-dns-57bdd75c-fdt2d\" (UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.040493 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-config\") pod \"dnsmasq-dns-57bdd75c-fdt2d\" (UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.141830 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z98d7\" (UniqueName: \"kubernetes.io/projected/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-kube-api-access-z98d7\") pod \"dnsmasq-dns-57bdd75c-fdt2d\" (UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.141884 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-ovsdbserver-nb\") pod \"dnsmasq-dns-57bdd75c-fdt2d\" (UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.141925 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-config\") pod \"dnsmasq-dns-57bdd75c-fdt2d\" (UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.141996 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-dns-svc\") pod \"dnsmasq-dns-57bdd75c-fdt2d\" (UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.143042 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-dns-svc\") pod \"dnsmasq-dns-57bdd75c-fdt2d\" 
(UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.143065 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-ovsdbserver-nb\") pod \"dnsmasq-dns-57bdd75c-fdt2d\" (UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.143612 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-config\") pod \"dnsmasq-dns-57bdd75c-fdt2d\" (UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.163903 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z98d7\" (UniqueName: \"kubernetes.io/projected/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-kube-api-access-z98d7\") pod \"dnsmasq-dns-57bdd75c-fdt2d\" (UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.246613 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-85btv"] Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.255938 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.263671 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.269632 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-c88qq"] Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.303087 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-85btv"] Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.305305 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.325422 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-j8v4k"] Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.326580 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.337240 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.355356 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8734aca-5b08-4847-b485-1d31add9fba1-config\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.355438 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8734aca-5b08-4847-b485-1d31add9fba1-combined-ca-bundle\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.355460 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c8734aca-5b08-4847-b485-1d31add9fba1-ovn-rundir\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.355478 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dktsq\" (UniqueName: \"kubernetes.io/projected/c8734aca-5b08-4847-b485-1d31add9fba1-kube-api-access-dktsq\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.355509 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c8734aca-5b08-4847-b485-1d31add9fba1-ovs-rundir\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.355530 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8734aca-5b08-4847-b485-1d31add9fba1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.365842 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-j8v4k"] Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.386026 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.387295 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.396247 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.396449 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-46g84" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.396591 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.396674 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.407637 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.439892 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.456840 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dktsq\" (UniqueName: \"kubernetes.io/projected/c8734aca-5b08-4847-b485-1d31add9fba1-kube-api-access-dktsq\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.456891 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c8734aca-5b08-4847-b485-1d31add9fba1-ovs-rundir\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.456918 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c675ac6-fe1c-42a1-b67e-f958aef3c086-config\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.456942 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8734aca-5b08-4847-b485-1d31add9fba1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.456979 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-config\") pod \"dnsmasq-dns-75b7bcc64f-j8v4k\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.457014 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-ovsdbserver-sb\") pod \"dnsmasq-dns-75b7bcc64f-j8v4k\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.457035 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.457054 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.457083 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-dns-svc\") pod \"dnsmasq-dns-75b7bcc64f-j8v4k\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.457100 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-ovsdbserver-nb\") pod \"dnsmasq-dns-75b7bcc64f-j8v4k\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.457118 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8734aca-5b08-4847-b485-1d31add9fba1-config\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.457145 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.457165 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8c675ac6-fe1c-42a1-b67e-f958aef3c086-scripts\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.457183 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zxds\" (UniqueName: \"kubernetes.io/projected/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-kube-api-access-8zxds\") pod \"dnsmasq-dns-75b7bcc64f-j8v4k\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.457208 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v8ll\" (UniqueName: \"kubernetes.io/projected/8c675ac6-fe1c-42a1-b67e-f958aef3c086-kube-api-access-8v8ll\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.457225 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c675ac6-fe1c-42a1-b67e-f958aef3c086-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.457247 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8734aca-5b08-4847-b485-1d31add9fba1-combined-ca-bundle\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.457262 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c8734aca-5b08-4847-b485-1d31add9fba1-ovn-rundir\") 
pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.457474 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c8734aca-5b08-4847-b485-1d31add9fba1-ovn-rundir\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.457678 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c8734aca-5b08-4847-b485-1d31add9fba1-ovs-rundir\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.458601 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8734aca-5b08-4847-b485-1d31add9fba1-config\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.465435 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8734aca-5b08-4847-b485-1d31add9fba1-combined-ca-bundle\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.469577 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8734aca-5b08-4847-b485-1d31add9fba1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " 
pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.472506 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dktsq\" (UniqueName: \"kubernetes.io/projected/c8734aca-5b08-4847-b485-1d31add9fba1-kube-api-access-dktsq\") pod \"ovn-controller-metrics-85btv\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.558634 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2db349ca-5a17-4b1e-b8fa-da6112ee5843-config\") pod \"2db349ca-5a17-4b1e-b8fa-da6112ee5843\" (UID: \"2db349ca-5a17-4b1e-b8fa-da6112ee5843\") " Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.558919 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2db349ca-5a17-4b1e-b8fa-da6112ee5843-dns-svc\") pod \"2db349ca-5a17-4b1e-b8fa-da6112ee5843\" (UID: \"2db349ca-5a17-4b1e-b8fa-da6112ee5843\") " Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.559052 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57kh5\" (UniqueName: \"kubernetes.io/projected/2db349ca-5a17-4b1e-b8fa-da6112ee5843-kube-api-access-57kh5\") pod \"2db349ca-5a17-4b1e-b8fa-da6112ee5843\" (UID: \"2db349ca-5a17-4b1e-b8fa-da6112ee5843\") " Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.559366 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-config\") pod \"dnsmasq-dns-75b7bcc64f-j8v4k\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.559394 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-ovsdbserver-sb\") pod \"dnsmasq-dns-75b7bcc64f-j8v4k\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.559422 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.559465 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.559461 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2db349ca-5a17-4b1e-b8fa-da6112ee5843-config" (OuterVolumeSpecName: "config") pod "2db349ca-5a17-4b1e-b8fa-da6112ee5843" (UID: "2db349ca-5a17-4b1e-b8fa-da6112ee5843"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.559530 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-dns-svc\") pod \"dnsmasq-dns-75b7bcc64f-j8v4k\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.559560 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-ovsdbserver-nb\") pod \"dnsmasq-dns-75b7bcc64f-j8v4k\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.559632 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.559659 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c675ac6-fe1c-42a1-b67e-f958aef3c086-scripts\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.559681 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zxds\" (UniqueName: \"kubernetes.io/projected/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-kube-api-access-8zxds\") pod \"dnsmasq-dns-75b7bcc64f-j8v4k\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.559727 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v8ll\" (UniqueName: \"kubernetes.io/projected/8c675ac6-fe1c-42a1-b67e-f958aef3c086-kube-api-access-8v8ll\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.559757 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c675ac6-fe1c-42a1-b67e-f958aef3c086-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.559808 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c675ac6-fe1c-42a1-b67e-f958aef3c086-config\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.559857 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2db349ca-5a17-4b1e-b8fa-da6112ee5843-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.560519 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-ovsdbserver-sb\") pod \"dnsmasq-dns-75b7bcc64f-j8v4k\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.561097 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c675ac6-fe1c-42a1-b67e-f958aef3c086-config\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: 
I0219 18:48:08.561118 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-config\") pod \"dnsmasq-dns-75b7bcc64f-j8v4k\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.561817 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c675ac6-fe1c-42a1-b67e-f958aef3c086-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.561843 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c675ac6-fe1c-42a1-b67e-f958aef3c086-scripts\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.562664 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-ovsdbserver-nb\") pod \"dnsmasq-dns-75b7bcc64f-j8v4k\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.563478 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-dns-svc\") pod \"dnsmasq-dns-75b7bcc64f-j8v4k\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.564405 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.566459 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2db349ca-5a17-4b1e-b8fa-da6112ee5843-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2db349ca-5a17-4b1e-b8fa-da6112ee5843" (UID: "2db349ca-5a17-4b1e-b8fa-da6112ee5843"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.566955 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.567463 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.576437 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2db349ca-5a17-4b1e-b8fa-da6112ee5843-kube-api-access-57kh5" (OuterVolumeSpecName: "kube-api-access-57kh5") pod "2db349ca-5a17-4b1e-b8fa-da6112ee5843" (UID: "2db349ca-5a17-4b1e-b8fa-da6112ee5843"). InnerVolumeSpecName "kube-api-access-57kh5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.579678 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v8ll\" (UniqueName: \"kubernetes.io/projected/8c675ac6-fe1c-42a1-b67e-f958aef3c086-kube-api-access-8v8ll\") pod \"ovn-northd-0\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.583187 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zxds\" (UniqueName: \"kubernetes.io/projected/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-kube-api-access-8zxds\") pod \"dnsmasq-dns-75b7bcc64f-j8v4k\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.617341 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.647740 4813 generic.go:334] "Generic (PLEG): container finished" podID="df1b69cc-9b96-474a-bc5d-a2e00a99d8b5" containerID="ac0549e207bc69564b24bda742327b56ef84ba0d1d08afca97bc09f0aeea4bac" exitCode=0 Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.647791 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-c88qq" event={"ID":"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5","Type":"ContainerDied","Data":"ac0549e207bc69564b24bda742327b56ef84ba0d1d08afca97bc09f0aeea4bac"} Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.650269 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.653934 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f54874ffc-9d5c2" event={"ID":"2db349ca-5a17-4b1e-b8fa-da6112ee5843","Type":"ContainerDied","Data":"b1a1d6b2cf89b5b2e09c714bdfb13d96548e39ffe37e59aa62904d97b2443a15"} Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.660542 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57kh5\" (UniqueName: \"kubernetes.io/projected/2db349ca-5a17-4b1e-b8fa-da6112ee5843-kube-api-access-57kh5\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.660559 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2db349ca-5a17-4b1e-b8fa-da6112ee5843-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.676958 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.714530 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.717561 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-9d5c2"] Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.749506 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f54874ffc-9d5c2"] Feb 19 18:48:08 crc kubenswrapper[4813]: W0219 18:48:08.812758 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd18d3e87_1b83_4a09_9e7b_f28e1e540d85.slice/crio-91c91832f18c31671e2dcdc348b92dfc0ae4c9689b7c9fda205a8cbd83cb2b83 WatchSource:0}: Error finding container 91c91832f18c31671e2dcdc348b92dfc0ae4c9689b7c9fda205a8cbd83cb2b83: Status 404 returned error can't find the container with id 91c91832f18c31671e2dcdc348b92dfc0ae4c9689b7c9fda205a8cbd83cb2b83 Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.815053 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-fdt2d"] Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.844561 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.844599 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 19 18:48:08 crc kubenswrapper[4813]: I0219 18:48:08.937393 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.133206 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-85btv"] Feb 19 18:48:09 crc kubenswrapper[4813]: W0219 18:48:09.138233 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8734aca_5b08_4847_b485_1d31add9fba1.slice/crio-bdb717649bdc194b1d9468b82ca9065a9baa5f497b97b5425c0cc6433105ac8f WatchSource:0}: Error finding container bdb717649bdc194b1d9468b82ca9065a9baa5f497b97b5425c0cc6433105ac8f: Status 404 returned error can't find the container with id bdb717649bdc194b1d9468b82ca9065a9baa5f497b97b5425c0cc6433105ac8f Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.173546 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.280028 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-j8v4k"] Feb 19 18:48:09 crc kubenswrapper[4813]: W0219 18:48:09.286896 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod389d3fe8_138e_47b3_83c6_5c7e7c5551b7.slice/crio-c773dd4ebb98302bd923d19c0d00234e36721ee8586ff365e7e2ae16f6108515 WatchSource:0}: Error finding container c773dd4ebb98302bd923d19c0d00234e36721ee8586ff365e7e2ae16f6108515: Status 404 returned error can't find the container with id c773dd4ebb98302bd923d19c0d00234e36721ee8586ff365e7e2ae16f6108515 Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.292294 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.364293 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-c88qq" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.469502 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-lm976"] Feb 19 18:48:09 crc kubenswrapper[4813]: E0219 18:48:09.469875 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1b69cc-9b96-474a-bc5d-a2e00a99d8b5" containerName="init" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.469893 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1b69cc-9b96-474a-bc5d-a2e00a99d8b5" containerName="init" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.470090 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1b69cc-9b96-474a-bc5d-a2e00a99d8b5" containerName="init" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.470590 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lm976" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.472360 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nprcb\" (UniqueName: \"kubernetes.io/projected/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-kube-api-access-nprcb\") pod \"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5\" (UID: \"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5\") " Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.472408 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-dns-svc\") pod \"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5\" (UID: \"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5\") " Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.472446 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-config\") pod \"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5\" (UID: 
\"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5\") " Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.477445 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-kube-api-access-nprcb" (OuterVolumeSpecName: "kube-api-access-nprcb") pod "df1b69cc-9b96-474a-bc5d-a2e00a99d8b5" (UID: "df1b69cc-9b96-474a-bc5d-a2e00a99d8b5"). InnerVolumeSpecName "kube-api-access-nprcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.489072 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2db349ca-5a17-4b1e-b8fa-da6112ee5843" path="/var/lib/kubelet/pods/2db349ca-5a17-4b1e-b8fa-da6112ee5843/volumes" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.509312 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f626-account-create-update-jp84f"] Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.510278 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f626-account-create-update-jp84f" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.511726 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.515582 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lm976"] Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.520193 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-config" (OuterVolumeSpecName: "config") pod "df1b69cc-9b96-474a-bc5d-a2e00a99d8b5" (UID: "df1b69cc-9b96-474a-bc5d-a2e00a99d8b5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.521060 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f626-account-create-update-jp84f"] Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.540837 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df1b69cc-9b96-474a-bc5d-a2e00a99d8b5" (UID: "df1b69cc-9b96-474a-bc5d-a2e00a99d8b5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.573872 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plb82\" (UniqueName: \"kubernetes.io/projected/319f74d2-fc89-40c3-98a6-43f9c3ec542e-kube-api-access-plb82\") pod \"glance-db-create-lm976\" (UID: \"319f74d2-fc89-40c3-98a6-43f9c3ec542e\") " pod="openstack/glance-db-create-lm976" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.573939 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/319f74d2-fc89-40c3-98a6-43f9c3ec542e-operator-scripts\") pod \"glance-db-create-lm976\" (UID: \"319f74d2-fc89-40c3-98a6-43f9c3ec542e\") " pod="openstack/glance-db-create-lm976" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.574083 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.574093 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:09 crc 
kubenswrapper[4813]: I0219 18:48:09.574103 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nprcb\" (UniqueName: \"kubernetes.io/projected/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5-kube-api-access-nprcb\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.657006 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8c675ac6-fe1c-42a1-b67e-f958aef3c086","Type":"ContainerStarted","Data":"095d5d76bcb19778cb1e9862a6c6d2c3bcbbdd65e6cc2a1a688411a3d0f3b087"} Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.661599 4813 generic.go:334] "Generic (PLEG): container finished" podID="389d3fe8-138e-47b3-83c6-5c7e7c5551b7" containerID="ea5651b62096a33f83198625eb36338067ea156edf6f99abd43cec029af3078a" exitCode=0 Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.661661 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" event={"ID":"389d3fe8-138e-47b3-83c6-5c7e7c5551b7","Type":"ContainerDied","Data":"ea5651b62096a33f83198625eb36338067ea156edf6f99abd43cec029af3078a"} Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.661688 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" event={"ID":"389d3fe8-138e-47b3-83c6-5c7e7c5551b7","Type":"ContainerStarted","Data":"c773dd4ebb98302bd923d19c0d00234e36721ee8586ff365e7e2ae16f6108515"} Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.663884 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff45466c-c88qq" event={"ID":"df1b69cc-9b96-474a-bc5d-a2e00a99d8b5","Type":"ContainerDied","Data":"974730f15e9ad142f1deffb1bbfc83dd0f2224d8164e93803b64e73fc5746b50"} Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.663976 4813 scope.go:117] "RemoveContainer" containerID="ac0549e207bc69564b24bda742327b56ef84ba0d1d08afca97bc09f0aeea4bac" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.664160 
4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff45466c-c88qq" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.668847 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-85btv" event={"ID":"c8734aca-5b08-4847-b485-1d31add9fba1","Type":"ContainerStarted","Data":"3c2c71c20556e819af235dccf56fcb9253ca0ffc4dcbeab4a4b3cac3cd471eda"} Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.668887 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-85btv" event={"ID":"c8734aca-5b08-4847-b485-1d31add9fba1","Type":"ContainerStarted","Data":"bdb717649bdc194b1d9468b82ca9065a9baa5f497b97b5425c0cc6433105ac8f"} Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.671279 4813 generic.go:334] "Generic (PLEG): container finished" podID="d18d3e87-1b83-4a09-9e7b-f28e1e540d85" containerID="bc88b0a71be71cdc231f09749fe8a19381f586e732a9331107df182b0b457875" exitCode=0 Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.671341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" event={"ID":"d18d3e87-1b83-4a09-9e7b-f28e1e540d85","Type":"ContainerDied","Data":"bc88b0a71be71cdc231f09749fe8a19381f586e732a9331107df182b0b457875"} Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.671363 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" event={"ID":"d18d3e87-1b83-4a09-9e7b-f28e1e540d85","Type":"ContainerStarted","Data":"91c91832f18c31671e2dcdc348b92dfc0ae4c9689b7c9fda205a8cbd83cb2b83"} Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.673657 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c69ff3db-8806-451a-9df0-c6289c327579","Type":"ContainerStarted","Data":"b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168"} Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 
18:48:09.674873 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd93a436-429b-4ec3-8c52-f5a50c1d1ae2-operator-scripts\") pod \"glance-f626-account-create-update-jp84f\" (UID: \"fd93a436-429b-4ec3-8c52-f5a50c1d1ae2\") " pod="openstack/glance-f626-account-create-update-jp84f" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.674903 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tkvv\" (UniqueName: \"kubernetes.io/projected/fd93a436-429b-4ec3-8c52-f5a50c1d1ae2-kube-api-access-5tkvv\") pod \"glance-f626-account-create-update-jp84f\" (UID: \"fd93a436-429b-4ec3-8c52-f5a50c1d1ae2\") " pod="openstack/glance-f626-account-create-update-jp84f" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.674938 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plb82\" (UniqueName: \"kubernetes.io/projected/319f74d2-fc89-40c3-98a6-43f9c3ec542e-kube-api-access-plb82\") pod \"glance-db-create-lm976\" (UID: \"319f74d2-fc89-40c3-98a6-43f9c3ec542e\") " pod="openstack/glance-db-create-lm976" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.675040 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/319f74d2-fc89-40c3-98a6-43f9c3ec542e-operator-scripts\") pod \"glance-db-create-lm976\" (UID: \"319f74d2-fc89-40c3-98a6-43f9c3ec542e\") " pod="openstack/glance-db-create-lm976" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.675592 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/319f74d2-fc89-40c3-98a6-43f9c3ec542e-operator-scripts\") pod \"glance-db-create-lm976\" (UID: \"319f74d2-fc89-40c3-98a6-43f9c3ec542e\") " pod="openstack/glance-db-create-lm976" Feb 19 18:48:09 crc 
kubenswrapper[4813]: I0219 18:48:09.696247 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plb82\" (UniqueName: \"kubernetes.io/projected/319f74d2-fc89-40c3-98a6-43f9c3ec542e-kube-api-access-plb82\") pod \"glance-db-create-lm976\" (UID: \"319f74d2-fc89-40c3-98a6-43f9c3ec542e\") " pod="openstack/glance-db-create-lm976" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.737481 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-85btv" podStartSLOduration=1.737462232 podStartE2EDuration="1.737462232s" podCreationTimestamp="2026-02-19 18:48:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:48:09.725650374 +0000 UTC m=+1108.951090915" watchObservedRunningTime="2026-02-19 18:48:09.737462232 +0000 UTC m=+1108.962902773" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.777935 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd93a436-429b-4ec3-8c52-f5a50c1d1ae2-operator-scripts\") pod \"glance-f626-account-create-update-jp84f\" (UID: \"fd93a436-429b-4ec3-8c52-f5a50c1d1ae2\") " pod="openstack/glance-f626-account-create-update-jp84f" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.778003 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tkvv\" (UniqueName: \"kubernetes.io/projected/fd93a436-429b-4ec3-8c52-f5a50c1d1ae2-kube-api-access-5tkvv\") pod \"glance-f626-account-create-update-jp84f\" (UID: \"fd93a436-429b-4ec3-8c52-f5a50c1d1ae2\") " pod="openstack/glance-f626-account-create-update-jp84f" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.780617 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/fd93a436-429b-4ec3-8c52-f5a50c1d1ae2-operator-scripts\") pod \"glance-f626-account-create-update-jp84f\" (UID: \"fd93a436-429b-4ec3-8c52-f5a50c1d1ae2\") " pod="openstack/glance-f626-account-create-update-jp84f" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.791063 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.818078 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tkvv\" (UniqueName: \"kubernetes.io/projected/fd93a436-429b-4ec3-8c52-f5a50c1d1ae2-kube-api-access-5tkvv\") pod \"glance-f626-account-create-update-jp84f\" (UID: \"fd93a436-429b-4ec3-8c52-f5a50c1d1ae2\") " pod="openstack/glance-f626-account-create-update-jp84f" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.818492 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-c88qq"] Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.821556 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67ff45466c-c88qq"] Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.831512 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lm976" Feb 19 18:48:09 crc kubenswrapper[4813]: I0219 18:48:09.842261 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f626-account-create-update-jp84f" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.161694 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-5h4p6"] Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.163356 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5h4p6" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.175068 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-734c-account-create-update-wj88l"] Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.178566 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-734c-account-create-update-wj88l" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.180466 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.187725 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5h4p6"] Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.202463 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-734c-account-create-update-wj88l"] Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.291250 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25w74\" (UniqueName: \"kubernetes.io/projected/46cbac0e-c130-4aed-9c70-e4c4d7378092-kube-api-access-25w74\") pod \"keystone-db-create-5h4p6\" (UID: \"46cbac0e-c130-4aed-9c70-e4c4d7378092\") " pod="openstack/keystone-db-create-5h4p6" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.291319 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8ks8\" (UniqueName: \"kubernetes.io/projected/43b357f4-2181-45a7-9d60-1a90f76b1c77-kube-api-access-w8ks8\") pod \"keystone-734c-account-create-update-wj88l\" (UID: \"43b357f4-2181-45a7-9d60-1a90f76b1c77\") " pod="openstack/keystone-734c-account-create-update-wj88l" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.291448 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/43b357f4-2181-45a7-9d60-1a90f76b1c77-operator-scripts\") pod \"keystone-734c-account-create-update-wj88l\" (UID: \"43b357f4-2181-45a7-9d60-1a90f76b1c77\") " pod="openstack/keystone-734c-account-create-update-wj88l" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.291567 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46cbac0e-c130-4aed-9c70-e4c4d7378092-operator-scripts\") pod \"keystone-db-create-5h4p6\" (UID: \"46cbac0e-c130-4aed-9c70-e4c4d7378092\") " pod="openstack/keystone-db-create-5h4p6" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.376847 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f626-account-create-update-jp84f"] Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.389064 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-pf8pv"] Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.390387 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pf8pv" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.393127 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8ks8\" (UniqueName: \"kubernetes.io/projected/43b357f4-2181-45a7-9d60-1a90f76b1c77-kube-api-access-w8ks8\") pod \"keystone-734c-account-create-update-wj88l\" (UID: \"43b357f4-2181-45a7-9d60-1a90f76b1c77\") " pod="openstack/keystone-734c-account-create-update-wj88l" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.393209 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43b357f4-2181-45a7-9d60-1a90f76b1c77-operator-scripts\") pod \"keystone-734c-account-create-update-wj88l\" (UID: \"43b357f4-2181-45a7-9d60-1a90f76b1c77\") " pod="openstack/keystone-734c-account-create-update-wj88l" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.393245 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46cbac0e-c130-4aed-9c70-e4c4d7378092-operator-scripts\") pod \"keystone-db-create-5h4p6\" (UID: \"46cbac0e-c130-4aed-9c70-e4c4d7378092\") " pod="openstack/keystone-db-create-5h4p6" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.393310 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25w74\" (UniqueName: \"kubernetes.io/projected/46cbac0e-c130-4aed-9c70-e4c4d7378092-kube-api-access-25w74\") pod \"keystone-db-create-5h4p6\" (UID: \"46cbac0e-c130-4aed-9c70-e4c4d7378092\") " pod="openstack/keystone-db-create-5h4p6" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.395117 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43b357f4-2181-45a7-9d60-1a90f76b1c77-operator-scripts\") pod \"keystone-734c-account-create-update-wj88l\" 
(UID: \"43b357f4-2181-45a7-9d60-1a90f76b1c77\") " pod="openstack/keystone-734c-account-create-update-wj88l" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.396055 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46cbac0e-c130-4aed-9c70-e4c4d7378092-operator-scripts\") pod \"keystone-db-create-5h4p6\" (UID: \"46cbac0e-c130-4aed-9c70-e4c4d7378092\") " pod="openstack/keystone-db-create-5h4p6" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.403660 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pf8pv"] Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.415712 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d489-account-create-update-xvmmd"] Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.417634 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d489-account-create-update-xvmmd" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.425703 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.435308 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8ks8\" (UniqueName: \"kubernetes.io/projected/43b357f4-2181-45a7-9d60-1a90f76b1c77-kube-api-access-w8ks8\") pod \"keystone-734c-account-create-update-wj88l\" (UID: \"43b357f4-2181-45a7-9d60-1a90f76b1c77\") " pod="openstack/keystone-734c-account-create-update-wj88l" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.439279 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d489-account-create-update-xvmmd"] Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.445774 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25w74\" (UniqueName: 
\"kubernetes.io/projected/46cbac0e-c130-4aed-9c70-e4c4d7378092-kube-api-access-25w74\") pod \"keystone-db-create-5h4p6\" (UID: \"46cbac0e-c130-4aed-9c70-e4c4d7378092\") " pod="openstack/keystone-db-create-5h4p6" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.492812 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5h4p6" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.507337 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2a6f952-c093-4d22-9452-3043f5f26472-operator-scripts\") pod \"placement-db-create-pf8pv\" (UID: \"d2a6f952-c093-4d22-9452-3043f5f26472\") " pod="openstack/placement-db-create-pf8pv" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.507402 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3296d69-fa27-44f5-89a4-5122c3662dc5-operator-scripts\") pod \"placement-d489-account-create-update-xvmmd\" (UID: \"d3296d69-fa27-44f5-89a4-5122c3662dc5\") " pod="openstack/placement-d489-account-create-update-xvmmd" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.507581 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjx6b\" (UniqueName: \"kubernetes.io/projected/d2a6f952-c093-4d22-9452-3043f5f26472-kube-api-access-zjx6b\") pod \"placement-db-create-pf8pv\" (UID: \"d2a6f952-c093-4d22-9452-3043f5f26472\") " pod="openstack/placement-db-create-pf8pv" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.507658 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srp5c\" (UniqueName: \"kubernetes.io/projected/d3296d69-fa27-44f5-89a4-5122c3662dc5-kube-api-access-srp5c\") pod \"placement-d489-account-create-update-xvmmd\" 
(UID: \"d3296d69-fa27-44f5-89a4-5122c3662dc5\") " pod="openstack/placement-d489-account-create-update-xvmmd" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.507760 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-734c-account-create-update-wj88l" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.549896 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lm976"] Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.609500 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2a6f952-c093-4d22-9452-3043f5f26472-operator-scripts\") pod \"placement-db-create-pf8pv\" (UID: \"d2a6f952-c093-4d22-9452-3043f5f26472\") " pod="openstack/placement-db-create-pf8pv" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.610342 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3296d69-fa27-44f5-89a4-5122c3662dc5-operator-scripts\") pod \"placement-d489-account-create-update-xvmmd\" (UID: \"d3296d69-fa27-44f5-89a4-5122c3662dc5\") " pod="openstack/placement-d489-account-create-update-xvmmd" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.610490 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjx6b\" (UniqueName: \"kubernetes.io/projected/d2a6f952-c093-4d22-9452-3043f5f26472-kube-api-access-zjx6b\") pod \"placement-db-create-pf8pv\" (UID: \"d2a6f952-c093-4d22-9452-3043f5f26472\") " pod="openstack/placement-db-create-pf8pv" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.610551 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srp5c\" (UniqueName: \"kubernetes.io/projected/d3296d69-fa27-44f5-89a4-5122c3662dc5-kube-api-access-srp5c\") pod \"placement-d489-account-create-update-xvmmd\" (UID: 
\"d3296d69-fa27-44f5-89a4-5122c3662dc5\") " pod="openstack/placement-d489-account-create-update-xvmmd" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.610234 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2a6f952-c093-4d22-9452-3043f5f26472-operator-scripts\") pod \"placement-db-create-pf8pv\" (UID: \"d2a6f952-c093-4d22-9452-3043f5f26472\") " pod="openstack/placement-db-create-pf8pv" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.611999 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3296d69-fa27-44f5-89a4-5122c3662dc5-operator-scripts\") pod \"placement-d489-account-create-update-xvmmd\" (UID: \"d3296d69-fa27-44f5-89a4-5122c3662dc5\") " pod="openstack/placement-d489-account-create-update-xvmmd" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.627320 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srp5c\" (UniqueName: \"kubernetes.io/projected/d3296d69-fa27-44f5-89a4-5122c3662dc5-kube-api-access-srp5c\") pod \"placement-d489-account-create-update-xvmmd\" (UID: \"d3296d69-fa27-44f5-89a4-5122c3662dc5\") " pod="openstack/placement-d489-account-create-update-xvmmd" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.628152 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjx6b\" (UniqueName: \"kubernetes.io/projected/d2a6f952-c093-4d22-9452-3043f5f26472-kube-api-access-zjx6b\") pod \"placement-db-create-pf8pv\" (UID: \"d2a6f952-c093-4d22-9452-3043f5f26472\") " pod="openstack/placement-db-create-pf8pv" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.686889 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" 
event={"ID":"d18d3e87-1b83-4a09-9e7b-f28e1e540d85","Type":"ContainerStarted","Data":"8f8371a2c83a18ac8310710e7402f2978d277543c99dd8fd10ec756f91477d7c"} Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.687015 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.689606 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" event={"ID":"389d3fe8-138e-47b3-83c6-5c7e7c5551b7","Type":"ContainerStarted","Data":"fec741ddb8fc2066033960ff2fc4b154f16f3a7328f78d6ad0919d4c295cfa9c"} Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.689726 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.706369 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" podStartSLOduration=3.7063526060000003 podStartE2EDuration="3.706352606s" podCreationTimestamp="2026-02-19 18:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:48:10.702463395 +0000 UTC m=+1109.927903946" watchObservedRunningTime="2026-02-19 18:48:10.706352606 +0000 UTC m=+1109.931793147" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.713198 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pf8pv" Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.722212 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" podStartSLOduration=2.72219524 podStartE2EDuration="2.72219524s" podCreationTimestamp="2026-02-19 18:48:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:48:10.718635509 +0000 UTC m=+1109.944076050" watchObservedRunningTime="2026-02-19 18:48:10.72219524 +0000 UTC m=+1109.947635781" Feb 19 18:48:10 crc kubenswrapper[4813]: W0219 18:48:10.726600 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd93a436_429b_4ec3_8c52_f5a50c1d1ae2.slice/crio-e20a5da38df80a136b96a32e8a1119783945ede9ee92546f0918734f4f701606 WatchSource:0}: Error finding container e20a5da38df80a136b96a32e8a1119783945ede9ee92546f0918734f4f701606: Status 404 returned error can't find the container with id e20a5da38df80a136b96a32e8a1119783945ede9ee92546f0918734f4f701606 Feb 19 18:48:10 crc kubenswrapper[4813]: W0219 18:48:10.729012 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod319f74d2_fc89_40c3_98a6_43f9c3ec542e.slice/crio-2e663330a915c9c13fb8482c4d7dcee5d6d15236bcbb861f411d7ab60b2c3389 WatchSource:0}: Error finding container 2e663330a915c9c13fb8482c4d7dcee5d6d15236bcbb861f411d7ab60b2c3389: Status 404 returned error can't find the container with id 2e663330a915c9c13fb8482c4d7dcee5d6d15236bcbb861f411d7ab60b2c3389 Feb 19 18:48:10 crc kubenswrapper[4813]: I0219 18:48:10.795700 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d489-account-create-update-xvmmd" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.239051 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-fdt2d"] Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.242827 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.272493 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-734c-account-create-update-wj88l"] Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.304020 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-5dff4"] Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.305512 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.312932 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-5dff4"] Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.377107 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d489-account-create-update-xvmmd"] Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.387063 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5h4p6"] Feb 19 18:48:11 crc kubenswrapper[4813]: W0219 18:48:11.392681 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3296d69_fa27_44f5_89a4_5122c3662dc5.slice/crio-2f2ef1d11c6f2940a3e15eb084283628dad75c0ce3706f34220a94a1317729ec WatchSource:0}: Error finding container 2f2ef1d11c6f2940a3e15eb084283628dad75c0ce3706f34220a94a1317729ec: Status 404 returned error can't find the container with id 2f2ef1d11c6f2940a3e15eb084283628dad75c0ce3706f34220a94a1317729ec Feb 19 
18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.435699 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-config\") pod \"dnsmasq-dns-689df5d84f-5dff4\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.435976 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-ovsdbserver-sb\") pod \"dnsmasq-dns-689df5d84f-5dff4\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.436010 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbwhg\" (UniqueName: \"kubernetes.io/projected/f8e97206-5dce-4e2a-989d-aaf8a78c053f-kube-api-access-xbwhg\") pod \"dnsmasq-dns-689df5d84f-5dff4\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.436033 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-ovsdbserver-nb\") pod \"dnsmasq-dns-689df5d84f-5dff4\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.436119 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-dns-svc\") pod \"dnsmasq-dns-689df5d84f-5dff4\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " 
pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.529202 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df1b69cc-9b96-474a-bc5d-a2e00a99d8b5" path="/var/lib/kubelet/pods/df1b69cc-9b96-474a-bc5d-a2e00a99d8b5/volumes" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.540254 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-ovsdbserver-sb\") pod \"dnsmasq-dns-689df5d84f-5dff4\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.540318 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbwhg\" (UniqueName: \"kubernetes.io/projected/f8e97206-5dce-4e2a-989d-aaf8a78c053f-kube-api-access-xbwhg\") pod \"dnsmasq-dns-689df5d84f-5dff4\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.540351 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-ovsdbserver-nb\") pod \"dnsmasq-dns-689df5d84f-5dff4\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.540570 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-dns-svc\") pod \"dnsmasq-dns-689df5d84f-5dff4\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.540711 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-config\") pod \"dnsmasq-dns-689df5d84f-5dff4\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.542231 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-ovsdbserver-sb\") pod \"dnsmasq-dns-689df5d84f-5dff4\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.542478 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-ovsdbserver-nb\") pod \"dnsmasq-dns-689df5d84f-5dff4\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.542963 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-config\") pod \"dnsmasq-dns-689df5d84f-5dff4\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.543585 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-dns-svc\") pod \"dnsmasq-dns-689df5d84f-5dff4\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.573989 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbwhg\" (UniqueName: \"kubernetes.io/projected/f8e97206-5dce-4e2a-989d-aaf8a78c053f-kube-api-access-xbwhg\") pod 
\"dnsmasq-dns-689df5d84f-5dff4\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.616766 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pf8pv"] Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.660788 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.715463 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8c675ac6-fe1c-42a1-b67e-f958aef3c086","Type":"ContainerStarted","Data":"ef8054e602e4f8d1b1826408ddcea8dc3c850848bc422d974dd7ed560d2484d9"} Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.715504 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8c675ac6-fe1c-42a1-b67e-f958aef3c086","Type":"ContainerStarted","Data":"9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1"} Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.715692 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.718298 4813 generic.go:334] "Generic (PLEG): container finished" podID="fd93a436-429b-4ec3-8c52-f5a50c1d1ae2" containerID="d54b63de3aad6d9e307f23a72e1f7901457bc2d8af8395fb0d3e63bbbebd3eb2" exitCode=0 Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.718339 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f626-account-create-update-jp84f" event={"ID":"fd93a436-429b-4ec3-8c52-f5a50c1d1ae2","Type":"ContainerDied","Data":"d54b63de3aad6d9e307f23a72e1f7901457bc2d8af8395fb0d3e63bbbebd3eb2"} Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.718356 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f626-account-create-update-jp84f" 
event={"ID":"fd93a436-429b-4ec3-8c52-f5a50c1d1ae2","Type":"ContainerStarted","Data":"e20a5da38df80a136b96a32e8a1119783945ede9ee92546f0918734f4f701606"} Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.724198 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pf8pv" event={"ID":"d2a6f952-c093-4d22-9452-3043f5f26472","Type":"ContainerStarted","Data":"3ab38db17a290578730ef302962fa4927496171ec7c88e490570c9cd3ccdc72f"} Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.728199 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d489-account-create-update-xvmmd" event={"ID":"d3296d69-fa27-44f5-89a4-5122c3662dc5","Type":"ContainerStarted","Data":"6493a7242bce5b2c90612f6408da117da88b1ac72061ec6857947e43348e30c1"} Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.728233 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d489-account-create-update-xvmmd" event={"ID":"d3296d69-fa27-44f5-89a4-5122c3662dc5","Type":"ContainerStarted","Data":"2f2ef1d11c6f2940a3e15eb084283628dad75c0ce3706f34220a94a1317729ec"} Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.730191 4813 generic.go:334] "Generic (PLEG): container finished" podID="319f74d2-fc89-40c3-98a6-43f9c3ec542e" containerID="a83bcfd4da6cf2a51f8d0ca5581bc152a4dc583f7a672ddd92d87f20e8c77394" exitCode=0 Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.730230 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lm976" event={"ID":"319f74d2-fc89-40c3-98a6-43f9c3ec542e","Type":"ContainerDied","Data":"a83bcfd4da6cf2a51f8d0ca5581bc152a4dc583f7a672ddd92d87f20e8c77394"} Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.730246 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lm976" event={"ID":"319f74d2-fc89-40c3-98a6-43f9c3ec542e","Type":"ContainerStarted","Data":"2e663330a915c9c13fb8482c4d7dcee5d6d15236bcbb861f411d7ab60b2c3389"} Feb 19 
18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.732287 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-734c-account-create-update-wj88l" event={"ID":"43b357f4-2181-45a7-9d60-1a90f76b1c77","Type":"ContainerStarted","Data":"c663bf7132a1d71bd88008b90def56ee6497f7202061d34bce6f3a92513d526c"} Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.732314 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-734c-account-create-update-wj88l" event={"ID":"43b357f4-2181-45a7-9d60-1a90f76b1c77","Type":"ContainerStarted","Data":"7630babb31ca8f0641c43cb826a13e513b25de01427857792659b38850102578"} Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.742488 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5h4p6" event={"ID":"46cbac0e-c130-4aed-9c70-e4c4d7378092","Type":"ContainerStarted","Data":"a19eca13cf4bbec8b7d1a7200d049c7f5ef1d7da3e57d75afa6035f101b45e80"} Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.742524 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5h4p6" event={"ID":"46cbac0e-c130-4aed-9c70-e4c4d7378092","Type":"ContainerStarted","Data":"faa0f4c307809db1bfc3b434d7ce9c353eb6317bd3aab98599b6e087d827fbb0"} Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.743243 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.252660213 podStartE2EDuration="3.743233338s" podCreationTimestamp="2026-02-19 18:48:08 +0000 UTC" firstStartedPulling="2026-02-19 18:48:09.296882067 +0000 UTC m=+1108.522322608" lastFinishedPulling="2026-02-19 18:48:10.787455192 +0000 UTC m=+1110.012895733" observedRunningTime="2026-02-19 18:48:11.733448823 +0000 UTC m=+1110.958889364" watchObservedRunningTime="2026-02-19 18:48:11.743233338 +0000 UTC m=+1110.968673879" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.758860 4813 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/keystone-734c-account-create-update-wj88l" podStartSLOduration=1.758843334 podStartE2EDuration="1.758843334s" podCreationTimestamp="2026-02-19 18:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:48:11.75419868 +0000 UTC m=+1110.979639221" watchObservedRunningTime="2026-02-19 18:48:11.758843334 +0000 UTC m=+1110.984283875" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.774030 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d489-account-create-update-xvmmd" podStartSLOduration=1.773960735 podStartE2EDuration="1.773960735s" podCreationTimestamp="2026-02-19 18:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:48:11.765397439 +0000 UTC m=+1110.990837980" watchObservedRunningTime="2026-02-19 18:48:11.773960735 +0000 UTC m=+1110.999401276" Feb 19 18:48:11 crc kubenswrapper[4813]: I0219 18:48:11.816715 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-5h4p6" podStartSLOduration=1.816699817 podStartE2EDuration="1.816699817s" podCreationTimestamp="2026-02-19 18:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:48:11.809372818 +0000 UTC m=+1111.034813359" watchObservedRunningTime="2026-02-19 18:48:11.816699817 +0000 UTC m=+1111.042140358" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.174941 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-5dff4"] Feb 19 18:48:12 crc kubenswrapper[4813]: W0219 18:48:12.175428 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8e97206_5dce_4e2a_989d_aaf8a78c053f.slice/crio-66327d62f30c9b706733ee8fc42751d124a483e326b720e32b133904814b1af2 WatchSource:0}: Error finding container 66327d62f30c9b706733ee8fc42751d124a483e326b720e32b133904814b1af2: Status 404 returned error can't find the container with id 66327d62f30c9b706733ee8fc42751d124a483e326b720e32b133904814b1af2 Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.371779 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.379274 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.382529 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.382781 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.383050 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-bhhf2" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.383206 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.405379 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.454280 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxkw8\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-kube-api-access-pxkw8\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 
18:48:12.454355 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb5586f-0789-4095-84e2-32c8c41984c1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.454383 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.454458 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.454486 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1cb5586f-0789-4095-84e2-32c8c41984c1-cache\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.454500 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1cb5586f-0789-4095-84e2-32c8c41984c1-lock\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.556498 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.556551 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1cb5586f-0789-4095-84e2-32c8c41984c1-cache\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.556567 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1cb5586f-0789-4095-84e2-32c8c41984c1-lock\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.556613 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxkw8\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-kube-api-access-pxkw8\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.556650 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb5586f-0789-4095-84e2-32c8c41984c1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.556672 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: E0219 
18:48:12.556761 4813 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 18:48:12 crc kubenswrapper[4813]: E0219 18:48:12.556774 4813 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 18:48:12 crc kubenswrapper[4813]: E0219 18:48:12.556816 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift podName:1cb5586f-0789-4095-84e2-32c8c41984c1 nodeName:}" failed. No retries permitted until 2026-02-19 18:48:13.056801753 +0000 UTC m=+1112.282242294 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift") pod "swift-storage-0" (UID: "1cb5586f-0789-4095-84e2-32c8c41984c1") : configmap "swift-ring-files" not found Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.557322 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1cb5586f-0789-4095-84e2-32c8c41984c1-lock\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.557936 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.558120 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1cb5586f-0789-4095-84e2-32c8c41984c1-cache\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " 
pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.568181 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb5586f-0789-4095-84e2-32c8c41984c1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.583644 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.597738 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxkw8\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-kube-api-access-pxkw8\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.749446 4813 generic.go:334] "Generic (PLEG): container finished" podID="d2a6f952-c093-4d22-9452-3043f5f26472" containerID="590f2f643680a87703dacbc74bd36c5a912b0e44830cf1d97e1cd2bc88daa9ea" exitCode=0 Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.749508 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pf8pv" event={"ID":"d2a6f952-c093-4d22-9452-3043f5f26472","Type":"ContainerDied","Data":"590f2f643680a87703dacbc74bd36c5a912b0e44830cf1d97e1cd2bc88daa9ea"} Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.750752 4813 generic.go:334] "Generic (PLEG): container finished" podID="d3296d69-fa27-44f5-89a4-5122c3662dc5" containerID="6493a7242bce5b2c90612f6408da117da88b1ac72061ec6857947e43348e30c1" exitCode=0 Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.750798 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/placement-d489-account-create-update-xvmmd" event={"ID":"d3296d69-fa27-44f5-89a4-5122c3662dc5","Type":"ContainerDied","Data":"6493a7242bce5b2c90612f6408da117da88b1ac72061ec6857947e43348e30c1"} Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.752211 4813 generic.go:334] "Generic (PLEG): container finished" podID="46cbac0e-c130-4aed-9c70-e4c4d7378092" containerID="a19eca13cf4bbec8b7d1a7200d049c7f5ef1d7da3e57d75afa6035f101b45e80" exitCode=0 Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.752264 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5h4p6" event={"ID":"46cbac0e-c130-4aed-9c70-e4c4d7378092","Type":"ContainerDied","Data":"a19eca13cf4bbec8b7d1a7200d049c7f5ef1d7da3e57d75afa6035f101b45e80"} Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.753220 4813 generic.go:334] "Generic (PLEG): container finished" podID="f8e97206-5dce-4e2a-989d-aaf8a78c053f" containerID="93a4eaddfdee1c1f9f1c0d5aa36e876e2f2aa0c39c763936e56af0c00822e728" exitCode=0 Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.753334 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-5dff4" event={"ID":"f8e97206-5dce-4e2a-989d-aaf8a78c053f","Type":"ContainerDied","Data":"93a4eaddfdee1c1f9f1c0d5aa36e876e2f2aa0c39c763936e56af0c00822e728"} Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.753357 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-5dff4" event={"ID":"f8e97206-5dce-4e2a-989d-aaf8a78c053f","Type":"ContainerStarted","Data":"66327d62f30c9b706733ee8fc42751d124a483e326b720e32b133904814b1af2"} Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.754692 4813 generic.go:334] "Generic (PLEG): container finished" podID="43b357f4-2181-45a7-9d60-1a90f76b1c77" containerID="c663bf7132a1d71bd88008b90def56ee6497f7202061d34bce6f3a92513d526c" exitCode=0 Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 
18:48:12.754716 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-734c-account-create-update-wj88l" event={"ID":"43b357f4-2181-45a7-9d60-1a90f76b1c77","Type":"ContainerDied","Data":"c663bf7132a1d71bd88008b90def56ee6497f7202061d34bce6f3a92513d526c"} Feb 19 18:48:12 crc kubenswrapper[4813]: I0219 18:48:12.754853 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" podUID="d18d3e87-1b83-4a09-9e7b-f28e1e540d85" containerName="dnsmasq-dns" containerID="cri-o://8f8371a2c83a18ac8310710e7402f2978d277543c99dd8fd10ec756f91477d7c" gracePeriod=10 Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.084276 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:13 crc kubenswrapper[4813]: E0219 18:48:13.084808 4813 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 18:48:13 crc kubenswrapper[4813]: E0219 18:48:13.084822 4813 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 18:48:13 crc kubenswrapper[4813]: E0219 18:48:13.084865 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift podName:1cb5586f-0789-4095-84e2-32c8c41984c1 nodeName:}" failed. No retries permitted until 2026-02-19 18:48:14.084850553 +0000 UTC m=+1113.310291104 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift") pod "swift-storage-0" (UID: "1cb5586f-0789-4095-84e2-32c8c41984c1") : configmap "swift-ring-files" not found Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.144896 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lm976" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.264247 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f626-account-create-update-jp84f" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.269830 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.286864 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/319f74d2-fc89-40c3-98a6-43f9c3ec542e-operator-scripts\") pod \"319f74d2-fc89-40c3-98a6-43f9c3ec542e\" (UID: \"319f74d2-fc89-40c3-98a6-43f9c3ec542e\") " Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.286984 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plb82\" (UniqueName: \"kubernetes.io/projected/319f74d2-fc89-40c3-98a6-43f9c3ec542e-kube-api-access-plb82\") pod \"319f74d2-fc89-40c3-98a6-43f9c3ec542e\" (UID: \"319f74d2-fc89-40c3-98a6-43f9c3ec542e\") " Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.287506 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/319f74d2-fc89-40c3-98a6-43f9c3ec542e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "319f74d2-fc89-40c3-98a6-43f9c3ec542e" (UID: "319f74d2-fc89-40c3-98a6-43f9c3ec542e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.296617 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/319f74d2-fc89-40c3-98a6-43f9c3ec542e-kube-api-access-plb82" (OuterVolumeSpecName: "kube-api-access-plb82") pod "319f74d2-fc89-40c3-98a6-43f9c3ec542e" (UID: "319f74d2-fc89-40c3-98a6-43f9c3ec542e"). InnerVolumeSpecName "kube-api-access-plb82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.388310 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-config\") pod \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\" (UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.388457 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-dns-svc\") pod \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\" (UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.388524 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tkvv\" (UniqueName: \"kubernetes.io/projected/fd93a436-429b-4ec3-8c52-f5a50c1d1ae2-kube-api-access-5tkvv\") pod \"fd93a436-429b-4ec3-8c52-f5a50c1d1ae2\" (UID: \"fd93a436-429b-4ec3-8c52-f5a50c1d1ae2\") " Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.388594 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-ovsdbserver-nb\") pod \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\" (UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.388619 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z98d7\" (UniqueName: \"kubernetes.io/projected/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-kube-api-access-z98d7\") pod \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\" (UID: \"d18d3e87-1b83-4a09-9e7b-f28e1e540d85\") " Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.388689 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd93a436-429b-4ec3-8c52-f5a50c1d1ae2-operator-scripts\") pod \"fd93a436-429b-4ec3-8c52-f5a50c1d1ae2\" (UID: \"fd93a436-429b-4ec3-8c52-f5a50c1d1ae2\") " Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.389086 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plb82\" (UniqueName: \"kubernetes.io/projected/319f74d2-fc89-40c3-98a6-43f9c3ec542e-kube-api-access-plb82\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.389102 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/319f74d2-fc89-40c3-98a6-43f9c3ec542e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.389434 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd93a436-429b-4ec3-8c52-f5a50c1d1ae2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd93a436-429b-4ec3-8c52-f5a50c1d1ae2" (UID: "fd93a436-429b-4ec3-8c52-f5a50c1d1ae2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.400031 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd93a436-429b-4ec3-8c52-f5a50c1d1ae2-kube-api-access-5tkvv" (OuterVolumeSpecName: "kube-api-access-5tkvv") pod "fd93a436-429b-4ec3-8c52-f5a50c1d1ae2" (UID: "fd93a436-429b-4ec3-8c52-f5a50c1d1ae2"). InnerVolumeSpecName "kube-api-access-5tkvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.400433 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-kube-api-access-z98d7" (OuterVolumeSpecName: "kube-api-access-z98d7") pod "d18d3e87-1b83-4a09-9e7b-f28e1e540d85" (UID: "d18d3e87-1b83-4a09-9e7b-f28e1e540d85"). InnerVolumeSpecName "kube-api-access-z98d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.435595 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-config" (OuterVolumeSpecName: "config") pod "d18d3e87-1b83-4a09-9e7b-f28e1e540d85" (UID: "d18d3e87-1b83-4a09-9e7b-f28e1e540d85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.442071 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d18d3e87-1b83-4a09-9e7b-f28e1e540d85" (UID: "d18d3e87-1b83-4a09-9e7b-f28e1e540d85"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.448406 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d18d3e87-1b83-4a09-9e7b-f28e1e540d85" (UID: "d18d3e87-1b83-4a09-9e7b-f28e1e540d85"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.496108 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tkvv\" (UniqueName: \"kubernetes.io/projected/fd93a436-429b-4ec3-8c52-f5a50c1d1ae2-kube-api-access-5tkvv\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.496133 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.496143 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z98d7\" (UniqueName: \"kubernetes.io/projected/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-kube-api-access-z98d7\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.496154 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd93a436-429b-4ec3-8c52-f5a50c1d1ae2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.496164 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.496172 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/d18d3e87-1b83-4a09-9e7b-f28e1e540d85-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.776467 4813 generic.go:334] "Generic (PLEG): container finished" podID="d18d3e87-1b83-4a09-9e7b-f28e1e540d85" containerID="8f8371a2c83a18ac8310710e7402f2978d277543c99dd8fd10ec756f91477d7c" exitCode=0 Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.776527 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.776551 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" event={"ID":"d18d3e87-1b83-4a09-9e7b-f28e1e540d85","Type":"ContainerDied","Data":"8f8371a2c83a18ac8310710e7402f2978d277543c99dd8fd10ec756f91477d7c"} Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.777037 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bdd75c-fdt2d" event={"ID":"d18d3e87-1b83-4a09-9e7b-f28e1e540d85","Type":"ContainerDied","Data":"91c91832f18c31671e2dcdc348b92dfc0ae4c9689b7c9fda205a8cbd83cb2b83"} Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.777080 4813 scope.go:117] "RemoveContainer" containerID="8f8371a2c83a18ac8310710e7402f2978d277543c99dd8fd10ec756f91477d7c" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.781464 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f626-account-create-update-jp84f" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.781546 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f626-account-create-update-jp84f" event={"ID":"fd93a436-429b-4ec3-8c52-f5a50c1d1ae2","Type":"ContainerDied","Data":"e20a5da38df80a136b96a32e8a1119783945ede9ee92546f0918734f4f701606"} Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.781604 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e20a5da38df80a136b96a32e8a1119783945ede9ee92546f0918734f4f701606" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.785101 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lm976" event={"ID":"319f74d2-fc89-40c3-98a6-43f9c3ec542e","Type":"ContainerDied","Data":"2e663330a915c9c13fb8482c4d7dcee5d6d15236bcbb861f411d7ab60b2c3389"} Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.785144 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e663330a915c9c13fb8482c4d7dcee5d6d15236bcbb861f411d7ab60b2c3389" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.785148 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-lm976" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.788807 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-5dff4" event={"ID":"f8e97206-5dce-4e2a-989d-aaf8a78c053f","Type":"ContainerStarted","Data":"aba7117f14b94647770d60bc236bacc3d35d683284088b93b85bfac6151e00a2"} Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.789781 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.818083 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-fdt2d"] Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.825927 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57bdd75c-fdt2d"] Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.859859 4813 scope.go:117] "RemoveContainer" containerID="bc88b0a71be71cdc231f09749fe8a19381f586e732a9331107df182b0b457875" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.875953 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-689df5d84f-5dff4" podStartSLOduration=2.875931327 podStartE2EDuration="2.875931327s" podCreationTimestamp="2026-02-19 18:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:48:13.843728254 +0000 UTC m=+1113.069168795" watchObservedRunningTime="2026-02-19 18:48:13.875931327 +0000 UTC m=+1113.101371878" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.930091 4813 scope.go:117] "RemoveContainer" containerID="8f8371a2c83a18ac8310710e7402f2978d277543c99dd8fd10ec756f91477d7c" Feb 19 18:48:13 crc kubenswrapper[4813]: E0219 18:48:13.940414 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8f8371a2c83a18ac8310710e7402f2978d277543c99dd8fd10ec756f91477d7c\": container with ID starting with 8f8371a2c83a18ac8310710e7402f2978d277543c99dd8fd10ec756f91477d7c not found: ID does not exist" containerID="8f8371a2c83a18ac8310710e7402f2978d277543c99dd8fd10ec756f91477d7c" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.940464 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f8371a2c83a18ac8310710e7402f2978d277543c99dd8fd10ec756f91477d7c"} err="failed to get container status \"8f8371a2c83a18ac8310710e7402f2978d277543c99dd8fd10ec756f91477d7c\": rpc error: code = NotFound desc = could not find container \"8f8371a2c83a18ac8310710e7402f2978d277543c99dd8fd10ec756f91477d7c\": container with ID starting with 8f8371a2c83a18ac8310710e7402f2978d277543c99dd8fd10ec756f91477d7c not found: ID does not exist" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.940489 4813 scope.go:117] "RemoveContainer" containerID="bc88b0a71be71cdc231f09749fe8a19381f586e732a9331107df182b0b457875" Feb 19 18:48:13 crc kubenswrapper[4813]: E0219 18:48:13.945393 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc88b0a71be71cdc231f09749fe8a19381f586e732a9331107df182b0b457875\": container with ID starting with bc88b0a71be71cdc231f09749fe8a19381f586e732a9331107df182b0b457875 not found: ID does not exist" containerID="bc88b0a71be71cdc231f09749fe8a19381f586e732a9331107df182b0b457875" Feb 19 18:48:13 crc kubenswrapper[4813]: I0219 18:48:13.945433 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc88b0a71be71cdc231f09749fe8a19381f586e732a9331107df182b0b457875"} err="failed to get container status \"bc88b0a71be71cdc231f09749fe8a19381f586e732a9331107df182b0b457875\": rpc error: code = NotFound desc = could not find container \"bc88b0a71be71cdc231f09749fe8a19381f586e732a9331107df182b0b457875\": container with ID 
starting with bc88b0a71be71cdc231f09749fe8a19381f586e732a9331107df182b0b457875 not found: ID does not exist" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.109528 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:14 crc kubenswrapper[4813]: E0219 18:48:14.109685 4813 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 18:48:14 crc kubenswrapper[4813]: E0219 18:48:14.109699 4813 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 18:48:14 crc kubenswrapper[4813]: E0219 18:48:14.109739 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift podName:1cb5586f-0789-4095-84e2-32c8c41984c1 nodeName:}" failed. No retries permitted until 2026-02-19 18:48:16.109725671 +0000 UTC m=+1115.335166202 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift") pod "swift-storage-0" (UID: "1cb5586f-0789-4095-84e2-32c8c41984c1") : configmap "swift-ring-files" not found Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.231184 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d489-account-create-update-xvmmd" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.320604 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srp5c\" (UniqueName: \"kubernetes.io/projected/d3296d69-fa27-44f5-89a4-5122c3662dc5-kube-api-access-srp5c\") pod \"d3296d69-fa27-44f5-89a4-5122c3662dc5\" (UID: \"d3296d69-fa27-44f5-89a4-5122c3662dc5\") " Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.320794 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3296d69-fa27-44f5-89a4-5122c3662dc5-operator-scripts\") pod \"d3296d69-fa27-44f5-89a4-5122c3662dc5\" (UID: \"d3296d69-fa27-44f5-89a4-5122c3662dc5\") " Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.321579 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3296d69-fa27-44f5-89a4-5122c3662dc5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3296d69-fa27-44f5-89a4-5122c3662dc5" (UID: "d3296d69-fa27-44f5-89a4-5122c3662dc5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.325042 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3296d69-fa27-44f5-89a4-5122c3662dc5-kube-api-access-srp5c" (OuterVolumeSpecName: "kube-api-access-srp5c") pod "d3296d69-fa27-44f5-89a4-5122c3662dc5" (UID: "d3296d69-fa27-44f5-89a4-5122c3662dc5"). InnerVolumeSpecName "kube-api-access-srp5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.389234 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pf8pv" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.395298 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-734c-account-create-update-wj88l" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.416337 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5h4p6" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.423019 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srp5c\" (UniqueName: \"kubernetes.io/projected/d3296d69-fa27-44f5-89a4-5122c3662dc5-kube-api-access-srp5c\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.423048 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3296d69-fa27-44f5-89a4-5122c3662dc5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.524029 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43b357f4-2181-45a7-9d60-1a90f76b1c77-operator-scripts\") pod \"43b357f4-2181-45a7-9d60-1a90f76b1c77\" (UID: \"43b357f4-2181-45a7-9d60-1a90f76b1c77\") " Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.524101 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8ks8\" (UniqueName: \"kubernetes.io/projected/43b357f4-2181-45a7-9d60-1a90f76b1c77-kube-api-access-w8ks8\") pod \"43b357f4-2181-45a7-9d60-1a90f76b1c77\" (UID: \"43b357f4-2181-45a7-9d60-1a90f76b1c77\") " Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.524136 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25w74\" (UniqueName: 
\"kubernetes.io/projected/46cbac0e-c130-4aed-9c70-e4c4d7378092-kube-api-access-25w74\") pod \"46cbac0e-c130-4aed-9c70-e4c4d7378092\" (UID: \"46cbac0e-c130-4aed-9c70-e4c4d7378092\") " Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.524186 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2a6f952-c093-4d22-9452-3043f5f26472-operator-scripts\") pod \"d2a6f952-c093-4d22-9452-3043f5f26472\" (UID: \"d2a6f952-c093-4d22-9452-3043f5f26472\") " Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.524246 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjx6b\" (UniqueName: \"kubernetes.io/projected/d2a6f952-c093-4d22-9452-3043f5f26472-kube-api-access-zjx6b\") pod \"d2a6f952-c093-4d22-9452-3043f5f26472\" (UID: \"d2a6f952-c093-4d22-9452-3043f5f26472\") " Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.524288 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46cbac0e-c130-4aed-9c70-e4c4d7378092-operator-scripts\") pod \"46cbac0e-c130-4aed-9c70-e4c4d7378092\" (UID: \"46cbac0e-c130-4aed-9c70-e4c4d7378092\") " Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.524845 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43b357f4-2181-45a7-9d60-1a90f76b1c77-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "43b357f4-2181-45a7-9d60-1a90f76b1c77" (UID: "43b357f4-2181-45a7-9d60-1a90f76b1c77"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.524856 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a6f952-c093-4d22-9452-3043f5f26472-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d2a6f952-c093-4d22-9452-3043f5f26472" (UID: "d2a6f952-c093-4d22-9452-3043f5f26472"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.525227 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46cbac0e-c130-4aed-9c70-e4c4d7378092-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46cbac0e-c130-4aed-9c70-e4c4d7378092" (UID: "46cbac0e-c130-4aed-9c70-e4c4d7378092"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.525577 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46cbac0e-c130-4aed-9c70-e4c4d7378092-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.525605 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43b357f4-2181-45a7-9d60-1a90f76b1c77-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.525617 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2a6f952-c093-4d22-9452-3043f5f26472-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.527591 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a6f952-c093-4d22-9452-3043f5f26472-kube-api-access-zjx6b" 
(OuterVolumeSpecName: "kube-api-access-zjx6b") pod "d2a6f952-c093-4d22-9452-3043f5f26472" (UID: "d2a6f952-c093-4d22-9452-3043f5f26472"). InnerVolumeSpecName "kube-api-access-zjx6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.528005 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b357f4-2181-45a7-9d60-1a90f76b1c77-kube-api-access-w8ks8" (OuterVolumeSpecName: "kube-api-access-w8ks8") pod "43b357f4-2181-45a7-9d60-1a90f76b1c77" (UID: "43b357f4-2181-45a7-9d60-1a90f76b1c77"). InnerVolumeSpecName "kube-api-access-w8ks8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.528547 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46cbac0e-c130-4aed-9c70-e4c4d7378092-kube-api-access-25w74" (OuterVolumeSpecName: "kube-api-access-25w74") pod "46cbac0e-c130-4aed-9c70-e4c4d7378092" (UID: "46cbac0e-c130-4aed-9c70-e4c4d7378092"). InnerVolumeSpecName "kube-api-access-25w74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620174 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-xg95m"] Feb 19 18:48:14 crc kubenswrapper[4813]: E0219 18:48:14.620461 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18d3e87-1b83-4a09-9e7b-f28e1e540d85" containerName="init" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620473 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18d3e87-1b83-4a09-9e7b-f28e1e540d85" containerName="init" Feb 19 18:48:14 crc kubenswrapper[4813]: E0219 18:48:14.620490 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd93a436-429b-4ec3-8c52-f5a50c1d1ae2" containerName="mariadb-account-create-update" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620496 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd93a436-429b-4ec3-8c52-f5a50c1d1ae2" containerName="mariadb-account-create-update" Feb 19 18:48:14 crc kubenswrapper[4813]: E0219 18:48:14.620508 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18d3e87-1b83-4a09-9e7b-f28e1e540d85" containerName="dnsmasq-dns" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620514 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18d3e87-1b83-4a09-9e7b-f28e1e540d85" containerName="dnsmasq-dns" Feb 19 18:48:14 crc kubenswrapper[4813]: E0219 18:48:14.620525 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3296d69-fa27-44f5-89a4-5122c3662dc5" containerName="mariadb-account-create-update" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620531 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3296d69-fa27-44f5-89a4-5122c3662dc5" containerName="mariadb-account-create-update" Feb 19 18:48:14 crc kubenswrapper[4813]: E0219 18:48:14.620542 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="319f74d2-fc89-40c3-98a6-43f9c3ec542e" 
containerName="mariadb-database-create" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620548 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="319f74d2-fc89-40c3-98a6-43f9c3ec542e" containerName="mariadb-database-create" Feb 19 18:48:14 crc kubenswrapper[4813]: E0219 18:48:14.620567 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b357f4-2181-45a7-9d60-1a90f76b1c77" containerName="mariadb-account-create-update" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620572 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b357f4-2181-45a7-9d60-1a90f76b1c77" containerName="mariadb-account-create-update" Feb 19 18:48:14 crc kubenswrapper[4813]: E0219 18:48:14.620582 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46cbac0e-c130-4aed-9c70-e4c4d7378092" containerName="mariadb-database-create" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620592 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="46cbac0e-c130-4aed-9c70-e4c4d7378092" containerName="mariadb-database-create" Feb 19 18:48:14 crc kubenswrapper[4813]: E0219 18:48:14.620608 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a6f952-c093-4d22-9452-3043f5f26472" containerName="mariadb-database-create" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620616 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a6f952-c093-4d22-9452-3043f5f26472" containerName="mariadb-database-create" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620796 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="43b357f4-2181-45a7-9d60-1a90f76b1c77" containerName="mariadb-account-create-update" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620811 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd93a436-429b-4ec3-8c52-f5a50c1d1ae2" containerName="mariadb-account-create-update" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620819 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d3296d69-fa27-44f5-89a4-5122c3662dc5" containerName="mariadb-account-create-update" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620825 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="319f74d2-fc89-40c3-98a6-43f9c3ec542e" containerName="mariadb-database-create" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620838 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18d3e87-1b83-4a09-9e7b-f28e1e540d85" containerName="dnsmasq-dns" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620846 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="46cbac0e-c130-4aed-9c70-e4c4d7378092" containerName="mariadb-database-create" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.620854 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a6f952-c093-4d22-9452-3043f5f26472" containerName="mariadb-database-create" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.621340 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.624217 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d6cgg" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.627290 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8ks8\" (UniqueName: \"kubernetes.io/projected/43b357f4-2181-45a7-9d60-1a90f76b1c77-kube-api-access-w8ks8\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.627318 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25w74\" (UniqueName: \"kubernetes.io/projected/46cbac0e-c130-4aed-9c70-e4c4d7378092-kube-api-access-25w74\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.627328 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjx6b\" (UniqueName: \"kubernetes.io/projected/d2a6f952-c093-4d22-9452-3043f5f26472-kube-api-access-zjx6b\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.628548 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.634037 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xg95m"] Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.728924 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-combined-ca-bundle\") pod \"glance-db-sync-xg95m\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.729088 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzx9b\" (UniqueName: 
\"kubernetes.io/projected/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-kube-api-access-fzx9b\") pod \"glance-db-sync-xg95m\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.729117 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-config-data\") pod \"glance-db-sync-xg95m\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.729379 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-db-sync-config-data\") pod \"glance-db-sync-xg95m\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.799643 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pf8pv" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.799641 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pf8pv" event={"ID":"d2a6f952-c093-4d22-9452-3043f5f26472","Type":"ContainerDied","Data":"3ab38db17a290578730ef302962fa4927496171ec7c88e490570c9cd3ccdc72f"} Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.799770 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ab38db17a290578730ef302962fa4927496171ec7c88e490570c9cd3ccdc72f" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.801045 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d489-account-create-update-xvmmd" event={"ID":"d3296d69-fa27-44f5-89a4-5122c3662dc5","Type":"ContainerDied","Data":"2f2ef1d11c6f2940a3e15eb084283628dad75c0ce3706f34220a94a1317729ec"} Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.801114 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f2ef1d11c6f2940a3e15eb084283628dad75c0ce3706f34220a94a1317729ec" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.801206 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d489-account-create-update-xvmmd" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.807486 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-734c-account-create-update-wj88l" event={"ID":"43b357f4-2181-45a7-9d60-1a90f76b1c77","Type":"ContainerDied","Data":"7630babb31ca8f0641c43cb826a13e513b25de01427857792659b38850102578"} Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.807529 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7630babb31ca8f0641c43cb826a13e513b25de01427857792659b38850102578" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.807546 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-734c-account-create-update-wj88l" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.809601 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5h4p6" event={"ID":"46cbac0e-c130-4aed-9c70-e4c4d7378092","Type":"ContainerDied","Data":"faa0f4c307809db1bfc3b434d7ce9c353eb6317bd3aab98599b6e087d827fbb0"} Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.809636 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5h4p6" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.809660 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faa0f4c307809db1bfc3b434d7ce9c353eb6317bd3aab98599b6e087d827fbb0" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.830372 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-db-sync-config-data\") pod \"glance-db-sync-xg95m\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.830512 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-combined-ca-bundle\") pod \"glance-db-sync-xg95m\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.830554 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzx9b\" (UniqueName: \"kubernetes.io/projected/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-kube-api-access-fzx9b\") pod \"glance-db-sync-xg95m\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.830580 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-config-data\") pod \"glance-db-sync-xg95m\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.836952 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-db-sync-config-data\") pod \"glance-db-sync-xg95m\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.837046 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-config-data\") pod \"glance-db-sync-xg95m\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.837129 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-combined-ca-bundle\") pod \"glance-db-sync-xg95m\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.851068 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzx9b\" (UniqueName: \"kubernetes.io/projected/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-kube-api-access-fzx9b\") pod \"glance-db-sync-xg95m\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:14 crc kubenswrapper[4813]: I0219 18:48:14.935678 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:15 crc kubenswrapper[4813]: I0219 18:48:15.480178 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d18d3e87-1b83-4a09-9e7b-f28e1e540d85" path="/var/lib/kubelet/pods/d18d3e87-1b83-4a09-9e7b-f28e1e540d85/volumes" Feb 19 18:48:15 crc kubenswrapper[4813]: I0219 18:48:15.562231 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xg95m"] Feb 19 18:48:15 crc kubenswrapper[4813]: I0219 18:48:15.817268 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xg95m" event={"ID":"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b","Type":"ContainerStarted","Data":"411b70ffc305466eb9f98c1deb5a5153ff5c339a7d0cc65e44c9432d60395244"} Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.126474 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xqns6"] Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.127632 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xqns6" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.131845 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.140034 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xqns6"] Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.164199 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:16 crc kubenswrapper[4813]: E0219 18:48:16.164447 4813 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 18:48:16 crc kubenswrapper[4813]: E0219 18:48:16.164487 4813 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 18:48:16 crc kubenswrapper[4813]: E0219 18:48:16.164615 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift podName:1cb5586f-0789-4095-84e2-32c8c41984c1 nodeName:}" failed. No retries permitted until 2026-02-19 18:48:20.164579885 +0000 UTC m=+1119.390020476 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift") pod "swift-storage-0" (UID: "1cb5586f-0789-4095-84e2-32c8c41984c1") : configmap "swift-ring-files" not found Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.265928 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68ff8cac-a16a-453f-9b6a-2ecb977a3759-operator-scripts\") pod \"root-account-create-update-xqns6\" (UID: \"68ff8cac-a16a-453f-9b6a-2ecb977a3759\") " pod="openstack/root-account-create-update-xqns6" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.266052 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nhgr\" (UniqueName: \"kubernetes.io/projected/68ff8cac-a16a-453f-9b6a-2ecb977a3759-kube-api-access-9nhgr\") pod \"root-account-create-update-xqns6\" (UID: \"68ff8cac-a16a-453f-9b6a-2ecb977a3759\") " pod="openstack/root-account-create-update-xqns6" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.363917 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-crwbn"] Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.364845 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.367831 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68ff8cac-a16a-453f-9b6a-2ecb977a3759-operator-scripts\") pod \"root-account-create-update-xqns6\" (UID: \"68ff8cac-a16a-453f-9b6a-2ecb977a3759\") " pod="openstack/root-account-create-update-xqns6" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.368042 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nhgr\" (UniqueName: \"kubernetes.io/projected/68ff8cac-a16a-453f-9b6a-2ecb977a3759-kube-api-access-9nhgr\") pod \"root-account-create-update-xqns6\" (UID: \"68ff8cac-a16a-453f-9b6a-2ecb977a3759\") " pod="openstack/root-account-create-update-xqns6" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.368631 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.369156 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68ff8cac-a16a-453f-9b6a-2ecb977a3759-operator-scripts\") pod \"root-account-create-update-xqns6\" (UID: \"68ff8cac-a16a-453f-9b6a-2ecb977a3759\") " pod="openstack/root-account-create-update-xqns6" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.371825 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.381647 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.386050 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-crwbn"] Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 
18:48:16.392711 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nhgr\" (UniqueName: \"kubernetes.io/projected/68ff8cac-a16a-453f-9b6a-2ecb977a3759-kube-api-access-9nhgr\") pod \"root-account-create-update-xqns6\" (UID: \"68ff8cac-a16a-453f-9b6a-2ecb977a3759\") " pod="openstack/root-account-create-update-xqns6" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.469710 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0aa6e15-2818-4ba2-9cfc-001324222fa7-ring-data-devices\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.469872 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj9jp\" (UniqueName: \"kubernetes.io/projected/c0aa6e15-2818-4ba2-9cfc-001324222fa7-kube-api-access-qj9jp\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.469917 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-dispersionconf\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.470013 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0aa6e15-2818-4ba2-9cfc-001324222fa7-etc-swift\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc 
kubenswrapper[4813]: I0219 18:48:16.470068 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0aa6e15-2818-4ba2-9cfc-001324222fa7-scripts\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.470100 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-swiftconf\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.470228 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-combined-ca-bundle\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.471147 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xqns6" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.572485 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0aa6e15-2818-4ba2-9cfc-001324222fa7-etc-swift\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.572659 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0aa6e15-2818-4ba2-9cfc-001324222fa7-scripts\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.572708 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-swiftconf\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.572778 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-combined-ca-bundle\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.572825 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0aa6e15-2818-4ba2-9cfc-001324222fa7-ring-data-devices\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc 
kubenswrapper[4813]: I0219 18:48:16.572991 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj9jp\" (UniqueName: \"kubernetes.io/projected/c0aa6e15-2818-4ba2-9cfc-001324222fa7-kube-api-access-qj9jp\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.573050 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-dispersionconf\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.573833 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0aa6e15-2818-4ba2-9cfc-001324222fa7-etc-swift\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.575787 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0aa6e15-2818-4ba2-9cfc-001324222fa7-scripts\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.577083 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0aa6e15-2818-4ba2-9cfc-001324222fa7-ring-data-devices\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.581418 4813 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-combined-ca-bundle\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.581781 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-swiftconf\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.585370 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-dispersionconf\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.606494 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj9jp\" (UniqueName: \"kubernetes.io/projected/c0aa6e15-2818-4ba2-9cfc-001324222fa7-kube-api-access-qj9jp\") pod \"swift-ring-rebalance-crwbn\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.692288 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:16 crc kubenswrapper[4813]: I0219 18:48:16.949315 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xqns6"] Feb 19 18:48:17 crc kubenswrapper[4813]: I0219 18:48:17.106882 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-crwbn"] Feb 19 18:48:17 crc kubenswrapper[4813]: W0219 18:48:17.114806 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0aa6e15_2818_4ba2_9cfc_001324222fa7.slice/crio-0b4688f696214b6544b2bbfaa03ab8efc472dfd7165cc6f546fbc443d0b76302 WatchSource:0}: Error finding container 0b4688f696214b6544b2bbfaa03ab8efc472dfd7165cc6f546fbc443d0b76302: Status 404 returned error can't find the container with id 0b4688f696214b6544b2bbfaa03ab8efc472dfd7165cc6f546fbc443d0b76302 Feb 19 18:48:17 crc kubenswrapper[4813]: E0219 18:48:17.414453 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68ff8cac_a16a_453f_9b6a_2ecb977a3759.slice/crio-conmon-f64515cccc17d57ae69306575f2fa6522cd0ff0492c66f7f7a0de9e97bbbf6de.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68ff8cac_a16a_453f_9b6a_2ecb977a3759.slice/crio-f64515cccc17d57ae69306575f2fa6522cd0ff0492c66f7f7a0de9e97bbbf6de.scope\": RecentStats: unable to find data in memory cache]" Feb 19 18:48:17 crc kubenswrapper[4813]: I0219 18:48:17.836227 4813 generic.go:334] "Generic (PLEG): container finished" podID="68ff8cac-a16a-453f-9b6a-2ecb977a3759" containerID="f64515cccc17d57ae69306575f2fa6522cd0ff0492c66f7f7a0de9e97bbbf6de" exitCode=0 Feb 19 18:48:17 crc kubenswrapper[4813]: I0219 18:48:17.836353 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-xqns6" event={"ID":"68ff8cac-a16a-453f-9b6a-2ecb977a3759","Type":"ContainerDied","Data":"f64515cccc17d57ae69306575f2fa6522cd0ff0492c66f7f7a0de9e97bbbf6de"} Feb 19 18:48:17 crc kubenswrapper[4813]: I0219 18:48:17.836438 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xqns6" event={"ID":"68ff8cac-a16a-453f-9b6a-2ecb977a3759","Type":"ContainerStarted","Data":"4719c824beacc1e242fb1750775dccd42e8ef271dd32becc297b85dcdf0adca8"} Feb 19 18:48:17 crc kubenswrapper[4813]: I0219 18:48:17.837630 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-crwbn" event={"ID":"c0aa6e15-2818-4ba2-9cfc-001324222fa7","Type":"ContainerStarted","Data":"0b4688f696214b6544b2bbfaa03ab8efc472dfd7165cc6f546fbc443d0b76302"} Feb 19 18:48:18 crc kubenswrapper[4813]: I0219 18:48:18.680126 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:19 crc kubenswrapper[4813]: I0219 18:48:19.482572 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xqns6" Feb 19 18:48:19 crc kubenswrapper[4813]: I0219 18:48:19.536996 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nhgr\" (UniqueName: \"kubernetes.io/projected/68ff8cac-a16a-453f-9b6a-2ecb977a3759-kube-api-access-9nhgr\") pod \"68ff8cac-a16a-453f-9b6a-2ecb977a3759\" (UID: \"68ff8cac-a16a-453f-9b6a-2ecb977a3759\") " Feb 19 18:48:19 crc kubenswrapper[4813]: I0219 18:48:19.537139 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68ff8cac-a16a-453f-9b6a-2ecb977a3759-operator-scripts\") pod \"68ff8cac-a16a-453f-9b6a-2ecb977a3759\" (UID: \"68ff8cac-a16a-453f-9b6a-2ecb977a3759\") " Feb 19 18:48:19 crc kubenswrapper[4813]: I0219 18:48:19.537906 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68ff8cac-a16a-453f-9b6a-2ecb977a3759-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68ff8cac-a16a-453f-9b6a-2ecb977a3759" (UID: "68ff8cac-a16a-453f-9b6a-2ecb977a3759"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:19 crc kubenswrapper[4813]: I0219 18:48:19.551234 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68ff8cac-a16a-453f-9b6a-2ecb977a3759-kube-api-access-9nhgr" (OuterVolumeSpecName: "kube-api-access-9nhgr") pod "68ff8cac-a16a-453f-9b6a-2ecb977a3759" (UID: "68ff8cac-a16a-453f-9b6a-2ecb977a3759"). InnerVolumeSpecName "kube-api-access-9nhgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:19 crc kubenswrapper[4813]: I0219 18:48:19.639025 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68ff8cac-a16a-453f-9b6a-2ecb977a3759-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:19 crc kubenswrapper[4813]: I0219 18:48:19.639095 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nhgr\" (UniqueName: \"kubernetes.io/projected/68ff8cac-a16a-453f-9b6a-2ecb977a3759-kube-api-access-9nhgr\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:19 crc kubenswrapper[4813]: I0219 18:48:19.853355 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xqns6" event={"ID":"68ff8cac-a16a-453f-9b6a-2ecb977a3759","Type":"ContainerDied","Data":"4719c824beacc1e242fb1750775dccd42e8ef271dd32becc297b85dcdf0adca8"} Feb 19 18:48:19 crc kubenswrapper[4813]: I0219 18:48:19.853421 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4719c824beacc1e242fb1750775dccd42e8ef271dd32becc297b85dcdf0adca8" Feb 19 18:48:19 crc kubenswrapper[4813]: I0219 18:48:19.853472 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xqns6" Feb 19 18:48:20 crc kubenswrapper[4813]: I0219 18:48:20.249321 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:20 crc kubenswrapper[4813]: E0219 18:48:20.249500 4813 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 18:48:20 crc kubenswrapper[4813]: E0219 18:48:20.249537 4813 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 18:48:20 crc kubenswrapper[4813]: E0219 18:48:20.249611 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift podName:1cb5586f-0789-4095-84e2-32c8c41984c1 nodeName:}" failed. No retries permitted until 2026-02-19 18:48:28.249587443 +0000 UTC m=+1127.475028004 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift") pod "swift-storage-0" (UID: "1cb5586f-0789-4095-84e2-32c8c41984c1") : configmap "swift-ring-files" not found Feb 19 18:48:21 crc kubenswrapper[4813]: I0219 18:48:21.663134 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:21 crc kubenswrapper[4813]: I0219 18:48:21.717547 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-j8v4k"] Feb 19 18:48:21 crc kubenswrapper[4813]: I0219 18:48:21.718051 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" podUID="389d3fe8-138e-47b3-83c6-5c7e7c5551b7" containerName="dnsmasq-dns" containerID="cri-o://fec741ddb8fc2066033960ff2fc4b154f16f3a7328f78d6ad0919d4c295cfa9c" gracePeriod=10 Feb 19 18:48:21 crc kubenswrapper[4813]: I0219 18:48:21.869719 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-crwbn" event={"ID":"c0aa6e15-2818-4ba2-9cfc-001324222fa7","Type":"ContainerStarted","Data":"ae6e6fea86970a782ff89e6a5e94f9fdc4f0fe974554d09798ed586863363d74"} Feb 19 18:48:21 crc kubenswrapper[4813]: I0219 18:48:21.890350 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-crwbn" podStartSLOduration=2.131644444 podStartE2EDuration="5.890336147s" podCreationTimestamp="2026-02-19 18:48:16 +0000 UTC" firstStartedPulling="2026-02-19 18:48:17.118264165 +0000 UTC m=+1116.343704706" lastFinishedPulling="2026-02-19 18:48:20.876955868 +0000 UTC m=+1120.102396409" observedRunningTime="2026-02-19 18:48:21.886889321 +0000 UTC m=+1121.112329862" watchObservedRunningTime="2026-02-19 18:48:21.890336147 +0000 UTC m=+1121.115776688" Feb 19 18:48:22 crc kubenswrapper[4813]: I0219 18:48:22.520826 4813 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/root-account-create-update-xqns6"] Feb 19 18:48:22 crc kubenswrapper[4813]: I0219 18:48:22.532105 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xqns6"] Feb 19 18:48:22 crc kubenswrapper[4813]: I0219 18:48:22.880533 4813 generic.go:334] "Generic (PLEG): container finished" podID="389d3fe8-138e-47b3-83c6-5c7e7c5551b7" containerID="fec741ddb8fc2066033960ff2fc4b154f16f3a7328f78d6ad0919d4c295cfa9c" exitCode=0 Feb 19 18:48:22 crc kubenswrapper[4813]: I0219 18:48:22.880620 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" event={"ID":"389d3fe8-138e-47b3-83c6-5c7e7c5551b7","Type":"ContainerDied","Data":"fec741ddb8fc2066033960ff2fc4b154f16f3a7328f78d6ad0919d4c295cfa9c"} Feb 19 18:48:23 crc kubenswrapper[4813]: I0219 18:48:23.483207 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68ff8cac-a16a-453f-9b6a-2ecb977a3759" path="/var/lib/kubelet/pods/68ff8cac-a16a-453f-9b6a-2ecb977a3759/volumes" Feb 19 18:48:27 crc kubenswrapper[4813]: I0219 18:48:27.520161 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-s59x2"] Feb 19 18:48:27 crc kubenswrapper[4813]: E0219 18:48:27.521013 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68ff8cac-a16a-453f-9b6a-2ecb977a3759" containerName="mariadb-account-create-update" Feb 19 18:48:27 crc kubenswrapper[4813]: I0219 18:48:27.521031 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="68ff8cac-a16a-453f-9b6a-2ecb977a3759" containerName="mariadb-account-create-update" Feb 19 18:48:27 crc kubenswrapper[4813]: I0219 18:48:27.521408 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="68ff8cac-a16a-453f-9b6a-2ecb977a3759" containerName="mariadb-account-create-update" Feb 19 18:48:27 crc kubenswrapper[4813]: I0219 18:48:27.522084 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s59x2" Feb 19 18:48:27 crc kubenswrapper[4813]: I0219 18:48:27.524365 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 18:48:27 crc kubenswrapper[4813]: I0219 18:48:27.528917 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s59x2"] Feb 19 18:48:27 crc kubenswrapper[4813]: I0219 18:48:27.615314 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xn5q\" (UniqueName: \"kubernetes.io/projected/15bef837-ce66-495d-8b85-f072341093ee-kube-api-access-4xn5q\") pod \"root-account-create-update-s59x2\" (UID: \"15bef837-ce66-495d-8b85-f072341093ee\") " pod="openstack/root-account-create-update-s59x2" Feb 19 18:48:27 crc kubenswrapper[4813]: I0219 18:48:27.615643 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15bef837-ce66-495d-8b85-f072341093ee-operator-scripts\") pod \"root-account-create-update-s59x2\" (UID: \"15bef837-ce66-495d-8b85-f072341093ee\") " pod="openstack/root-account-create-update-s59x2" Feb 19 18:48:27 crc kubenswrapper[4813]: I0219 18:48:27.716838 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15bef837-ce66-495d-8b85-f072341093ee-operator-scripts\") pod \"root-account-create-update-s59x2\" (UID: \"15bef837-ce66-495d-8b85-f072341093ee\") " pod="openstack/root-account-create-update-s59x2" Feb 19 18:48:27 crc kubenswrapper[4813]: I0219 18:48:27.717077 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xn5q\" (UniqueName: \"kubernetes.io/projected/15bef837-ce66-495d-8b85-f072341093ee-kube-api-access-4xn5q\") pod \"root-account-create-update-s59x2\" (UID: 
\"15bef837-ce66-495d-8b85-f072341093ee\") " pod="openstack/root-account-create-update-s59x2" Feb 19 18:48:27 crc kubenswrapper[4813]: I0219 18:48:27.718277 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15bef837-ce66-495d-8b85-f072341093ee-operator-scripts\") pod \"root-account-create-update-s59x2\" (UID: \"15bef837-ce66-495d-8b85-f072341093ee\") " pod="openstack/root-account-create-update-s59x2" Feb 19 18:48:27 crc kubenswrapper[4813]: I0219 18:48:27.737669 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xn5q\" (UniqueName: \"kubernetes.io/projected/15bef837-ce66-495d-8b85-f072341093ee-kube-api-access-4xn5q\") pod \"root-account-create-update-s59x2\" (UID: \"15bef837-ce66-495d-8b85-f072341093ee\") " pod="openstack/root-account-create-update-s59x2" Feb 19 18:48:27 crc kubenswrapper[4813]: I0219 18:48:27.846543 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s59x2" Feb 19 18:48:28 crc kubenswrapper[4813]: I0219 18:48:28.327835 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:28 crc kubenswrapper[4813]: E0219 18:48:28.328031 4813 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 18:48:28 crc kubenswrapper[4813]: E0219 18:48:28.328053 4813 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 18:48:28 crc kubenswrapper[4813]: E0219 18:48:28.328103 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift podName:1cb5586f-0789-4095-84e2-32c8c41984c1 nodeName:}" failed. No retries permitted until 2026-02-19 18:48:44.328088031 +0000 UTC m=+1143.553528572 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift") pod "swift-storage-0" (UID: "1cb5586f-0789-4095-84e2-32c8c41984c1") : configmap "swift-ring-files" not found Feb 19 18:48:28 crc kubenswrapper[4813]: I0219 18:48:28.679150 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" podUID="389d3fe8-138e-47b3-83c6-5c7e7c5551b7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Feb 19 18:48:28 crc kubenswrapper[4813]: I0219 18:48:28.796742 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.769503 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.897056 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-config\") pod \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.897199 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-ovsdbserver-nb\") pod \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.897241 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zxds\" (UniqueName: \"kubernetes.io/projected/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-kube-api-access-8zxds\") pod \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " Feb 19 18:48:31 crc 
kubenswrapper[4813]: I0219 18:48:31.897298 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-ovsdbserver-sb\") pod \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.897329 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-dns-svc\") pod \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\" (UID: \"389d3fe8-138e-47b3-83c6-5c7e7c5551b7\") " Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.904399 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-kube-api-access-8zxds" (OuterVolumeSpecName: "kube-api-access-8zxds") pod "389d3fe8-138e-47b3-83c6-5c7e7c5551b7" (UID: "389d3fe8-138e-47b3-83c6-5c7e7c5551b7"). InnerVolumeSpecName "kube-api-access-8zxds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.939664 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-config" (OuterVolumeSpecName: "config") pod "389d3fe8-138e-47b3-83c6-5c7e7c5551b7" (UID: "389d3fe8-138e-47b3-83c6-5c7e7c5551b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.945202 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "389d3fe8-138e-47b3-83c6-5c7e7c5551b7" (UID: "389d3fe8-138e-47b3-83c6-5c7e7c5551b7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.948546 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "389d3fe8-138e-47b3-83c6-5c7e7c5551b7" (UID: "389d3fe8-138e-47b3-83c6-5c7e7c5551b7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.957939 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "389d3fe8-138e-47b3-83c6-5c7e7c5551b7" (UID: "389d3fe8-138e-47b3-83c6-5c7e7c5551b7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.983595 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" event={"ID":"389d3fe8-138e-47b3-83c6-5c7e7c5551b7","Type":"ContainerDied","Data":"c773dd4ebb98302bd923d19c0d00234e36721ee8586ff365e7e2ae16f6108515"} Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.983647 4813 scope.go:117] "RemoveContainer" containerID="fec741ddb8fc2066033960ff2fc4b154f16f3a7328f78d6ad0919d4c295cfa9c" Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.983721 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.991343 4813 generic.go:334] "Generic (PLEG): container finished" podID="c0aa6e15-2818-4ba2-9cfc-001324222fa7" containerID="ae6e6fea86970a782ff89e6a5e94f9fdc4f0fe974554d09798ed586863363d74" exitCode=0 Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.991400 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-crwbn" event={"ID":"c0aa6e15-2818-4ba2-9cfc-001324222fa7","Type":"ContainerDied","Data":"ae6e6fea86970a782ff89e6a5e94f9fdc4f0fe974554d09798ed586863363d74"} Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.998836 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.998859 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.998870 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zxds\" (UniqueName: \"kubernetes.io/projected/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-kube-api-access-8zxds\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.998887 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:31 crc kubenswrapper[4813]: I0219 18:48:31.998904 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/389d3fe8-138e-47b3-83c6-5c7e7c5551b7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:32 crc 
kubenswrapper[4813]: I0219 18:48:32.004681 4813 scope.go:117] "RemoveContainer" containerID="ea5651b62096a33f83198625eb36338067ea156edf6f99abd43cec029af3078a" Feb 19 18:48:32 crc kubenswrapper[4813]: I0219 18:48:32.031458 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-j8v4k"] Feb 19 18:48:32 crc kubenswrapper[4813]: I0219 18:48:32.038566 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75b7bcc64f-j8v4k"] Feb 19 18:48:32 crc kubenswrapper[4813]: I0219 18:48:32.102352 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s59x2"] Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.002551 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xg95m" event={"ID":"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b","Type":"ContainerStarted","Data":"26108ad72f63c574475732f4021cf708dc6d1c9417fc66ddbb7ad7422cd9d5a4"} Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.006444 4813 generic.go:334] "Generic (PLEG): container finished" podID="15bef837-ce66-495d-8b85-f072341093ee" containerID="cc423f64a2ed3dfd9ab063cd117e4c9a5e3aa2de2355fbb8c6cf7bb650c929da" exitCode=0 Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.006528 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s59x2" event={"ID":"15bef837-ce66-495d-8b85-f072341093ee","Type":"ContainerDied","Data":"cc423f64a2ed3dfd9ab063cd117e4c9a5e3aa2de2355fbb8c6cf7bb650c929da"} Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.006570 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s59x2" event={"ID":"15bef837-ce66-495d-8b85-f072341093ee","Type":"ContainerStarted","Data":"86345315adcfc9c67532edf7748bf81a531d6608b2bcec2563aae47954d82435"} Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.020462 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-db-sync-xg95m" podStartSLOduration=2.909702619 podStartE2EDuration="19.020447481s" podCreationTimestamp="2026-02-19 18:48:14 +0000 UTC" firstStartedPulling="2026-02-19 18:48:15.570814028 +0000 UTC m=+1114.796254569" lastFinishedPulling="2026-02-19 18:48:31.68155888 +0000 UTC m=+1130.906999431" observedRunningTime="2026-02-19 18:48:33.019925234 +0000 UTC m=+1132.245365785" watchObservedRunningTime="2026-02-19 18:48:33.020447481 +0000 UTC m=+1132.245888022" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.374292 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.430518 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0aa6e15-2818-4ba2-9cfc-001324222fa7-scripts\") pod \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.430611 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-swiftconf\") pod \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.430641 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-combined-ca-bundle\") pod \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.430685 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-dispersionconf\") pod 
\"c0aa6e15-2818-4ba2-9cfc-001324222fa7\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.430713 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0aa6e15-2818-4ba2-9cfc-001324222fa7-etc-swift\") pod \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.430772 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0aa6e15-2818-4ba2-9cfc-001324222fa7-ring-data-devices\") pod \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.430831 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj9jp\" (UniqueName: \"kubernetes.io/projected/c0aa6e15-2818-4ba2-9cfc-001324222fa7-kube-api-access-qj9jp\") pod \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\" (UID: \"c0aa6e15-2818-4ba2-9cfc-001324222fa7\") " Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.431682 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0aa6e15-2818-4ba2-9cfc-001324222fa7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c0aa6e15-2818-4ba2-9cfc-001324222fa7" (UID: "c0aa6e15-2818-4ba2-9cfc-001324222fa7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.431943 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0aa6e15-2818-4ba2-9cfc-001324222fa7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c0aa6e15-2818-4ba2-9cfc-001324222fa7" (UID: "c0aa6e15-2818-4ba2-9cfc-001324222fa7"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.444209 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0aa6e15-2818-4ba2-9cfc-001324222fa7-kube-api-access-qj9jp" (OuterVolumeSpecName: "kube-api-access-qj9jp") pod "c0aa6e15-2818-4ba2-9cfc-001324222fa7" (UID: "c0aa6e15-2818-4ba2-9cfc-001324222fa7"). InnerVolumeSpecName "kube-api-access-qj9jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.451584 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c0aa6e15-2818-4ba2-9cfc-001324222fa7" (UID: "c0aa6e15-2818-4ba2-9cfc-001324222fa7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.460519 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0aa6e15-2818-4ba2-9cfc-001324222fa7-scripts" (OuterVolumeSpecName: "scripts") pod "c0aa6e15-2818-4ba2-9cfc-001324222fa7" (UID: "c0aa6e15-2818-4ba2-9cfc-001324222fa7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.462175 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0aa6e15-2818-4ba2-9cfc-001324222fa7" (UID: "c0aa6e15-2818-4ba2-9cfc-001324222fa7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.466724 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c0aa6e15-2818-4ba2-9cfc-001324222fa7" (UID: "c0aa6e15-2818-4ba2-9cfc-001324222fa7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.492423 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="389d3fe8-138e-47b3-83c6-5c7e7c5551b7" path="/var/lib/kubelet/pods/389d3fe8-138e-47b3-83c6-5c7e7c5551b7/volumes" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.532594 4813 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.532635 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.532648 4813 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0aa6e15-2818-4ba2-9cfc-001324222fa7-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.532656 4813 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0aa6e15-2818-4ba2-9cfc-001324222fa7-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.532666 4813 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/c0aa6e15-2818-4ba2-9cfc-001324222fa7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.532675 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj9jp\" (UniqueName: \"kubernetes.io/projected/c0aa6e15-2818-4ba2-9cfc-001324222fa7-kube-api-access-qj9jp\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.532684 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0aa6e15-2818-4ba2-9cfc-001324222fa7-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:33 crc kubenswrapper[4813]: I0219 18:48:33.680542 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75b7bcc64f-j8v4k" podUID="389d3fe8-138e-47b3-83c6-5c7e7c5551b7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Feb 19 18:48:34 crc kubenswrapper[4813]: I0219 18:48:34.015641 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-crwbn" event={"ID":"c0aa6e15-2818-4ba2-9cfc-001324222fa7","Type":"ContainerDied","Data":"0b4688f696214b6544b2bbfaa03ab8efc472dfd7165cc6f546fbc443d0b76302"} Feb 19 18:48:34 crc kubenswrapper[4813]: I0219 18:48:34.015695 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b4688f696214b6544b2bbfaa03ab8efc472dfd7165cc6f546fbc443d0b76302" Feb 19 18:48:34 crc kubenswrapper[4813]: I0219 18:48:34.015802 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-crwbn" Feb 19 18:48:34 crc kubenswrapper[4813]: I0219 18:48:34.341818 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s59x2" Feb 19 18:48:34 crc kubenswrapper[4813]: I0219 18:48:34.446174 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15bef837-ce66-495d-8b85-f072341093ee-operator-scripts\") pod \"15bef837-ce66-495d-8b85-f072341093ee\" (UID: \"15bef837-ce66-495d-8b85-f072341093ee\") " Feb 19 18:48:34 crc kubenswrapper[4813]: I0219 18:48:34.446255 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xn5q\" (UniqueName: \"kubernetes.io/projected/15bef837-ce66-495d-8b85-f072341093ee-kube-api-access-4xn5q\") pod \"15bef837-ce66-495d-8b85-f072341093ee\" (UID: \"15bef837-ce66-495d-8b85-f072341093ee\") " Feb 19 18:48:34 crc kubenswrapper[4813]: I0219 18:48:34.446877 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15bef837-ce66-495d-8b85-f072341093ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15bef837-ce66-495d-8b85-f072341093ee" (UID: "15bef837-ce66-495d-8b85-f072341093ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:34 crc kubenswrapper[4813]: I0219 18:48:34.454127 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15bef837-ce66-495d-8b85-f072341093ee-kube-api-access-4xn5q" (OuterVolumeSpecName: "kube-api-access-4xn5q") pod "15bef837-ce66-495d-8b85-f072341093ee" (UID: "15bef837-ce66-495d-8b85-f072341093ee"). InnerVolumeSpecName "kube-api-access-4xn5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:34 crc kubenswrapper[4813]: I0219 18:48:34.548636 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xn5q\" (UniqueName: \"kubernetes.io/projected/15bef837-ce66-495d-8b85-f072341093ee-kube-api-access-4xn5q\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:34 crc kubenswrapper[4813]: I0219 18:48:34.548672 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15bef837-ce66-495d-8b85-f072341093ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:34 crc kubenswrapper[4813]: I0219 18:48:34.972642 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-vr4rs" podUID="abaee778-ea35-4887-90c8-2834d3eef00d" containerName="ovn-controller" probeResult="failure" output=< Feb 19 18:48:34 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 18:48:34 crc kubenswrapper[4813]: > Feb 19 18:48:34 crc kubenswrapper[4813]: I0219 18:48:34.994932 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:48:34 crc kubenswrapper[4813]: I0219 18:48:34.996088 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.035800 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s59x2" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.035800 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s59x2" event={"ID":"15bef837-ce66-495d-8b85-f072341093ee","Type":"ContainerDied","Data":"86345315adcfc9c67532edf7748bf81a531d6608b2bcec2563aae47954d82435"} Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.036919 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86345315adcfc9c67532edf7748bf81a531d6608b2bcec2563aae47954d82435" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.237184 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vr4rs-config-wwxkf"] Feb 19 18:48:35 crc kubenswrapper[4813]: E0219 18:48:35.237755 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15bef837-ce66-495d-8b85-f072341093ee" containerName="mariadb-account-create-update" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.237785 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="15bef837-ce66-495d-8b85-f072341093ee" containerName="mariadb-account-create-update" Feb 19 18:48:35 crc kubenswrapper[4813]: E0219 18:48:35.237826 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0aa6e15-2818-4ba2-9cfc-001324222fa7" containerName="swift-ring-rebalance" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.237840 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0aa6e15-2818-4ba2-9cfc-001324222fa7" containerName="swift-ring-rebalance" Feb 19 18:48:35 crc kubenswrapper[4813]: E0219 18:48:35.237865 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="389d3fe8-138e-47b3-83c6-5c7e7c5551b7" containerName="dnsmasq-dns" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.237878 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="389d3fe8-138e-47b3-83c6-5c7e7c5551b7" containerName="dnsmasq-dns" Feb 
19 18:48:35 crc kubenswrapper[4813]: E0219 18:48:35.237912 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="389d3fe8-138e-47b3-83c6-5c7e7c5551b7" containerName="init" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.237924 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="389d3fe8-138e-47b3-83c6-5c7e7c5551b7" containerName="init" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.238288 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="15bef837-ce66-495d-8b85-f072341093ee" containerName="mariadb-account-create-update" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.238317 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="389d3fe8-138e-47b3-83c6-5c7e7c5551b7" containerName="dnsmasq-dns" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.238352 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0aa6e15-2818-4ba2-9cfc-001324222fa7" containerName="swift-ring-rebalance" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.239257 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.254477 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vr4rs-config-wwxkf"] Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.277197 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.360436 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khxnw\" (UniqueName: \"kubernetes.io/projected/fa671e4d-5289-41c2-8e3d-23657e5340f6-kube-api-access-khxnw\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.360525 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-log-ovn\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.360586 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fa671e4d-5289-41c2-8e3d-23657e5340f6-additional-scripts\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.360625 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-run-ovn\") pod 
\"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.360647 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa671e4d-5289-41c2-8e3d-23657e5340f6-scripts\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.360799 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-run\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.462564 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fa671e4d-5289-41c2-8e3d-23657e5340f6-additional-scripts\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.462632 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-run-ovn\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.462655 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa671e4d-5289-41c2-8e3d-23657e5340f6-scripts\") pod 
\"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.462698 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-run\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.462775 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khxnw\" (UniqueName: \"kubernetes.io/projected/fa671e4d-5289-41c2-8e3d-23657e5340f6-kube-api-access-khxnw\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.462832 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-log-ovn\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.463069 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-run-ovn\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.463089 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-run\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: 
\"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.463079 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-log-ovn\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.463513 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fa671e4d-5289-41c2-8e3d-23657e5340f6-additional-scripts\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.464607 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa671e4d-5289-41c2-8e3d-23657e5340f6-scripts\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.486976 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khxnw\" (UniqueName: \"kubernetes.io/projected/fa671e4d-5289-41c2-8e3d-23657e5340f6-kube-api-access-khxnw\") pod \"ovn-controller-vr4rs-config-wwxkf\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:35 crc kubenswrapper[4813]: I0219 18:48:35.605470 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:36 crc kubenswrapper[4813]: I0219 18:48:36.085125 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vr4rs-config-wwxkf"] Feb 19 18:48:36 crc kubenswrapper[4813]: W0219 18:48:36.088541 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa671e4d_5289_41c2_8e3d_23657e5340f6.slice/crio-34e7ceabe59d6e77a31e689a8bc0f12c9ef5f68cd7fc9b9bfdec217fb736e203 WatchSource:0}: Error finding container 34e7ceabe59d6e77a31e689a8bc0f12c9ef5f68cd7fc9b9bfdec217fb736e203: Status 404 returned error can't find the container with id 34e7ceabe59d6e77a31e689a8bc0f12c9ef5f68cd7fc9b9bfdec217fb736e203 Feb 19 18:48:37 crc kubenswrapper[4813]: I0219 18:48:37.049268 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vr4rs-config-wwxkf" event={"ID":"fa671e4d-5289-41c2-8e3d-23657e5340f6","Type":"ContainerStarted","Data":"00253c9334ee177d3261e02381af6b4c6dbd84ba0ca94543e86ca7b07ba45c12"} Feb 19 18:48:37 crc kubenswrapper[4813]: I0219 18:48:37.049858 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vr4rs-config-wwxkf" event={"ID":"fa671e4d-5289-41c2-8e3d-23657e5340f6","Type":"ContainerStarted","Data":"34e7ceabe59d6e77a31e689a8bc0f12c9ef5f68cd7fc9b9bfdec217fb736e203"} Feb 19 18:48:37 crc kubenswrapper[4813]: I0219 18:48:37.084610 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vr4rs-config-wwxkf" podStartSLOduration=2.084585579 podStartE2EDuration="2.084585579s" podCreationTimestamp="2026-02-19 18:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:48:37.071973866 +0000 UTC m=+1136.297414407" watchObservedRunningTime="2026-02-19 18:48:37.084585579 +0000 UTC m=+1136.310026130" 
Feb 19 18:48:38 crc kubenswrapper[4813]: I0219 18:48:38.060373 4813 generic.go:334] "Generic (PLEG): container finished" podID="fa671e4d-5289-41c2-8e3d-23657e5340f6" containerID="00253c9334ee177d3261e02381af6b4c6dbd84ba0ca94543e86ca7b07ba45c12" exitCode=0 Feb 19 18:48:38 crc kubenswrapper[4813]: I0219 18:48:38.060435 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vr4rs-config-wwxkf" event={"ID":"fa671e4d-5289-41c2-8e3d-23657e5340f6","Type":"ContainerDied","Data":"00253c9334ee177d3261e02381af6b4c6dbd84ba0ca94543e86ca7b07ba45c12"} Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.071523 4813 generic.go:334] "Generic (PLEG): container finished" podID="db22a584-f05a-41ba-ad23-387b4100a9e1" containerID="0b467ef85f9a437d89647927595333d8dc397b82f76409cebc5dee43012b081e" exitCode=0 Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.071638 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db22a584-f05a-41ba-ad23-387b4100a9e1","Type":"ContainerDied","Data":"0b467ef85f9a437d89647927595333d8dc397b82f76409cebc5dee43012b081e"} Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.419151 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.547734 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fa671e4d-5289-41c2-8e3d-23657e5340f6-additional-scripts\") pod \"fa671e4d-5289-41c2-8e3d-23657e5340f6\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.547802 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-run\") pod \"fa671e4d-5289-41c2-8e3d-23657e5340f6\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.547887 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khxnw\" (UniqueName: \"kubernetes.io/projected/fa671e4d-5289-41c2-8e3d-23657e5340f6-kube-api-access-khxnw\") pod \"fa671e4d-5289-41c2-8e3d-23657e5340f6\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.548012 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-log-ovn\") pod \"fa671e4d-5289-41c2-8e3d-23657e5340f6\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.548032 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-run-ovn\") pod \"fa671e4d-5289-41c2-8e3d-23657e5340f6\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.548063 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/fa671e4d-5289-41c2-8e3d-23657e5340f6-scripts\") pod \"fa671e4d-5289-41c2-8e3d-23657e5340f6\" (UID: \"fa671e4d-5289-41c2-8e3d-23657e5340f6\") " Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.549164 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "fa671e4d-5289-41c2-8e3d-23657e5340f6" (UID: "fa671e4d-5289-41c2-8e3d-23657e5340f6"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.549192 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "fa671e4d-5289-41c2-8e3d-23657e5340f6" (UID: "fa671e4d-5289-41c2-8e3d-23657e5340f6"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.549231 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-run" (OuterVolumeSpecName: "var-run") pod "fa671e4d-5289-41c2-8e3d-23657e5340f6" (UID: "fa671e4d-5289-41c2-8e3d-23657e5340f6"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.549518 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa671e4d-5289-41c2-8e3d-23657e5340f6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "fa671e4d-5289-41c2-8e3d-23657e5340f6" (UID: "fa671e4d-5289-41c2-8e3d-23657e5340f6"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.549706 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa671e4d-5289-41c2-8e3d-23657e5340f6-scripts" (OuterVolumeSpecName: "scripts") pod "fa671e4d-5289-41c2-8e3d-23657e5340f6" (UID: "fa671e4d-5289-41c2-8e3d-23657e5340f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.555134 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa671e4d-5289-41c2-8e3d-23657e5340f6-kube-api-access-khxnw" (OuterVolumeSpecName: "kube-api-access-khxnw") pod "fa671e4d-5289-41c2-8e3d-23657e5340f6" (UID: "fa671e4d-5289-41c2-8e3d-23657e5340f6"). InnerVolumeSpecName "kube-api-access-khxnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.650133 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khxnw\" (UniqueName: \"kubernetes.io/projected/fa671e4d-5289-41c2-8e3d-23657e5340f6-kube-api-access-khxnw\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.650178 4813 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.650193 4813 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.650204 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa671e4d-5289-41c2-8e3d-23657e5340f6-scripts\") on node \"crc\" DevicePath \"\"" 
Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.650214 4813 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fa671e4d-5289-41c2-8e3d-23657e5340f6-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.650225 4813 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa671e4d-5289-41c2-8e3d-23657e5340f6-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:39 crc kubenswrapper[4813]: I0219 18:48:39.985211 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-vr4rs" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.084558 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vr4rs-config-wwxkf" event={"ID":"fa671e4d-5289-41c2-8e3d-23657e5340f6","Type":"ContainerDied","Data":"34e7ceabe59d6e77a31e689a8bc0f12c9ef5f68cd7fc9b9bfdec217fb736e203"} Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.084604 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34e7ceabe59d6e77a31e689a8bc0f12c9ef5f68cd7fc9b9bfdec217fb736e203" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.084683 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vr4rs-config-wwxkf" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.090476 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db22a584-f05a-41ba-ad23-387b4100a9e1","Type":"ContainerStarted","Data":"e519f5b9e793340baf7974d5e67220195aa91581ee6dc67bcc7ab9451042be70"} Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.091280 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.196551 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.773031766 podStartE2EDuration="1m6.196529705s" podCreationTimestamp="2026-02-19 18:47:34 +0000 UTC" firstStartedPulling="2026-02-19 18:47:36.46069101 +0000 UTC m=+1075.686131551" lastFinishedPulling="2026-02-19 18:48:04.884188909 +0000 UTC m=+1104.109629490" observedRunningTime="2026-02-19 18:48:40.166330584 +0000 UTC m=+1139.391771135" watchObservedRunningTime="2026-02-19 18:48:40.196529705 +0000 UTC m=+1139.421970246" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.322787 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vr4rs-config-wwxkf"] Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.331917 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vr4rs-config-wwxkf"] Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.452410 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vr4rs-config-66k4n"] Feb 19 18:48:40 crc kubenswrapper[4813]: E0219 18:48:40.452816 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa671e4d-5289-41c2-8e3d-23657e5340f6" containerName="ovn-config" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.452838 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fa671e4d-5289-41c2-8e3d-23657e5340f6" containerName="ovn-config" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.453025 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa671e4d-5289-41c2-8e3d-23657e5340f6" containerName="ovn-config" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.453713 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.456372 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.470850 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vr4rs-config-66k4n"] Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.563657 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-run\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.563708 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-log-ovn\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.563894 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f8dbc5dd-416a-44c6-a026-34544b15011f-additional-scripts\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " 
pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.564069 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dx64\" (UniqueName: \"kubernetes.io/projected/f8dbc5dd-416a-44c6-a026-34544b15011f-kube-api-access-6dx64\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.564201 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-run-ovn\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.564319 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8dbc5dd-416a-44c6-a026-34544b15011f-scripts\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.665781 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-run-ovn\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.665849 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8dbc5dd-416a-44c6-a026-34544b15011f-scripts\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: 
\"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.665905 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-run\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.665931 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-log-ovn\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.665998 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f8dbc5dd-416a-44c6-a026-34544b15011f-additional-scripts\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.666058 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dx64\" (UniqueName: \"kubernetes.io/projected/f8dbc5dd-416a-44c6-a026-34544b15011f-kube-api-access-6dx64\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.666344 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-run\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: 
\"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.666344 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-run-ovn\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.666855 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f8dbc5dd-416a-44c6-a026-34544b15011f-additional-scripts\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.666911 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-log-ovn\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.667927 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8dbc5dd-416a-44c6-a026-34544b15011f-scripts\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.685149 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dx64\" (UniqueName: \"kubernetes.io/projected/f8dbc5dd-416a-44c6-a026-34544b15011f-kube-api-access-6dx64\") pod \"ovn-controller-vr4rs-config-66k4n\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " 
pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:40 crc kubenswrapper[4813]: I0219 18:48:40.767598 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:41 crc kubenswrapper[4813]: I0219 18:48:41.098668 4813 generic.go:334] "Generic (PLEG): container finished" podID="33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b" containerID="26108ad72f63c574475732f4021cf708dc6d1c9417fc66ddbb7ad7422cd9d5a4" exitCode=0 Feb 19 18:48:41 crc kubenswrapper[4813]: I0219 18:48:41.098772 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xg95m" event={"ID":"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b","Type":"ContainerDied","Data":"26108ad72f63c574475732f4021cf708dc6d1c9417fc66ddbb7ad7422cd9d5a4"} Feb 19 18:48:41 crc kubenswrapper[4813]: I0219 18:48:41.221556 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vr4rs-config-66k4n"] Feb 19 18:48:41 crc kubenswrapper[4813]: W0219 18:48:41.223759 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8dbc5dd_416a_44c6_a026_34544b15011f.slice/crio-90697b9132236a3c7bf6d380365bce0706e6a8c8b355eafda6eb48424a21d587 WatchSource:0}: Error finding container 90697b9132236a3c7bf6d380365bce0706e6a8c8b355eafda6eb48424a21d587: Status 404 returned error can't find the container with id 90697b9132236a3c7bf6d380365bce0706e6a8c8b355eafda6eb48424a21d587 Feb 19 18:48:41 crc kubenswrapper[4813]: I0219 18:48:41.484213 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa671e4d-5289-41c2-8e3d-23657e5340f6" path="/var/lib/kubelet/pods/fa671e4d-5289-41c2-8e3d-23657e5340f6/volumes" Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.108041 4813 generic.go:334] "Generic (PLEG): container finished" podID="f8dbc5dd-416a-44c6-a026-34544b15011f" 
containerID="5a26d45d99f9e598d77ad7b692c685032d9b5071227135e375f8e4485081b1a2" exitCode=0 Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.108117 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vr4rs-config-66k4n" event={"ID":"f8dbc5dd-416a-44c6-a026-34544b15011f","Type":"ContainerDied","Data":"5a26d45d99f9e598d77ad7b692c685032d9b5071227135e375f8e4485081b1a2"} Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.108142 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vr4rs-config-66k4n" event={"ID":"f8dbc5dd-416a-44c6-a026-34544b15011f","Type":"ContainerStarted","Data":"90697b9132236a3c7bf6d380365bce0706e6a8c8b355eafda6eb48424a21d587"} Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.110583 4813 generic.go:334] "Generic (PLEG): container finished" podID="c69ff3db-8806-451a-9df0-c6289c327579" containerID="b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168" exitCode=0 Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.110638 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c69ff3db-8806-451a-9df0-c6289c327579","Type":"ContainerDied","Data":"b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168"} Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.534141 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.610577 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-db-sync-config-data\") pod \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.610791 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzx9b\" (UniqueName: \"kubernetes.io/projected/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-kube-api-access-fzx9b\") pod \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.610851 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-combined-ca-bundle\") pod \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.610886 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-config-data\") pod \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\" (UID: \"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b\") " Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.629374 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b" (UID: "33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.630552 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-kube-api-access-fzx9b" (OuterVolumeSpecName: "kube-api-access-fzx9b") pod "33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b" (UID: "33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b"). InnerVolumeSpecName "kube-api-access-fzx9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.632996 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b" (UID: "33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.651298 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-config-data" (OuterVolumeSpecName: "config-data") pod "33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b" (UID: "33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.712648 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzx9b\" (UniqueName: \"kubernetes.io/projected/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-kube-api-access-fzx9b\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.712689 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.712702 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:42 crc kubenswrapper[4813]: I0219 18:48:42.712719 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.120779 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c69ff3db-8806-451a-9df0-c6289c327579","Type":"ContainerStarted","Data":"902b709f26604393c782b1c130285fdc4bd898bf2ec34607dbfa328616279266"} Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.121235 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.122939 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xg95m" event={"ID":"33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b","Type":"ContainerDied","Data":"411b70ffc305466eb9f98c1deb5a5153ff5c339a7d0cc65e44c9432d60395244"} Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.122967 4813 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="411b70ffc305466eb9f98c1deb5a5153ff5c339a7d0cc65e44c9432d60395244" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.122983 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xg95m" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.172575 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371966.682219 podStartE2EDuration="1m10.172558116s" podCreationTimestamp="2026-02-19 18:47:33 +0000 UTC" firstStartedPulling="2026-02-19 18:47:35.754606723 +0000 UTC m=+1074.980047264" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:48:43.167772367 +0000 UTC m=+1142.393212938" watchObservedRunningTime="2026-02-19 18:48:43.172558116 +0000 UTC m=+1142.397998657" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.597695 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.667607 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8dbc5dd-416a-44c6-a026-34544b15011f-scripts\") pod \"f8dbc5dd-416a-44c6-a026-34544b15011f\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.667685 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-log-ovn\") pod \"f8dbc5dd-416a-44c6-a026-34544b15011f\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.667718 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-run-ovn\") pod \"f8dbc5dd-416a-44c6-a026-34544b15011f\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.667776 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f8dbc5dd-416a-44c6-a026-34544b15011f-additional-scripts\") pod \"f8dbc5dd-416a-44c6-a026-34544b15011f\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.667878 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-run\") pod \"f8dbc5dd-416a-44c6-a026-34544b15011f\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.667907 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dx64\" (UniqueName: 
\"kubernetes.io/projected/f8dbc5dd-416a-44c6-a026-34544b15011f-kube-api-access-6dx64\") pod \"f8dbc5dd-416a-44c6-a026-34544b15011f\" (UID: \"f8dbc5dd-416a-44c6-a026-34544b15011f\") " Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.671130 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f8dbc5dd-416a-44c6-a026-34544b15011f" (UID: "f8dbc5dd-416a-44c6-a026-34544b15011f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.671395 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f8dbc5dd-416a-44c6-a026-34544b15011f" (UID: "f8dbc5dd-416a-44c6-a026-34544b15011f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.671430 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-run" (OuterVolumeSpecName: "var-run") pod "f8dbc5dd-416a-44c6-a026-34544b15011f" (UID: "f8dbc5dd-416a-44c6-a026-34544b15011f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.672124 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8dbc5dd-416a-44c6-a026-34544b15011f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f8dbc5dd-416a-44c6-a026-34544b15011f" (UID: "f8dbc5dd-416a-44c6-a026-34544b15011f"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.672414 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8dbc5dd-416a-44c6-a026-34544b15011f-scripts" (OuterVolumeSpecName: "scripts") pod "f8dbc5dd-416a-44c6-a026-34544b15011f" (UID: "f8dbc5dd-416a-44c6-a026-34544b15011f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.693392 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8dbc5dd-416a-44c6-a026-34544b15011f-kube-api-access-6dx64" (OuterVolumeSpecName: "kube-api-access-6dx64") pod "f8dbc5dd-416a-44c6-a026-34544b15011f" (UID: "f8dbc5dd-416a-44c6-a026-34544b15011f"). InnerVolumeSpecName "kube-api-access-6dx64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.732303 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-fmhgx"] Feb 19 18:48:43 crc kubenswrapper[4813]: E0219 18:48:43.732920 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8dbc5dd-416a-44c6-a026-34544b15011f" containerName="ovn-config" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.732937 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8dbc5dd-416a-44c6-a026-34544b15011f" containerName="ovn-config" Feb 19 18:48:43 crc kubenswrapper[4813]: E0219 18:48:43.732963 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b" containerName="glance-db-sync" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.732970 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b" containerName="glance-db-sync" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.733180 4813 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b" containerName="glance-db-sync" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.733195 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8dbc5dd-416a-44c6-a026-34544b15011f" containerName="ovn-config" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.734032 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.740693 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-fmhgx"] Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.769915 4813 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.769944 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dx64\" (UniqueName: \"kubernetes.io/projected/f8dbc5dd-416a-44c6-a026-34544b15011f-kube-api-access-6dx64\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.769956 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8dbc5dd-416a-44c6-a026-34544b15011f-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.769965 4813 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.769989 4813 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8dbc5dd-416a-44c6-a026-34544b15011f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 
18:48:43.769998 4813 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f8dbc5dd-416a-44c6-a026-34544b15011f-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.870762 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-config\") pod \"dnsmasq-dns-74cc88677c-fmhgx\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.870984 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-ovsdbserver-nb\") pod \"dnsmasq-dns-74cc88677c-fmhgx\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.871069 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-ovsdbserver-sb\") pod \"dnsmasq-dns-74cc88677c-fmhgx\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.871148 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w57qx\" (UniqueName: \"kubernetes.io/projected/cdd3b676-02d4-41a4-9077-a61bd273f09a-kube-api-access-w57qx\") pod \"dnsmasq-dns-74cc88677c-fmhgx\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.871261 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-dns-svc\") pod \"dnsmasq-dns-74cc88677c-fmhgx\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.972999 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-ovsdbserver-nb\") pod \"dnsmasq-dns-74cc88677c-fmhgx\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.973286 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-ovsdbserver-sb\") pod \"dnsmasq-dns-74cc88677c-fmhgx\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.973374 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w57qx\" (UniqueName: \"kubernetes.io/projected/cdd3b676-02d4-41a4-9077-a61bd273f09a-kube-api-access-w57qx\") pod \"dnsmasq-dns-74cc88677c-fmhgx\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.973472 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-dns-svc\") pod \"dnsmasq-dns-74cc88677c-fmhgx\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.973579 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-config\") pod \"dnsmasq-dns-74cc88677c-fmhgx\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.974432 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-ovsdbserver-sb\") pod \"dnsmasq-dns-74cc88677c-fmhgx\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.974475 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-dns-svc\") pod \"dnsmasq-dns-74cc88677c-fmhgx\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.974554 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-config\") pod \"dnsmasq-dns-74cc88677c-fmhgx\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.975058 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-ovsdbserver-nb\") pod \"dnsmasq-dns-74cc88677c-fmhgx\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:43 crc kubenswrapper[4813]: I0219 18:48:43.998903 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w57qx\" (UniqueName: \"kubernetes.io/projected/cdd3b676-02d4-41a4-9077-a61bd273f09a-kube-api-access-w57qx\") pod 
\"dnsmasq-dns-74cc88677c-fmhgx\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:44 crc kubenswrapper[4813]: I0219 18:48:44.051118 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:44 crc kubenswrapper[4813]: I0219 18:48:44.132104 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vr4rs-config-66k4n" Feb 19 18:48:44 crc kubenswrapper[4813]: I0219 18:48:44.132097 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vr4rs-config-66k4n" event={"ID":"f8dbc5dd-416a-44c6-a026-34544b15011f","Type":"ContainerDied","Data":"90697b9132236a3c7bf6d380365bce0706e6a8c8b355eafda6eb48424a21d587"} Feb 19 18:48:44 crc kubenswrapper[4813]: I0219 18:48:44.132702 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90697b9132236a3c7bf6d380365bce0706e6a8c8b355eafda6eb48424a21d587" Feb 19 18:48:44 crc kubenswrapper[4813]: I0219 18:48:44.380856 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:44 crc kubenswrapper[4813]: I0219 18:48:44.391020 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift\") pod \"swift-storage-0\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " pod="openstack/swift-storage-0" Feb 19 18:48:44 crc kubenswrapper[4813]: I0219 18:48:44.495843 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 19 18:48:44 crc kubenswrapper[4813]: I0219 18:48:44.638293 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-fmhgx"] Feb 19 18:48:44 crc kubenswrapper[4813]: I0219 18:48:44.723222 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vr4rs-config-66k4n"] Feb 19 18:48:44 crc kubenswrapper[4813]: I0219 18:48:44.741613 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vr4rs-config-66k4n"] Feb 19 18:48:45 crc kubenswrapper[4813]: I0219 18:48:45.115519 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 18:48:45 crc kubenswrapper[4813]: W0219 18:48:45.119927 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cb5586f_0789_4095_84e2_32c8c41984c1.slice/crio-2e074525ec469851fd47bd99cac462da007fe02071208461f5dd1b693d3bec3d WatchSource:0}: Error finding container 2e074525ec469851fd47bd99cac462da007fe02071208461f5dd1b693d3bec3d: Status 404 returned error can't find the container with id 2e074525ec469851fd47bd99cac462da007fe02071208461f5dd1b693d3bec3d Feb 19 18:48:45 crc kubenswrapper[4813]: I0219 18:48:45.138570 4813 generic.go:334] "Generic (PLEG): container finished" podID="cdd3b676-02d4-41a4-9077-a61bd273f09a" containerID="35937ae84e35ae8d4be7a92af84be075bbc5f6113cd7ecd86ce46fac1734609a" exitCode=0 Feb 19 18:48:45 crc kubenswrapper[4813]: I0219 18:48:45.138625 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" event={"ID":"cdd3b676-02d4-41a4-9077-a61bd273f09a","Type":"ContainerDied","Data":"35937ae84e35ae8d4be7a92af84be075bbc5f6113cd7ecd86ce46fac1734609a"} Feb 19 18:48:45 crc kubenswrapper[4813]: I0219 18:48:45.138648 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" 
event={"ID":"cdd3b676-02d4-41a4-9077-a61bd273f09a","Type":"ContainerStarted","Data":"242563cc76845f45d4f2666b6ca8d3997165c687fd2b5e2f40ddd0ec8218896b"} Feb 19 18:48:45 crc kubenswrapper[4813]: I0219 18:48:45.142318 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"2e074525ec469851fd47bd99cac462da007fe02071208461f5dd1b693d3bec3d"} Feb 19 18:48:45 crc kubenswrapper[4813]: I0219 18:48:45.484387 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8dbc5dd-416a-44c6-a026-34544b15011f" path="/var/lib/kubelet/pods/f8dbc5dd-416a-44c6-a026-34544b15011f/volumes" Feb 19 18:48:46 crc kubenswrapper[4813]: I0219 18:48:46.155144 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" event={"ID":"cdd3b676-02d4-41a4-9077-a61bd273f09a","Type":"ContainerStarted","Data":"c13b5d5563830f3fd2e991b9ee4b32fb0fc2bf40bb76201f1e810377d822337c"} Feb 19 18:48:46 crc kubenswrapper[4813]: I0219 18:48:46.155479 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:46 crc kubenswrapper[4813]: I0219 18:48:46.180216 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" podStartSLOduration=3.180193602 podStartE2EDuration="3.180193602s" podCreationTimestamp="2026-02-19 18:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:48:46.176447195 +0000 UTC m=+1145.401887746" watchObservedRunningTime="2026-02-19 18:48:46.180193602 +0000 UTC m=+1145.405634143" Feb 19 18:48:47 crc kubenswrapper[4813]: I0219 18:48:47.165447 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"c810d20c3f622957076f0b197785575897ec5b3aab666c3929ba959f136bbac7"} Feb 19 18:48:47 crc kubenswrapper[4813]: I0219 18:48:47.166066 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"173ce911f070f154fad4553f907b72434be2bbf4d9cc33dc8c7d8f6176342729"} Feb 19 18:48:47 crc kubenswrapper[4813]: I0219 18:48:47.166084 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"733548779f14379f65bf0cb205274b2eac4a5ab44281cf9b72a9a0a27ccb488b"} Feb 19 18:48:47 crc kubenswrapper[4813]: I0219 18:48:47.166096 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"bbf92d00811dd2fb35c3895f4c863c12eb1772bf8d221d1a27c08caefbd5bf0f"} Feb 19 18:48:48 crc kubenswrapper[4813]: E0219 18:48:48.140519 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69ff3db_8806_451a_9df0_c6289c327579.slice/crio-b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69ff3db_8806_451a_9df0_c6289c327579.slice/crio-conmon-b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168.scope\": RecentStats: unable to find data in memory cache]" Feb 19 18:48:49 crc kubenswrapper[4813]: I0219 18:48:49.189346 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"5e213ff63ffa13d3db8cd64214ffc25c184324e7bcb422854cb7d5544e995e3a"} Feb 19 18:48:49 crc kubenswrapper[4813]: I0219 18:48:49.191188 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"3f73192612ae69b540b4da233b53fc0e0246e8770b1168eaa09e2818d0475b0b"} Feb 19 18:48:49 crc kubenswrapper[4813]: I0219 18:48:49.191312 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"8a3df88ea660009faf21ceee32e4bf9966f8ba5d8d06da52cc8eed2e8281585b"} Feb 19 18:48:49 crc kubenswrapper[4813]: I0219 18:48:49.191417 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"8e7a00f1985307d87ed43318441bfb1dd483cd2506f6ccb527a5a1f992665d68"} Feb 19 18:48:51 crc kubenswrapper[4813]: I0219 18:48:51.213637 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"ecdd61b7c76fea90d5030cd83e27eede68a2613e82c87d74bf1b52eae40ede4c"} Feb 19 18:48:51 crc kubenswrapper[4813]: I0219 18:48:51.214104 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"b247b4f4fb5c13c744d6518cac9721377f2279093c96dcef4c1fb84f402a96fe"} Feb 19 18:48:51 crc kubenswrapper[4813]: I0219 18:48:51.214114 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"fe2ab5db7a8282b411ff6c2df1477d67ccefc71efdf7f49af3550ca28e6ca30f"} Feb 19 18:48:54 crc 
kubenswrapper[4813]: I0219 18:48:54.053146 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:48:54 crc kubenswrapper[4813]: I0219 18:48:54.135072 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-5dff4"] Feb 19 18:48:54 crc kubenswrapper[4813]: I0219 18:48:54.135347 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-689df5d84f-5dff4" podUID="f8e97206-5dce-4e2a-989d-aaf8a78c053f" containerName="dnsmasq-dns" containerID="cri-o://aba7117f14b94647770d60bc236bacc3d35d683284088b93b85bfac6151e00a2" gracePeriod=10 Feb 19 18:48:54 crc kubenswrapper[4813]: I0219 18:48:54.252749 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"63ed63fc4060e5e77bda2fcedba7a9038be456080560f0972c1162295667a995"} Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.156663 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.269915 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"c1d32779936dc64abfb1211d92a86919ed65d162961f758373172e5d8896b226"} Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.270001 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"3463fd0ce80861bf450484b04bd0d12e36c7e89f7201fbf18492f7f3c1961d7e"} Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.272413 4813 generic.go:334] "Generic (PLEG): container finished" podID="f8e97206-5dce-4e2a-989d-aaf8a78c053f" containerID="aba7117f14b94647770d60bc236bacc3d35d683284088b93b85bfac6151e00a2" exitCode=0 Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.272466 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-5dff4" event={"ID":"f8e97206-5dce-4e2a-989d-aaf8a78c053f","Type":"ContainerDied","Data":"aba7117f14b94647770d60bc236bacc3d35d683284088b93b85bfac6151e00a2"} Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.272498 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-689df5d84f-5dff4" event={"ID":"f8e97206-5dce-4e2a-989d-aaf8a78c053f","Type":"ContainerDied","Data":"66327d62f30c9b706733ee8fc42751d124a483e326b720e32b133904814b1af2"} Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.272517 4813 scope.go:117] "RemoveContainer" containerID="aba7117f14b94647770d60bc236bacc3d35d683284088b93b85bfac6151e00a2" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.272466 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-689df5d84f-5dff4" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.302200 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.338045 4813 scope.go:117] "RemoveContainer" containerID="93a4eaddfdee1c1f9f1c0d5aa36e876e2f2aa0c39c763936e56af0c00822e728" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.347184 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-ovsdbserver-sb\") pod \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.347267 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbwhg\" (UniqueName: \"kubernetes.io/projected/f8e97206-5dce-4e2a-989d-aaf8a78c053f-kube-api-access-xbwhg\") pod \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.347357 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-ovsdbserver-nb\") pod \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.347438 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-dns-svc\") pod \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.347475 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-config\") pod \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\" (UID: \"f8e97206-5dce-4e2a-989d-aaf8a78c053f\") " Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.355146 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e97206-5dce-4e2a-989d-aaf8a78c053f-kube-api-access-xbwhg" (OuterVolumeSpecName: "kube-api-access-xbwhg") pod "f8e97206-5dce-4e2a-989d-aaf8a78c053f" (UID: "f8e97206-5dce-4e2a-989d-aaf8a78c053f"). InnerVolumeSpecName "kube-api-access-xbwhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.449801 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbwhg\" (UniqueName: \"kubernetes.io/projected/f8e97206-5dce-4e2a-989d-aaf8a78c053f-kube-api-access-xbwhg\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.499201 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-config" (OuterVolumeSpecName: "config") pod "f8e97206-5dce-4e2a-989d-aaf8a78c053f" (UID: "f8e97206-5dce-4e2a-989d-aaf8a78c053f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.506210 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f8e97206-5dce-4e2a-989d-aaf8a78c053f" (UID: "f8e97206-5dce-4e2a-989d-aaf8a78c053f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.514544 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f8e97206-5dce-4e2a-989d-aaf8a78c053f" (UID: "f8e97206-5dce-4e2a-989d-aaf8a78c053f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.515717 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f8e97206-5dce-4e2a-989d-aaf8a78c053f" (UID: "f8e97206-5dce-4e2a-989d-aaf8a78c053f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.562374 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.562413 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.562425 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.562435 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8e97206-5dce-4e2a-989d-aaf8a78c053f-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:55 
crc kubenswrapper[4813]: I0219 18:48:55.583210 4813 scope.go:117] "RemoveContainer" containerID="aba7117f14b94647770d60bc236bacc3d35d683284088b93b85bfac6151e00a2" Feb 19 18:48:55 crc kubenswrapper[4813]: E0219 18:48:55.584026 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aba7117f14b94647770d60bc236bacc3d35d683284088b93b85bfac6151e00a2\": container with ID starting with aba7117f14b94647770d60bc236bacc3d35d683284088b93b85bfac6151e00a2 not found: ID does not exist" containerID="aba7117f14b94647770d60bc236bacc3d35d683284088b93b85bfac6151e00a2" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.584060 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aba7117f14b94647770d60bc236bacc3d35d683284088b93b85bfac6151e00a2"} err="failed to get container status \"aba7117f14b94647770d60bc236bacc3d35d683284088b93b85bfac6151e00a2\": rpc error: code = NotFound desc = could not find container \"aba7117f14b94647770d60bc236bacc3d35d683284088b93b85bfac6151e00a2\": container with ID starting with aba7117f14b94647770d60bc236bacc3d35d683284088b93b85bfac6151e00a2 not found: ID does not exist" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.584088 4813 scope.go:117] "RemoveContainer" containerID="93a4eaddfdee1c1f9f1c0d5aa36e876e2f2aa0c39c763936e56af0c00822e728" Feb 19 18:48:55 crc kubenswrapper[4813]: E0219 18:48:55.587320 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93a4eaddfdee1c1f9f1c0d5aa36e876e2f2aa0c39c763936e56af0c00822e728\": container with ID starting with 93a4eaddfdee1c1f9f1c0d5aa36e876e2f2aa0c39c763936e56af0c00822e728 not found: ID does not exist" containerID="93a4eaddfdee1c1f9f1c0d5aa36e876e2f2aa0c39c763936e56af0c00822e728" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.587362 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"93a4eaddfdee1c1f9f1c0d5aa36e876e2f2aa0c39c763936e56af0c00822e728"} err="failed to get container status \"93a4eaddfdee1c1f9f1c0d5aa36e876e2f2aa0c39c763936e56af0c00822e728\": rpc error: code = NotFound desc = could not find container \"93a4eaddfdee1c1f9f1c0d5aa36e876e2f2aa0c39c763936e56af0c00822e728\": container with ID starting with 93a4eaddfdee1c1f9f1c0d5aa36e876e2f2aa0c39c763936e56af0c00822e728 not found: ID does not exist" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.614800 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-5dff4"] Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.629114 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-689df5d84f-5dff4"] Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.709385 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-tmnlr"] Feb 19 18:48:55 crc kubenswrapper[4813]: E0219 18:48:55.709694 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e97206-5dce-4e2a-989d-aaf8a78c053f" containerName="init" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.709710 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e97206-5dce-4e2a-989d-aaf8a78c053f" containerName="init" Feb 19 18:48:55 crc kubenswrapper[4813]: E0219 18:48:55.709726 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e97206-5dce-4e2a-989d-aaf8a78c053f" containerName="dnsmasq-dns" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.709733 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e97206-5dce-4e2a-989d-aaf8a78c053f" containerName="dnsmasq-dns" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.709866 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e97206-5dce-4e2a-989d-aaf8a78c053f" containerName="dnsmasq-dns" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.710441 4813 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-tmnlr" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.718367 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-tmnlr"] Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.724047 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-570b-account-create-update-kslzx"] Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.725020 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-570b-account-create-update-kslzx" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.729367 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.753615 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-570b-account-create-update-kslzx"] Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.866291 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f5d86d9-0531-43ec-ad26-29f99daf42cb-operator-scripts\") pod \"cinder-db-create-tmnlr\" (UID: \"7f5d86d9-0531-43ec-ad26-29f99daf42cb\") " pod="openstack/cinder-db-create-tmnlr" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.866361 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfm6h\" (UniqueName: \"kubernetes.io/projected/6899daa4-6104-4900-ab52-6ffaeff57788-kube-api-access-pfm6h\") pod \"cinder-570b-account-create-update-kslzx\" (UID: \"6899daa4-6104-4900-ab52-6ffaeff57788\") " pod="openstack/cinder-570b-account-create-update-kslzx" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.866388 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6899daa4-6104-4900-ab52-6ffaeff57788-operator-scripts\") pod \"cinder-570b-account-create-update-kslzx\" (UID: \"6899daa4-6104-4900-ab52-6ffaeff57788\") " pod="openstack/cinder-570b-account-create-update-kslzx" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.866721 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hplnx\" (UniqueName: \"kubernetes.io/projected/7f5d86d9-0531-43ec-ad26-29f99daf42cb-kube-api-access-hplnx\") pod \"cinder-db-create-tmnlr\" (UID: \"7f5d86d9-0531-43ec-ad26-29f99daf42cb\") " pod="openstack/cinder-db-create-tmnlr" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.930123 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.934286 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-wjwfp"] Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.935557 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-wjwfp" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.947504 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2083-account-create-update-w69p8"] Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.948570 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2083-account-create-update-w69p8" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.950438 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.953607 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-wjwfp"] Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.968610 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hplnx\" (UniqueName: \"kubernetes.io/projected/7f5d86d9-0531-43ec-ad26-29f99daf42cb-kube-api-access-hplnx\") pod \"cinder-db-create-tmnlr\" (UID: \"7f5d86d9-0531-43ec-ad26-29f99daf42cb\") " pod="openstack/cinder-db-create-tmnlr" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.968686 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f5d86d9-0531-43ec-ad26-29f99daf42cb-operator-scripts\") pod \"cinder-db-create-tmnlr\" (UID: \"7f5d86d9-0531-43ec-ad26-29f99daf42cb\") " pod="openstack/cinder-db-create-tmnlr" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.968731 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfm6h\" (UniqueName: \"kubernetes.io/projected/6899daa4-6104-4900-ab52-6ffaeff57788-kube-api-access-pfm6h\") pod \"cinder-570b-account-create-update-kslzx\" (UID: \"6899daa4-6104-4900-ab52-6ffaeff57788\") " pod="openstack/cinder-570b-account-create-update-kslzx" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.968754 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6899daa4-6104-4900-ab52-6ffaeff57788-operator-scripts\") pod \"cinder-570b-account-create-update-kslzx\" (UID: \"6899daa4-6104-4900-ab52-6ffaeff57788\") " 
pod="openstack/cinder-570b-account-create-update-kslzx" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.969554 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6899daa4-6104-4900-ab52-6ffaeff57788-operator-scripts\") pod \"cinder-570b-account-create-update-kslzx\" (UID: \"6899daa4-6104-4900-ab52-6ffaeff57788\") " pod="openstack/cinder-570b-account-create-update-kslzx" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.969572 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f5d86d9-0531-43ec-ad26-29f99daf42cb-operator-scripts\") pod \"cinder-db-create-tmnlr\" (UID: \"7f5d86d9-0531-43ec-ad26-29f99daf42cb\") " pod="openstack/cinder-db-create-tmnlr" Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.970805 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2083-account-create-update-w69p8"] Feb 19 18:48:55 crc kubenswrapper[4813]: I0219 18:48:55.986338 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfm6h\" (UniqueName: \"kubernetes.io/projected/6899daa4-6104-4900-ab52-6ffaeff57788-kube-api-access-pfm6h\") pod \"cinder-570b-account-create-update-kslzx\" (UID: \"6899daa4-6104-4900-ab52-6ffaeff57788\") " pod="openstack/cinder-570b-account-create-update-kslzx" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.012707 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hplnx\" (UniqueName: \"kubernetes.io/projected/7f5d86d9-0531-43ec-ad26-29f99daf42cb-kube-api-access-hplnx\") pod \"cinder-db-create-tmnlr\" (UID: \"7f5d86d9-0531-43ec-ad26-29f99daf42cb\") " pod="openstack/cinder-db-create-tmnlr" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.024457 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-tmnlr" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.031339 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mlk69"] Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.041678 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-570b-account-create-update-kslzx" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.051937 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mlk69" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.069675 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95494006-9962-4c6c-b3f6-0637d97734a5-operator-scripts\") pod \"barbican-2083-account-create-update-w69p8\" (UID: \"95494006-9962-4c6c-b3f6-0637d97734a5\") " pod="openstack/barbican-2083-account-create-update-w69p8" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.069725 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cc92073-5619-4686-b083-0a824d82934f-operator-scripts\") pod \"barbican-db-create-wjwfp\" (UID: \"2cc92073-5619-4686-b083-0a824d82934f\") " pod="openstack/barbican-db-create-wjwfp" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.069923 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzghr\" (UniqueName: \"kubernetes.io/projected/2cc92073-5619-4686-b083-0a824d82934f-kube-api-access-jzghr\") pod \"barbican-db-create-wjwfp\" (UID: \"2cc92073-5619-4686-b083-0a824d82934f\") " pod="openstack/barbican-db-create-wjwfp" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.070020 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-pp5bn\" (UniqueName: \"kubernetes.io/projected/95494006-9962-4c6c-b3f6-0637d97734a5-kube-api-access-pp5bn\") pod \"barbican-2083-account-create-update-w69p8\" (UID: \"95494006-9962-4c6c-b3f6-0637d97734a5\") " pod="openstack/barbican-2083-account-create-update-w69p8" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.081414 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mlk69"] Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.172816 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab-operator-scripts\") pod \"neutron-db-create-mlk69\" (UID: \"b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab\") " pod="openstack/neutron-db-create-mlk69" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.172894 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95494006-9962-4c6c-b3f6-0637d97734a5-operator-scripts\") pod \"barbican-2083-account-create-update-w69p8\" (UID: \"95494006-9962-4c6c-b3f6-0637d97734a5\") " pod="openstack/barbican-2083-account-create-update-w69p8" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.172917 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cc92073-5619-4686-b083-0a824d82934f-operator-scripts\") pod \"barbican-db-create-wjwfp\" (UID: \"2cc92073-5619-4686-b083-0a824d82934f\") " pod="openstack/barbican-db-create-wjwfp" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.172984 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzghr\" (UniqueName: \"kubernetes.io/projected/2cc92073-5619-4686-b083-0a824d82934f-kube-api-access-jzghr\") pod \"barbican-db-create-wjwfp\" (UID: 
\"2cc92073-5619-4686-b083-0a824d82934f\") " pod="openstack/barbican-db-create-wjwfp" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.173016 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp5bn\" (UniqueName: \"kubernetes.io/projected/95494006-9962-4c6c-b3f6-0637d97734a5-kube-api-access-pp5bn\") pod \"barbican-2083-account-create-update-w69p8\" (UID: \"95494006-9962-4c6c-b3f6-0637d97734a5\") " pod="openstack/barbican-2083-account-create-update-w69p8" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.173042 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wphrf\" (UniqueName: \"kubernetes.io/projected/b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab-kube-api-access-wphrf\") pod \"neutron-db-create-mlk69\" (UID: \"b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab\") " pod="openstack/neutron-db-create-mlk69" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.174110 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95494006-9962-4c6c-b3f6-0637d97734a5-operator-scripts\") pod \"barbican-2083-account-create-update-w69p8\" (UID: \"95494006-9962-4c6c-b3f6-0637d97734a5\") " pod="openstack/barbican-2083-account-create-update-w69p8" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.174420 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cc92073-5619-4686-b083-0a824d82934f-operator-scripts\") pod \"barbican-db-create-wjwfp\" (UID: \"2cc92073-5619-4686-b083-0a824d82934f\") " pod="openstack/barbican-db-create-wjwfp" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.206194 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzghr\" (UniqueName: \"kubernetes.io/projected/2cc92073-5619-4686-b083-0a824d82934f-kube-api-access-jzghr\") pod 
\"barbican-db-create-wjwfp\" (UID: \"2cc92073-5619-4686-b083-0a824d82934f\") " pod="openstack/barbican-db-create-wjwfp" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.210129 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp5bn\" (UniqueName: \"kubernetes.io/projected/95494006-9962-4c6c-b3f6-0637d97734a5-kube-api-access-pp5bn\") pod \"barbican-2083-account-create-update-w69p8\" (UID: \"95494006-9962-4c6c-b3f6-0637d97734a5\") " pod="openstack/barbican-2083-account-create-update-w69p8" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.243525 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cde0-account-create-update-pd9m9"] Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.255070 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cde0-account-create-update-pd9m9" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.259463 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.261098 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-wjwfp" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.272983 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-lnxwr"] Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.273927 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-lnxwr" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.274521 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wphrf\" (UniqueName: \"kubernetes.io/projected/b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab-kube-api-access-wphrf\") pod \"neutron-db-create-mlk69\" (UID: \"b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab\") " pod="openstack/neutron-db-create-mlk69" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.274598 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab-operator-scripts\") pod \"neutron-db-create-mlk69\" (UID: \"b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab\") " pod="openstack/neutron-db-create-mlk69" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.278331 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4lsm2" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.278621 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.278724 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.281042 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.283462 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab-operator-scripts\") pod \"neutron-db-create-mlk69\" (UID: \"b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab\") " pod="openstack/neutron-db-create-mlk69" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.299701 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-cde0-account-create-update-pd9m9"] Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.304919 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wphrf\" (UniqueName: \"kubernetes.io/projected/b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab-kube-api-access-wphrf\") pod \"neutron-db-create-mlk69\" (UID: \"b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab\") " pod="openstack/neutron-db-create-mlk69" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.316714 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2083-account-create-update-w69p8" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.361309 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-lnxwr"] Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.394983 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mlk69" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.395671 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0028d082-4f3c-4810-ba82-311c776dc554-combined-ca-bundle\") pod \"keystone-db-sync-lnxwr\" (UID: \"0028d082-4f3c-4810-ba82-311c776dc554\") " pod="openstack/keystone-db-sync-lnxwr" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.395769 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af0adc50-6632-4b6f-a01f-56314538b07b-operator-scripts\") pod \"neutron-cde0-account-create-update-pd9m9\" (UID: \"af0adc50-6632-4b6f-a01f-56314538b07b\") " pod="openstack/neutron-cde0-account-create-update-pd9m9" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.395847 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8f2wb\" (UniqueName: \"kubernetes.io/projected/0028d082-4f3c-4810-ba82-311c776dc554-kube-api-access-8f2wb\") pod \"keystone-db-sync-lnxwr\" (UID: \"0028d082-4f3c-4810-ba82-311c776dc554\") " pod="openstack/keystone-db-sync-lnxwr" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.395910 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjr7q\" (UniqueName: \"kubernetes.io/projected/af0adc50-6632-4b6f-a01f-56314538b07b-kube-api-access-tjr7q\") pod \"neutron-cde0-account-create-update-pd9m9\" (UID: \"af0adc50-6632-4b6f-a01f-56314538b07b\") " pod="openstack/neutron-cde0-account-create-update-pd9m9" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.396057 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0028d082-4f3c-4810-ba82-311c776dc554-config-data\") pod \"keystone-db-sync-lnxwr\" (UID: \"0028d082-4f3c-4810-ba82-311c776dc554\") " pod="openstack/keystone-db-sync-lnxwr" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.498352 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0028d082-4f3c-4810-ba82-311c776dc554-combined-ca-bundle\") pod \"keystone-db-sync-lnxwr\" (UID: \"0028d082-4f3c-4810-ba82-311c776dc554\") " pod="openstack/keystone-db-sync-lnxwr" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.498390 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af0adc50-6632-4b6f-a01f-56314538b07b-operator-scripts\") pod \"neutron-cde0-account-create-update-pd9m9\" (UID: \"af0adc50-6632-4b6f-a01f-56314538b07b\") " pod="openstack/neutron-cde0-account-create-update-pd9m9" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.498421 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8f2wb\" (UniqueName: \"kubernetes.io/projected/0028d082-4f3c-4810-ba82-311c776dc554-kube-api-access-8f2wb\") pod \"keystone-db-sync-lnxwr\" (UID: \"0028d082-4f3c-4810-ba82-311c776dc554\") " pod="openstack/keystone-db-sync-lnxwr" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.498450 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjr7q\" (UniqueName: \"kubernetes.io/projected/af0adc50-6632-4b6f-a01f-56314538b07b-kube-api-access-tjr7q\") pod \"neutron-cde0-account-create-update-pd9m9\" (UID: \"af0adc50-6632-4b6f-a01f-56314538b07b\") " pod="openstack/neutron-cde0-account-create-update-pd9m9" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.498495 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0028d082-4f3c-4810-ba82-311c776dc554-config-data\") pod \"keystone-db-sync-lnxwr\" (UID: \"0028d082-4f3c-4810-ba82-311c776dc554\") " pod="openstack/keystone-db-sync-lnxwr" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.505841 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af0adc50-6632-4b6f-a01f-56314538b07b-operator-scripts\") pod \"neutron-cde0-account-create-update-pd9m9\" (UID: \"af0adc50-6632-4b6f-a01f-56314538b07b\") " pod="openstack/neutron-cde0-account-create-update-pd9m9" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.508768 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0028d082-4f3c-4810-ba82-311c776dc554-config-data\") pod \"keystone-db-sync-lnxwr\" (UID: \"0028d082-4f3c-4810-ba82-311c776dc554\") " pod="openstack/keystone-db-sync-lnxwr" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.518559 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0028d082-4f3c-4810-ba82-311c776dc554-combined-ca-bundle\") pod \"keystone-db-sync-lnxwr\" (UID: \"0028d082-4f3c-4810-ba82-311c776dc554\") " pod="openstack/keystone-db-sync-lnxwr" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.540239 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerStarted","Data":"7fd9247f2c24fe230ceea2d4153e592d608389bd4aeb769e1c50f8d5b7ce6c1f"} Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.540562 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjr7q\" (UniqueName: \"kubernetes.io/projected/af0adc50-6632-4b6f-a01f-56314538b07b-kube-api-access-tjr7q\") pod \"neutron-cde0-account-create-update-pd9m9\" (UID: \"af0adc50-6632-4b6f-a01f-56314538b07b\") " pod="openstack/neutron-cde0-account-create-update-pd9m9" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.552546 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f2wb\" (UniqueName: \"kubernetes.io/projected/0028d082-4f3c-4810-ba82-311c776dc554-kube-api-access-8f2wb\") pod \"keystone-db-sync-lnxwr\" (UID: \"0028d082-4f3c-4810-ba82-311c776dc554\") " pod="openstack/keystone-db-sync-lnxwr" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.598348 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.396915981 podStartE2EDuration="45.598332277s" podCreationTimestamp="2026-02-19 18:48:11 +0000 UTC" firstStartedPulling="2026-02-19 18:48:45.121954035 +0000 UTC m=+1144.347394576" lastFinishedPulling="2026-02-19 18:48:50.323370311 +0000 UTC m=+1149.548810872" observedRunningTime="2026-02-19 18:48:56.595169889 +0000 UTC m=+1155.820610430" watchObservedRunningTime="2026-02-19 18:48:56.598332277 +0000 UTC m=+1155.823772818" Feb 19 18:48:56 crc kubenswrapper[4813]: 
I0219 18:48:56.767800 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-tmnlr"] Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.817328 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cde0-account-create-update-pd9m9" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.845538 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-lnxwr" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.880561 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68677f88c9-7zxq4"] Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.882730 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.887590 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.896752 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68677f88c9-7zxq4"] Feb 19 18:48:56 crc kubenswrapper[4813]: I0219 18:48:56.955496 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-570b-account-create-update-kslzx"] Feb 19 18:48:56 crc kubenswrapper[4813]: W0219 18:48:56.975767 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6899daa4_6104_4900_ab52_6ffaeff57788.slice/crio-766949606c8ae57e221c676af83abe292cb23d8a525dcf1388cdc648306e28f5 WatchSource:0}: Error finding container 766949606c8ae57e221c676af83abe292cb23d8a525dcf1388cdc648306e28f5: Status 404 returned error can't find the container with id 766949606c8ae57e221c676af83abe292cb23d8a525dcf1388cdc648306e28f5 Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.019658 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-dns-swift-storage-0\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.019774 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-ovsdbserver-nb\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.020174 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8scks\" (UniqueName: \"kubernetes.io/projected/035014cb-5042-4867-9d3c-4f0061a8e4ac-kube-api-access-8scks\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.020270 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-dns-svc\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.020300 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-ovsdbserver-sb\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: 
I0219 18:48:57.020761 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-config\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.031461 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-wjwfp"] Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.054072 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mlk69"] Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.055762 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2083-account-create-update-w69p8"] Feb 19 18:48:57 crc kubenswrapper[4813]: W0219 18:48:57.066048 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95494006_9962_4c6c_b3f6_0637d97734a5.slice/crio-f003968245798b6b25e28ea521e7cdfc4c9cba54bc737efc6b16ade34871207c WatchSource:0}: Error finding container f003968245798b6b25e28ea521e7cdfc4c9cba54bc737efc6b16ade34871207c: Status 404 returned error can't find the container with id f003968245798b6b25e28ea521e7cdfc4c9cba54bc737efc6b16ade34871207c Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.122818 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-config\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.122868 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-dns-swift-storage-0\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.122898 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-ovsdbserver-nb\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.122924 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8scks\" (UniqueName: \"kubernetes.io/projected/035014cb-5042-4867-9d3c-4f0061a8e4ac-kube-api-access-8scks\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.122946 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-dns-svc\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.122979 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-ovsdbserver-sb\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.123824 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-ovsdbserver-sb\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.124368 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-config\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.124902 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-dns-swift-storage-0\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.125424 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-ovsdbserver-nb\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.126302 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-dns-svc\") pod \"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.153226 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8scks\" (UniqueName: \"kubernetes.io/projected/035014cb-5042-4867-9d3c-4f0061a8e4ac-kube-api-access-8scks\") pod 
\"dnsmasq-dns-68677f88c9-7zxq4\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.161690 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cde0-account-create-update-pd9m9"] Feb 19 18:48:57 crc kubenswrapper[4813]: W0219 18:48:57.164061 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf0adc50_6632_4b6f_a01f_56314538b07b.slice/crio-a102ccdfd5708d37ddb917da5315c1478c511cf42539c0902bcdf963a11c8be0 WatchSource:0}: Error finding container a102ccdfd5708d37ddb917da5315c1478c511cf42539c0902bcdf963a11c8be0: Status 404 returned error can't find the container with id a102ccdfd5708d37ddb917da5315c1478c511cf42539c0902bcdf963a11c8be0 Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.213577 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.481848 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e97206-5dce-4e2a-989d-aaf8a78c053f" path="/var/lib/kubelet/pods/f8e97206-5dce-4e2a-989d-aaf8a78c053f/volumes" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.548228 4813 generic.go:334] "Generic (PLEG): container finished" podID="7f5d86d9-0531-43ec-ad26-29f99daf42cb" containerID="803b975691bded29eae06b673097142de87bf85186b3fd3df69da20ef33ddd64" exitCode=0 Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.548314 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-tmnlr" event={"ID":"7f5d86d9-0531-43ec-ad26-29f99daf42cb","Type":"ContainerDied","Data":"803b975691bded29eae06b673097142de87bf85186b3fd3df69da20ef33ddd64"} Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.548346 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-tmnlr" 
event={"ID":"7f5d86d9-0531-43ec-ad26-29f99daf42cb","Type":"ContainerStarted","Data":"afc000619609ffb42ac61f81c3f509bc78024608f1da67d6ee2d2cfd8995aba6"} Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.549604 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wjwfp" event={"ID":"2cc92073-5619-4686-b083-0a824d82934f","Type":"ContainerStarted","Data":"13192d61e26b0abb14066637f47e14757025c5626ce910fff4ccd994cdca3e0f"} Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.549630 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wjwfp" event={"ID":"2cc92073-5619-4686-b083-0a824d82934f","Type":"ContainerStarted","Data":"09105d98beb7b0a2dcf8dd88de9618f66e2dbb8cdf7212468c2e54e890cd01c5"} Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.553114 4813 generic.go:334] "Generic (PLEG): container finished" podID="b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab" containerID="2b35cf86320758f6fa476d651db026b27c757fa47f91268776bb1f628c230b52" exitCode=0 Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.553175 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mlk69" event={"ID":"b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab","Type":"ContainerDied","Data":"2b35cf86320758f6fa476d651db026b27c757fa47f91268776bb1f628c230b52"} Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.553200 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mlk69" event={"ID":"b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab","Type":"ContainerStarted","Data":"eb9527f45a1f085fa632306b8c59c9dd85b798f557dd9400a3218ba8b904ed0f"} Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.554546 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-570b-account-create-update-kslzx" event={"ID":"6899daa4-6104-4900-ab52-6ffaeff57788","Type":"ContainerStarted","Data":"9274722aace668916d0e004ffce85ea1a2fff67c03b7e8a4b70e1dc4e953339a"} Feb 19 18:48:57 crc 
kubenswrapper[4813]: I0219 18:48:57.554582 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-570b-account-create-update-kslzx" event={"ID":"6899daa4-6104-4900-ab52-6ffaeff57788","Type":"ContainerStarted","Data":"766949606c8ae57e221c676af83abe292cb23d8a525dcf1388cdc648306e28f5"} Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.555824 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2083-account-create-update-w69p8" event={"ID":"95494006-9962-4c6c-b3f6-0637d97734a5","Type":"ContainerStarted","Data":"2748c6337477604f30bec1728569342fb07d1c41d4b9ae1505ab4f5b1e436fd2"} Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.555848 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2083-account-create-update-w69p8" event={"ID":"95494006-9962-4c6c-b3f6-0637d97734a5","Type":"ContainerStarted","Data":"f003968245798b6b25e28ea521e7cdfc4c9cba54bc737efc6b16ade34871207c"} Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.559157 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cde0-account-create-update-pd9m9" event={"ID":"af0adc50-6632-4b6f-a01f-56314538b07b","Type":"ContainerStarted","Data":"f26ff4b3ab8d8a8a5e7e6f0f6f0fc46990e0f6af849d5a9c66de2e6e72499a00"} Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.559195 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cde0-account-create-update-pd9m9" event={"ID":"af0adc50-6632-4b6f-a01f-56314538b07b","Type":"ContainerStarted","Data":"a102ccdfd5708d37ddb917da5315c1478c511cf42539c0902bcdf963a11c8be0"} Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.587771 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cde0-account-create-update-pd9m9" podStartSLOduration=1.587754758 podStartE2EDuration="1.587754758s" podCreationTimestamp="2026-02-19 18:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:48:57.582108314 +0000 UTC m=+1156.807548855" watchObservedRunningTime="2026-02-19 18:48:57.587754758 +0000 UTC m=+1156.813195299" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.623151 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-wjwfp" podStartSLOduration=2.623134509 podStartE2EDuration="2.623134509s" podCreationTimestamp="2026-02-19 18:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:48:57.61830073 +0000 UTC m=+1156.843741281" watchObservedRunningTime="2026-02-19 18:48:57.623134509 +0000 UTC m=+1156.848575040" Feb 19 18:48:57 crc kubenswrapper[4813]: I0219 18:48:57.643616 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-2083-account-create-update-w69p8" podStartSLOduration=2.6435951600000003 podStartE2EDuration="2.64359516s" podCreationTimestamp="2026-02-19 18:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:48:57.640794763 +0000 UTC m=+1156.866235304" watchObservedRunningTime="2026-02-19 18:48:57.64359516 +0000 UTC m=+1156.869035701" Feb 19 18:48:58 crc kubenswrapper[4813]: I0219 18:48:58.011420 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-lnxwr"] Feb 19 18:48:58 crc kubenswrapper[4813]: W0219 18:48:58.016821 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0028d082_4f3c_4810_ba82_311c776dc554.slice/crio-869174e9053d8e87e8e33c7f426ca1f007b01a3c28a311605fe0050d337801b9 WatchSource:0}: Error finding container 869174e9053d8e87e8e33c7f426ca1f007b01a3c28a311605fe0050d337801b9: Status 404 returned error can't find the container 
with id 869174e9053d8e87e8e33c7f426ca1f007b01a3c28a311605fe0050d337801b9 Feb 19 18:48:58 crc kubenswrapper[4813]: I0219 18:48:58.095914 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68677f88c9-7zxq4"] Feb 19 18:48:58 crc kubenswrapper[4813]: E0219 18:48:58.373648 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69ff3db_8806_451a_9df0_c6289c327579.slice/crio-b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69ff3db_8806_451a_9df0_c6289c327579.slice/crio-conmon-b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168.scope\": RecentStats: unable to find data in memory cache]" Feb 19 18:48:58 crc kubenswrapper[4813]: I0219 18:48:58.571150 4813 generic.go:334] "Generic (PLEG): container finished" podID="af0adc50-6632-4b6f-a01f-56314538b07b" containerID="f26ff4b3ab8d8a8a5e7e6f0f6f0fc46990e0f6af849d5a9c66de2e6e72499a00" exitCode=0 Feb 19 18:48:58 crc kubenswrapper[4813]: I0219 18:48:58.571238 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cde0-account-create-update-pd9m9" event={"ID":"af0adc50-6632-4b6f-a01f-56314538b07b","Type":"ContainerDied","Data":"f26ff4b3ab8d8a8a5e7e6f0f6f0fc46990e0f6af849d5a9c66de2e6e72499a00"} Feb 19 18:48:58 crc kubenswrapper[4813]: I0219 18:48:58.573450 4813 generic.go:334] "Generic (PLEG): container finished" podID="2cc92073-5619-4686-b083-0a824d82934f" containerID="13192d61e26b0abb14066637f47e14757025c5626ce910fff4ccd994cdca3e0f" exitCode=0 Feb 19 18:48:58 crc kubenswrapper[4813]: I0219 18:48:58.573492 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wjwfp" 
event={"ID":"2cc92073-5619-4686-b083-0a824d82934f","Type":"ContainerDied","Data":"13192d61e26b0abb14066637f47e14757025c5626ce910fff4ccd994cdca3e0f"} Feb 19 18:48:58 crc kubenswrapper[4813]: I0219 18:48:58.575439 4813 generic.go:334] "Generic (PLEG): container finished" podID="035014cb-5042-4867-9d3c-4f0061a8e4ac" containerID="d2bd0f44e36295036138de2c4c1a46d29273cc11baa5673f60270297d992f673" exitCode=0 Feb 19 18:48:58 crc kubenswrapper[4813]: I0219 18:48:58.575486 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" event={"ID":"035014cb-5042-4867-9d3c-4f0061a8e4ac","Type":"ContainerDied","Data":"d2bd0f44e36295036138de2c4c1a46d29273cc11baa5673f60270297d992f673"} Feb 19 18:48:58 crc kubenswrapper[4813]: I0219 18:48:58.575502 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" event={"ID":"035014cb-5042-4867-9d3c-4f0061a8e4ac","Type":"ContainerStarted","Data":"c39c977fc2821c89b2103aabc7b5163fa323d7231ee51580f9f7a54dd48d1580"} Feb 19 18:48:58 crc kubenswrapper[4813]: I0219 18:48:58.577339 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lnxwr" event={"ID":"0028d082-4f3c-4810-ba82-311c776dc554","Type":"ContainerStarted","Data":"869174e9053d8e87e8e33c7f426ca1f007b01a3c28a311605fe0050d337801b9"} Feb 19 18:48:58 crc kubenswrapper[4813]: I0219 18:48:58.579195 4813 generic.go:334] "Generic (PLEG): container finished" podID="6899daa4-6104-4900-ab52-6ffaeff57788" containerID="9274722aace668916d0e004ffce85ea1a2fff67c03b7e8a4b70e1dc4e953339a" exitCode=0 Feb 19 18:48:58 crc kubenswrapper[4813]: I0219 18:48:58.579241 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-570b-account-create-update-kslzx" event={"ID":"6899daa4-6104-4900-ab52-6ffaeff57788","Type":"ContainerDied","Data":"9274722aace668916d0e004ffce85ea1a2fff67c03b7e8a4b70e1dc4e953339a"} Feb 19 18:48:58 crc kubenswrapper[4813]: I0219 18:48:58.584902 4813 
generic.go:334] "Generic (PLEG): container finished" podID="95494006-9962-4c6c-b3f6-0637d97734a5" containerID="2748c6337477604f30bec1728569342fb07d1c41d4b9ae1505ab4f5b1e436fd2" exitCode=0 Feb 19 18:48:58 crc kubenswrapper[4813]: I0219 18:48:58.585071 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2083-account-create-update-w69p8" event={"ID":"95494006-9962-4c6c-b3f6-0637d97734a5","Type":"ContainerDied","Data":"2748c6337477604f30bec1728569342fb07d1c41d4b9ae1505ab4f5b1e436fd2"} Feb 19 18:48:58 crc kubenswrapper[4813]: I0219 18:48:58.924184 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mlk69" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.059540 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wphrf\" (UniqueName: \"kubernetes.io/projected/b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab-kube-api-access-wphrf\") pod \"b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab\" (UID: \"b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab\") " Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.059679 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab-operator-scripts\") pod \"b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab\" (UID: \"b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab\") " Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.064321 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab" (UID: "b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.073275 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab-kube-api-access-wphrf" (OuterVolumeSpecName: "kube-api-access-wphrf") pod "b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab" (UID: "b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab"). InnerVolumeSpecName "kube-api-access-wphrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.162509 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wphrf\" (UniqueName: \"kubernetes.io/projected/b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab-kube-api-access-wphrf\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.162571 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.178369 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-570b-account-create-update-kslzx" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.191891 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-tmnlr" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.265030 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6899daa4-6104-4900-ab52-6ffaeff57788-operator-scripts\") pod \"6899daa4-6104-4900-ab52-6ffaeff57788\" (UID: \"6899daa4-6104-4900-ab52-6ffaeff57788\") " Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.265190 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hplnx\" (UniqueName: \"kubernetes.io/projected/7f5d86d9-0531-43ec-ad26-29f99daf42cb-kube-api-access-hplnx\") pod \"7f5d86d9-0531-43ec-ad26-29f99daf42cb\" (UID: \"7f5d86d9-0531-43ec-ad26-29f99daf42cb\") " Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.265326 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfm6h\" (UniqueName: \"kubernetes.io/projected/6899daa4-6104-4900-ab52-6ffaeff57788-kube-api-access-pfm6h\") pod \"6899daa4-6104-4900-ab52-6ffaeff57788\" (UID: \"6899daa4-6104-4900-ab52-6ffaeff57788\") " Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.265406 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f5d86d9-0531-43ec-ad26-29f99daf42cb-operator-scripts\") pod \"7f5d86d9-0531-43ec-ad26-29f99daf42cb\" (UID: \"7f5d86d9-0531-43ec-ad26-29f99daf42cb\") " Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.266784 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5d86d9-0531-43ec-ad26-29f99daf42cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f5d86d9-0531-43ec-ad26-29f99daf42cb" (UID: "7f5d86d9-0531-43ec-ad26-29f99daf42cb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.267307 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6899daa4-6104-4900-ab52-6ffaeff57788-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6899daa4-6104-4900-ab52-6ffaeff57788" (UID: "6899daa4-6104-4900-ab52-6ffaeff57788"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.274502 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f5d86d9-0531-43ec-ad26-29f99daf42cb-kube-api-access-hplnx" (OuterVolumeSpecName: "kube-api-access-hplnx") pod "7f5d86d9-0531-43ec-ad26-29f99daf42cb" (UID: "7f5d86d9-0531-43ec-ad26-29f99daf42cb"). InnerVolumeSpecName "kube-api-access-hplnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.276142 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6899daa4-6104-4900-ab52-6ffaeff57788-kube-api-access-pfm6h" (OuterVolumeSpecName: "kube-api-access-pfm6h") pod "6899daa4-6104-4900-ab52-6ffaeff57788" (UID: "6899daa4-6104-4900-ab52-6ffaeff57788"). InnerVolumeSpecName "kube-api-access-pfm6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.367450 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfm6h\" (UniqueName: \"kubernetes.io/projected/6899daa4-6104-4900-ab52-6ffaeff57788-kube-api-access-pfm6h\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.367493 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f5d86d9-0531-43ec-ad26-29f99daf42cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.367502 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6899daa4-6104-4900-ab52-6ffaeff57788-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.367512 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hplnx\" (UniqueName: \"kubernetes.io/projected/7f5d86d9-0531-43ec-ad26-29f99daf42cb-kube-api-access-hplnx\") on node \"crc\" DevicePath \"\"" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.595277 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-tmnlr" event={"ID":"7f5d86d9-0531-43ec-ad26-29f99daf42cb","Type":"ContainerDied","Data":"afc000619609ffb42ac61f81c3f509bc78024608f1da67d6ee2d2cfd8995aba6"} Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.595484 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afc000619609ffb42ac61f81c3f509bc78024608f1da67d6ee2d2cfd8995aba6" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.595287 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-tmnlr" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.603494 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" event={"ID":"035014cb-5042-4867-9d3c-4f0061a8e4ac","Type":"ContainerStarted","Data":"fde1aa8af6c94563e7bbf5b2047c9cd5fd53fa9284f8c06fc650f7cbe4400b9e"} Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.603779 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.605585 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mlk69" event={"ID":"b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab","Type":"ContainerDied","Data":"eb9527f45a1f085fa632306b8c59c9dd85b798f557dd9400a3218ba8b904ed0f"} Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.605675 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb9527f45a1f085fa632306b8c59c9dd85b798f557dd9400a3218ba8b904ed0f" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.605804 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mlk69" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.609525 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-570b-account-create-update-kslzx" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.609551 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-570b-account-create-update-kslzx" event={"ID":"6899daa4-6104-4900-ab52-6ffaeff57788","Type":"ContainerDied","Data":"766949606c8ae57e221c676af83abe292cb23d8a525dcf1388cdc648306e28f5"} Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.609582 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="766949606c8ae57e221c676af83abe292cb23d8a525dcf1388cdc648306e28f5" Feb 19 18:48:59 crc kubenswrapper[4813]: I0219 18:48:59.659571 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" podStartSLOduration=3.659548427 podStartE2EDuration="3.659548427s" podCreationTimestamp="2026-02-19 18:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:48:59.624778525 +0000 UTC m=+1158.850219066" watchObservedRunningTime="2026-02-19 18:48:59.659548427 +0000 UTC m=+1158.884988958" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.081410 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-wjwfp" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.086195 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2083-account-create-update-w69p8" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.092202 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cde0-account-create-update-pd9m9" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.180722 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af0adc50-6632-4b6f-a01f-56314538b07b-operator-scripts\") pod \"af0adc50-6632-4b6f-a01f-56314538b07b\" (UID: \"af0adc50-6632-4b6f-a01f-56314538b07b\") " Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.180926 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp5bn\" (UniqueName: \"kubernetes.io/projected/95494006-9962-4c6c-b3f6-0637d97734a5-kube-api-access-pp5bn\") pod \"95494006-9962-4c6c-b3f6-0637d97734a5\" (UID: \"95494006-9962-4c6c-b3f6-0637d97734a5\") " Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.180950 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjr7q\" (UniqueName: \"kubernetes.io/projected/af0adc50-6632-4b6f-a01f-56314538b07b-kube-api-access-tjr7q\") pod \"af0adc50-6632-4b6f-a01f-56314538b07b\" (UID: \"af0adc50-6632-4b6f-a01f-56314538b07b\") " Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.180990 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95494006-9962-4c6c-b3f6-0637d97734a5-operator-scripts\") pod \"95494006-9962-4c6c-b3f6-0637d97734a5\" (UID: \"95494006-9962-4c6c-b3f6-0637d97734a5\") " Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.181032 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cc92073-5619-4686-b083-0a824d82934f-operator-scripts\") pod \"2cc92073-5619-4686-b083-0a824d82934f\" (UID: \"2cc92073-5619-4686-b083-0a824d82934f\") " Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.181548 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jzghr\" (UniqueName: \"kubernetes.io/projected/2cc92073-5619-4686-b083-0a824d82934f-kube-api-access-jzghr\") pod \"2cc92073-5619-4686-b083-0a824d82934f\" (UID: \"2cc92073-5619-4686-b083-0a824d82934f\") " Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.181661 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95494006-9962-4c6c-b3f6-0637d97734a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95494006-9962-4c6c-b3f6-0637d97734a5" (UID: "95494006-9962-4c6c-b3f6-0637d97734a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.181665 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cc92073-5619-4686-b083-0a824d82934f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2cc92073-5619-4686-b083-0a824d82934f" (UID: "2cc92073-5619-4686-b083-0a824d82934f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.181812 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af0adc50-6632-4b6f-a01f-56314538b07b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af0adc50-6632-4b6f-a01f-56314538b07b" (UID: "af0adc50-6632-4b6f-a01f-56314538b07b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.182244 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af0adc50-6632-4b6f-a01f-56314538b07b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.182257 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95494006-9962-4c6c-b3f6-0637d97734a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.182265 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cc92073-5619-4686-b083-0a824d82934f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.186359 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95494006-9962-4c6c-b3f6-0637d97734a5-kube-api-access-pp5bn" (OuterVolumeSpecName: "kube-api-access-pp5bn") pod "95494006-9962-4c6c-b3f6-0637d97734a5" (UID: "95494006-9962-4c6c-b3f6-0637d97734a5"). InnerVolumeSpecName "kube-api-access-pp5bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.192650 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc92073-5619-4686-b083-0a824d82934f-kube-api-access-jzghr" (OuterVolumeSpecName: "kube-api-access-jzghr") pod "2cc92073-5619-4686-b083-0a824d82934f" (UID: "2cc92073-5619-4686-b083-0a824d82934f"). InnerVolumeSpecName "kube-api-access-jzghr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.192718 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af0adc50-6632-4b6f-a01f-56314538b07b-kube-api-access-tjr7q" (OuterVolumeSpecName: "kube-api-access-tjr7q") pod "af0adc50-6632-4b6f-a01f-56314538b07b" (UID: "af0adc50-6632-4b6f-a01f-56314538b07b"). InnerVolumeSpecName "kube-api-access-tjr7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.284075 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp5bn\" (UniqueName: \"kubernetes.io/projected/95494006-9962-4c6c-b3f6-0637d97734a5-kube-api-access-pp5bn\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.284145 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjr7q\" (UniqueName: \"kubernetes.io/projected/af0adc50-6632-4b6f-a01f-56314538b07b-kube-api-access-tjr7q\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.284160 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzghr\" (UniqueName: \"kubernetes.io/projected/2cc92073-5619-4686-b083-0a824d82934f-kube-api-access-jzghr\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.329869 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.329995 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.620978 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cde0-account-create-update-pd9m9" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.620975 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cde0-account-create-update-pd9m9" event={"ID":"af0adc50-6632-4b6f-a01f-56314538b07b","Type":"ContainerDied","Data":"a102ccdfd5708d37ddb917da5315c1478c511cf42539c0902bcdf963a11c8be0"} Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.621408 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a102ccdfd5708d37ddb917da5315c1478c511cf42539c0902bcdf963a11c8be0" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.627092 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-wjwfp" event={"ID":"2cc92073-5619-4686-b083-0a824d82934f","Type":"ContainerDied","Data":"09105d98beb7b0a2dcf8dd88de9618f66e2dbb8cdf7212468c2e54e890cd01c5"} Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.627135 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09105d98beb7b0a2dcf8dd88de9618f66e2dbb8cdf7212468c2e54e890cd01c5" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.627158 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-wjwfp" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.633212 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2083-account-create-update-w69p8" Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.633230 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2083-account-create-update-w69p8" event={"ID":"95494006-9962-4c6c-b3f6-0637d97734a5","Type":"ContainerDied","Data":"f003968245798b6b25e28ea521e7cdfc4c9cba54bc737efc6b16ade34871207c"} Feb 19 18:49:00 crc kubenswrapper[4813]: I0219 18:49:00.633279 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f003968245798b6b25e28ea521e7cdfc4c9cba54bc737efc6b16ade34871207c" Feb 19 18:49:04 crc kubenswrapper[4813]: I0219 18:49:04.674424 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lnxwr" event={"ID":"0028d082-4f3c-4810-ba82-311c776dc554","Type":"ContainerStarted","Data":"3de0eee80ceb950d2591d80fbb2ad7694516698ca054530af0e58d17e8d30584"} Feb 19 18:49:04 crc kubenswrapper[4813]: I0219 18:49:04.703010 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-lnxwr" podStartSLOduration=2.975635728 podStartE2EDuration="8.702949214s" podCreationTimestamp="2026-02-19 18:48:56 +0000 UTC" firstStartedPulling="2026-02-19 18:48:58.019712469 +0000 UTC m=+1157.245153010" lastFinishedPulling="2026-02-19 18:49:03.747025945 +0000 UTC m=+1162.972466496" observedRunningTime="2026-02-19 18:49:04.698643671 +0000 UTC m=+1163.924084282" watchObservedRunningTime="2026-02-19 18:49:04.702949214 +0000 UTC m=+1163.928389805" Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.215219 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.277014 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-fmhgx"] Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.277519 4813 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" podUID="cdd3b676-02d4-41a4-9077-a61bd273f09a" containerName="dnsmasq-dns" containerID="cri-o://c13b5d5563830f3fd2e991b9ee4b32fb0fc2bf40bb76201f1e810377d822337c" gracePeriod=10 Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.713423 4813 generic.go:334] "Generic (PLEG): container finished" podID="0028d082-4f3c-4810-ba82-311c776dc554" containerID="3de0eee80ceb950d2591d80fbb2ad7694516698ca054530af0e58d17e8d30584" exitCode=0 Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.713491 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lnxwr" event={"ID":"0028d082-4f3c-4810-ba82-311c776dc554","Type":"ContainerDied","Data":"3de0eee80ceb950d2591d80fbb2ad7694516698ca054530af0e58d17e8d30584"} Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.715508 4813 generic.go:334] "Generic (PLEG): container finished" podID="cdd3b676-02d4-41a4-9077-a61bd273f09a" containerID="c13b5d5563830f3fd2e991b9ee4b32fb0fc2bf40bb76201f1e810377d822337c" exitCode=0 Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.715531 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" event={"ID":"cdd3b676-02d4-41a4-9077-a61bd273f09a","Type":"ContainerDied","Data":"c13b5d5563830f3fd2e991b9ee4b32fb0fc2bf40bb76201f1e810377d822337c"} Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.715547 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" event={"ID":"cdd3b676-02d4-41a4-9077-a61bd273f09a","Type":"ContainerDied","Data":"242563cc76845f45d4f2666b6ca8d3997165c687fd2b5e2f40ddd0ec8218896b"} Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.715559 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="242563cc76845f45d4f2666b6ca8d3997165c687fd2b5e2f40ddd0ec8218896b" Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.749435 4813 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.822329 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-config\") pod \"cdd3b676-02d4-41a4-9077-a61bd273f09a\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.822413 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w57qx\" (UniqueName: \"kubernetes.io/projected/cdd3b676-02d4-41a4-9077-a61bd273f09a-kube-api-access-w57qx\") pod \"cdd3b676-02d4-41a4-9077-a61bd273f09a\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.822470 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-ovsdbserver-nb\") pod \"cdd3b676-02d4-41a4-9077-a61bd273f09a\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.822529 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-dns-svc\") pod \"cdd3b676-02d4-41a4-9077-a61bd273f09a\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.822571 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-ovsdbserver-sb\") pod \"cdd3b676-02d4-41a4-9077-a61bd273f09a\" (UID: \"cdd3b676-02d4-41a4-9077-a61bd273f09a\") " Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.852243 4813 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/cdd3b676-02d4-41a4-9077-a61bd273f09a-kube-api-access-w57qx" (OuterVolumeSpecName: "kube-api-access-w57qx") pod "cdd3b676-02d4-41a4-9077-a61bd273f09a" (UID: "cdd3b676-02d4-41a4-9077-a61bd273f09a"). InnerVolumeSpecName "kube-api-access-w57qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.869938 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-config" (OuterVolumeSpecName: "config") pod "cdd3b676-02d4-41a4-9077-a61bd273f09a" (UID: "cdd3b676-02d4-41a4-9077-a61bd273f09a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.871105 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cdd3b676-02d4-41a4-9077-a61bd273f09a" (UID: "cdd3b676-02d4-41a4-9077-a61bd273f09a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.872708 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cdd3b676-02d4-41a4-9077-a61bd273f09a" (UID: "cdd3b676-02d4-41a4-9077-a61bd273f09a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.880535 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cdd3b676-02d4-41a4-9077-a61bd273f09a" (UID: "cdd3b676-02d4-41a4-9077-a61bd273f09a"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.925095 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.925144 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w57qx\" (UniqueName: \"kubernetes.io/projected/cdd3b676-02d4-41a4-9077-a61bd273f09a-kube-api-access-w57qx\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.925169 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.925186 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:07 crc kubenswrapper[4813]: I0219 18:49:07.925205 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd3b676-02d4-41a4-9077-a61bd273f09a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:08 crc kubenswrapper[4813]: E0219 18:49:08.618463 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69ff3db_8806_451a_9df0_c6289c327579.slice/crio-b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69ff3db_8806_451a_9df0_c6289c327579.slice/crio-conmon-b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168.scope\": RecentStats: unable to find data in memory cache]" Feb 19 18:49:08 crc kubenswrapper[4813]: I0219 18:49:08.724848 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74cc88677c-fmhgx" Feb 19 18:49:08 crc kubenswrapper[4813]: I0219 18:49:08.780437 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-fmhgx"] Feb 19 18:49:08 crc kubenswrapper[4813]: I0219 18:49:08.787430 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74cc88677c-fmhgx"] Feb 19 18:49:09 crc kubenswrapper[4813]: I0219 18:49:09.131544 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-lnxwr" Feb 19 18:49:09 crc kubenswrapper[4813]: I0219 18:49:09.245392 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0028d082-4f3c-4810-ba82-311c776dc554-config-data\") pod \"0028d082-4f3c-4810-ba82-311c776dc554\" (UID: \"0028d082-4f3c-4810-ba82-311c776dc554\") " Feb 19 18:49:09 crc kubenswrapper[4813]: I0219 18:49:09.245459 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0028d082-4f3c-4810-ba82-311c776dc554-combined-ca-bundle\") pod \"0028d082-4f3c-4810-ba82-311c776dc554\" (UID: \"0028d082-4f3c-4810-ba82-311c776dc554\") " Feb 19 18:49:09 crc kubenswrapper[4813]: I0219 18:49:09.245642 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f2wb\" (UniqueName: \"kubernetes.io/projected/0028d082-4f3c-4810-ba82-311c776dc554-kube-api-access-8f2wb\") pod \"0028d082-4f3c-4810-ba82-311c776dc554\" (UID: 
\"0028d082-4f3c-4810-ba82-311c776dc554\") " Feb 19 18:49:09 crc kubenswrapper[4813]: I0219 18:49:09.254311 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0028d082-4f3c-4810-ba82-311c776dc554-kube-api-access-8f2wb" (OuterVolumeSpecName: "kube-api-access-8f2wb") pod "0028d082-4f3c-4810-ba82-311c776dc554" (UID: "0028d082-4f3c-4810-ba82-311c776dc554"). InnerVolumeSpecName "kube-api-access-8f2wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:49:09 crc kubenswrapper[4813]: I0219 18:49:09.276715 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0028d082-4f3c-4810-ba82-311c776dc554-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0028d082-4f3c-4810-ba82-311c776dc554" (UID: "0028d082-4f3c-4810-ba82-311c776dc554"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:09 crc kubenswrapper[4813]: I0219 18:49:09.319819 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0028d082-4f3c-4810-ba82-311c776dc554-config-data" (OuterVolumeSpecName: "config-data") pod "0028d082-4f3c-4810-ba82-311c776dc554" (UID: "0028d082-4f3c-4810-ba82-311c776dc554"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:09 crc kubenswrapper[4813]: I0219 18:49:09.347243 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0028d082-4f3c-4810-ba82-311c776dc554-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:09 crc kubenswrapper[4813]: I0219 18:49:09.347463 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0028d082-4f3c-4810-ba82-311c776dc554-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:09 crc kubenswrapper[4813]: I0219 18:49:09.347546 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f2wb\" (UniqueName: \"kubernetes.io/projected/0028d082-4f3c-4810-ba82-311c776dc554-kube-api-access-8f2wb\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:09 crc kubenswrapper[4813]: I0219 18:49:09.482490 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd3b676-02d4-41a4-9077-a61bd273f09a" path="/var/lib/kubelet/pods/cdd3b676-02d4-41a4-9077-a61bd273f09a/volumes" Feb 19 18:49:09 crc kubenswrapper[4813]: I0219 18:49:09.735776 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lnxwr" event={"ID":"0028d082-4f3c-4810-ba82-311c776dc554","Type":"ContainerDied","Data":"869174e9053d8e87e8e33c7f426ca1f007b01a3c28a311605fe0050d337801b9"} Feb 19 18:49:09 crc kubenswrapper[4813]: I0219 18:49:09.736136 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="869174e9053d8e87e8e33c7f426ca1f007b01a3c28a311605fe0050d337801b9" Feb 19 18:49:09 crc kubenswrapper[4813]: I0219 18:49:09.735862 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-lnxwr" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.083399 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-n2twj"] Feb 19 18:49:10 crc kubenswrapper[4813]: E0219 18:49:10.083756 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab" containerName="mariadb-database-create" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.083768 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab" containerName="mariadb-database-create" Feb 19 18:49:10 crc kubenswrapper[4813]: E0219 18:49:10.083779 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95494006-9962-4c6c-b3f6-0637d97734a5" containerName="mariadb-account-create-update" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.083784 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="95494006-9962-4c6c-b3f6-0637d97734a5" containerName="mariadb-account-create-update" Feb 19 18:49:10 crc kubenswrapper[4813]: E0219 18:49:10.083798 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd3b676-02d4-41a4-9077-a61bd273f09a" containerName="init" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.083804 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd3b676-02d4-41a4-9077-a61bd273f09a" containerName="init" Feb 19 18:49:10 crc kubenswrapper[4813]: E0219 18:49:10.083814 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af0adc50-6632-4b6f-a01f-56314538b07b" containerName="mariadb-account-create-update" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.083820 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="af0adc50-6632-4b6f-a01f-56314538b07b" containerName="mariadb-account-create-update" Feb 19 18:49:10 crc kubenswrapper[4813]: E0219 18:49:10.083831 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7f5d86d9-0531-43ec-ad26-29f99daf42cb" containerName="mariadb-database-create" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.083836 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5d86d9-0531-43ec-ad26-29f99daf42cb" containerName="mariadb-database-create" Feb 19 18:49:10 crc kubenswrapper[4813]: E0219 18:49:10.083850 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd3b676-02d4-41a4-9077-a61bd273f09a" containerName="dnsmasq-dns" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.083856 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd3b676-02d4-41a4-9077-a61bd273f09a" containerName="dnsmasq-dns" Feb 19 18:49:10 crc kubenswrapper[4813]: E0219 18:49:10.083869 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc92073-5619-4686-b083-0a824d82934f" containerName="mariadb-database-create" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.083875 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc92073-5619-4686-b083-0a824d82934f" containerName="mariadb-database-create" Feb 19 18:49:10 crc kubenswrapper[4813]: E0219 18:49:10.083888 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0028d082-4f3c-4810-ba82-311c776dc554" containerName="keystone-db-sync" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.083894 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0028d082-4f3c-4810-ba82-311c776dc554" containerName="keystone-db-sync" Feb 19 18:49:10 crc kubenswrapper[4813]: E0219 18:49:10.083905 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6899daa4-6104-4900-ab52-6ffaeff57788" containerName="mariadb-account-create-update" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.083910 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6899daa4-6104-4900-ab52-6ffaeff57788" containerName="mariadb-account-create-update" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.084060 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc92073-5619-4686-b083-0a824d82934f" containerName="mariadb-database-create" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.084070 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab" containerName="mariadb-database-create" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.084079 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="af0adc50-6632-4b6f-a01f-56314538b07b" containerName="mariadb-account-create-update" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.084089 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="95494006-9962-4c6c-b3f6-0637d97734a5" containerName="mariadb-account-create-update" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.084098 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5d86d9-0531-43ec-ad26-29f99daf42cb" containerName="mariadb-database-create" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.084107 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6899daa4-6104-4900-ab52-6ffaeff57788" containerName="mariadb-account-create-update" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.084115 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0028d082-4f3c-4810-ba82-311c776dc554" containerName="keystone-db-sync" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.084126 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd3b676-02d4-41a4-9077-a61bd273f09a" containerName="dnsmasq-dns" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.084637 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.089577 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.089793 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.090005 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4lsm2" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.090155 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.090297 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.108387 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d67cdfc8f-gtgh9"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.109568 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.120258 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n2twj"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.163255 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d67cdfc8f-gtgh9"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.164213 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-ovsdbserver-nb\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.164256 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jlzr\" (UniqueName: \"kubernetes.io/projected/32e6bfc4-62d0-4597-ad22-f88689246209-kube-api-access-2jlzr\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.164324 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxrbn\" (UniqueName: \"kubernetes.io/projected/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-kube-api-access-mxrbn\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.164375 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-dns-svc\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " 
pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.164411 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-scripts\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.164440 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-ovsdbserver-sb\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.164474 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-credential-keys\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.164523 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-dns-swift-storage-0\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.164561 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-combined-ca-bundle\") pod \"keystone-bootstrap-n2twj\" (UID: 
\"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.164654 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-config\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.164684 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-fernet-keys\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.164777 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-config-data\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.265831 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxrbn\" (UniqueName: \"kubernetes.io/projected/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-kube-api-access-mxrbn\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.265908 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-dns-svc\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " 
pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.265946 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-scripts\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.265992 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-ovsdbserver-sb\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.266021 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-credential-keys\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.266067 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-dns-swift-storage-0\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.266101 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-combined-ca-bundle\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc 
kubenswrapper[4813]: I0219 18:49:10.266122 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-config\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.266138 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-fernet-keys\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.266158 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-config-data\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.266188 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-ovsdbserver-nb\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.266210 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jlzr\" (UniqueName: \"kubernetes.io/projected/32e6bfc4-62d0-4597-ad22-f88689246209-kube-api-access-2jlzr\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.267492 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-dns-svc\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.268338 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-ovsdbserver-nb\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.268372 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-ovsdbserver-sb\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.268871 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-config\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.268991 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-dns-swift-storage-0\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.280615 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-credential-keys\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.282135 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-fernet-keys\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.292228 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-scripts\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.292658 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-config-data\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.292809 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-combined-ca-bundle\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.295863 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jlzr\" (UniqueName: \"kubernetes.io/projected/32e6bfc4-62d0-4597-ad22-f88689246209-kube-api-access-2jlzr\") pod \"dnsmasq-dns-7d67cdfc8f-gtgh9\" (UID: 
\"32e6bfc4-62d0-4597-ad22-f88689246209\") " pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.317031 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-nnwfh"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.318642 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxrbn\" (UniqueName: \"kubernetes.io/projected/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-kube-api-access-mxrbn\") pod \"keystone-bootstrap-n2twj\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.322004 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.329211 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.329463 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.329577 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6dt8c" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.359986 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nnwfh"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.370816 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwdj6\" (UniqueName: \"kubernetes.io/projected/a87080d3-007c-48e0-aa89-b82c5d9dafab-kube-api-access-gwdj6\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.370894 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-scripts\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.370967 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-config-data\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.370994 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a87080d3-007c-48e0-aa89-b82c5d9dafab-etc-machine-id\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.371016 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-db-sync-config-data\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.371044 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-combined-ca-bundle\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.394102 4813 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-db-sync-fbp7p"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.395380 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fbp7p" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.399670 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-69wj7" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.400352 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.406829 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.408475 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.412664 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fbp7p"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.433817 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-lc2jp"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.435152 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.453411 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wqc9s" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.455908 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.478606 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.478875 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.488507 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwdj6\" (UniqueName: \"kubernetes.io/projected/a87080d3-007c-48e0-aa89-b82c5d9dafab-kube-api-access-gwdj6\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.488830 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6f5k\" (UniqueName: \"kubernetes.io/projected/6fa56113-499a-490c-bba9-8676d4312e4e-kube-api-access-v6f5k\") pod \"neutron-db-sync-fbp7p\" (UID: \"6fa56113-499a-490c-bba9-8676d4312e4e\") " pod="openstack/neutron-db-sync-fbp7p" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.488883 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-scripts\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.488944 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6fa56113-499a-490c-bba9-8676d4312e4e-config\") pod \"neutron-db-sync-fbp7p\" (UID: \"6fa56113-499a-490c-bba9-8676d4312e4e\") " pod="openstack/neutron-db-sync-fbp7p" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.489031 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-config-data\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.489070 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a87080d3-007c-48e0-aa89-b82c5d9dafab-etc-machine-id\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.489111 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-db-sync-config-data\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.489150 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa56113-499a-490c-bba9-8676d4312e4e-combined-ca-bundle\") pod \"neutron-db-sync-fbp7p\" (UID: \"6fa56113-499a-490c-bba9-8676d4312e4e\") " pod="openstack/neutron-db-sync-fbp7p" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.489194 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-combined-ca-bundle\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.500801 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/a87080d3-007c-48e0-aa89-b82c5d9dafab-etc-machine-id\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.525403 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-scripts\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.547489 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lc2jp"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.561416 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwdj6\" (UniqueName: \"kubernetes.io/projected/a87080d3-007c-48e0-aa89-b82c5d9dafab-kube-api-access-gwdj6\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.578040 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-combined-ca-bundle\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.580727 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-config-data\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.581243 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-db-sync-config-data\") pod \"cinder-db-sync-nnwfh\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.586105 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d67cdfc8f-gtgh9"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.592466 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6f5k\" (UniqueName: \"kubernetes.io/projected/6fa56113-499a-490c-bba9-8676d4312e4e-kube-api-access-v6f5k\") pod \"neutron-db-sync-fbp7p\" (UID: \"6fa56113-499a-490c-bba9-8676d4312e4e\") " pod="openstack/neutron-db-sync-fbp7p" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.592597 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6fa56113-499a-490c-bba9-8676d4312e4e-config\") pod \"neutron-db-sync-fbp7p\" (UID: \"6fa56113-499a-490c-bba9-8676d4312e4e\") " pod="openstack/neutron-db-sync-fbp7p" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.592708 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-scripts\") pod \"placement-db-sync-lc2jp\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.592743 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa56113-499a-490c-bba9-8676d4312e4e-combined-ca-bundle\") pod \"neutron-db-sync-fbp7p\" (UID: \"6fa56113-499a-490c-bba9-8676d4312e4e\") " pod="openstack/neutron-db-sync-fbp7p" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.592873 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27d0219c-fd97-4271-9059-0903aae52f65-logs\") pod \"placement-db-sync-lc2jp\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.592907 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-combined-ca-bundle\") pod \"placement-db-sync-lc2jp\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.592944 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-config-data\") pod \"placement-db-sync-lc2jp\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.592987 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws8w4\" (UniqueName: \"kubernetes.io/projected/27d0219c-fd97-4271-9059-0903aae52f65-kube-api-access-ws8w4\") pod \"placement-db-sync-lc2jp\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.628926 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa56113-499a-490c-bba9-8676d4312e4e-combined-ca-bundle\") pod \"neutron-db-sync-fbp7p\" (UID: \"6fa56113-499a-490c-bba9-8676d4312e4e\") " pod="openstack/neutron-db-sync-fbp7p" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.629050 4813 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.633985 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6fa56113-499a-490c-bba9-8676d4312e4e-config\") pod \"neutron-db-sync-fbp7p\" (UID: \"6fa56113-499a-490c-bba9-8676d4312e4e\") " pod="openstack/neutron-db-sync-fbp7p" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.663585 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6f5k\" (UniqueName: \"kubernetes.io/projected/6fa56113-499a-490c-bba9-8676d4312e4e-kube-api-access-v6f5k\") pod \"neutron-db-sync-fbp7p\" (UID: \"6fa56113-499a-490c-bba9-8676d4312e4e\") " pod="openstack/neutron-db-sync-fbp7p" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.671554 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67dccc895-gzmtt"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.672663 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.673055 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.677376 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.681359 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.688381 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.700840 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27d0219c-fd97-4271-9059-0903aae52f65-logs\") pod \"placement-db-sync-lc2jp\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.701033 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-combined-ca-bundle\") pod \"placement-db-sync-lc2jp\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.701180 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-config-data\") pod \"placement-db-sync-lc2jp\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.701315 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws8w4\" (UniqueName: \"kubernetes.io/projected/27d0219c-fd97-4271-9059-0903aae52f65-kube-api-access-ws8w4\") pod \"placement-db-sync-lc2jp\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.701660 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27d0219c-fd97-4271-9059-0903aae52f65-logs\") pod \"placement-db-sync-lc2jp\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.702060 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-scripts\") pod \"placement-db-sync-lc2jp\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.706661 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-scripts\") pod \"placement-db-sync-lc2jp\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.713162 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-config-data\") pod \"placement-db-sync-lc2jp\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.718769 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-combined-ca-bundle\") pod \"placement-db-sync-lc2jp\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.719179 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fbp7p" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.744507 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws8w4\" (UniqueName: \"kubernetes.io/projected/27d0219c-fd97-4271-9059-0903aae52f65-kube-api-access-ws8w4\") pod \"placement-db-sync-lc2jp\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.764050 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.803092 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-config\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.803130 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4446e6ed-7663-41ab-9fae-f6da8f4f5449-run-httpd\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.803145 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-ovsdbserver-nb\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.803163 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.803203 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-ovsdbserver-sb\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.803222 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-config-data\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.803238 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-dns-svc\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.803252 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jdmx\" (UniqueName: \"kubernetes.io/projected/4446e6ed-7663-41ab-9fae-f6da8f4f5449-kube-api-access-4jdmx\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.803265 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ghf8\" (UniqueName: 
\"kubernetes.io/projected/85fc318c-5591-4bb8-a92f-04b0f34884e7-kube-api-access-8ghf8\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.803325 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.803343 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-scripts\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.803399 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-dns-swift-storage-0\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.803428 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4446e6ed-7663-41ab-9fae-f6da8f4f5449-log-httpd\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.868830 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67dccc895-gzmtt"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.890132 4813 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/barbican-db-sync-9s5kt"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.894089 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9s5kt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.898055 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-x7ftv" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.906099 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-dns-swift-storage-0\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.906173 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4446e6ed-7663-41ab-9fae-f6da8f4f5449-log-httpd\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.906337 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-config\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.906361 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4446e6ed-7663-41ab-9fae-f6da8f4f5449-run-httpd\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.906395 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-ovsdbserver-nb\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.906418 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.906505 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-ovsdbserver-sb\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.906530 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-config-data\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.906592 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-dns-svc\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.906609 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jdmx\" (UniqueName: 
\"kubernetes.io/projected/4446e6ed-7663-41ab-9fae-f6da8f4f5449-kube-api-access-4jdmx\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.906626 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ghf8\" (UniqueName: \"kubernetes.io/projected/85fc318c-5591-4bb8-a92f-04b0f34884e7-kube-api-access-8ghf8\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.906732 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.906758 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-scripts\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.906972 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4446e6ed-7663-41ab-9fae-f6da8f4f5449-run-httpd\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.907735 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-dns-swift-storage-0\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " 
pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.907998 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4446e6ed-7663-41ab-9fae-f6da8f4f5449-log-httpd\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.910826 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.912421 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-ovsdbserver-nb\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.916593 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.917351 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-scripts\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.917403 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-config\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc 
kubenswrapper[4813]: I0219 18:49:10.917506 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9s5kt"] Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.918556 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-ovsdbserver-sb\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.918761 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-dns-svc\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.920311 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-config-data\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.932898 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 18:49:10.973324 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jdmx\" (UniqueName: \"kubernetes.io/projected/4446e6ed-7663-41ab-9fae-f6da8f4f5449-kube-api-access-4jdmx\") pod \"ceilometer-0\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " pod="openstack/ceilometer-0" Feb 19 18:49:10 crc kubenswrapper[4813]: I0219 
18:49:10.989448 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ghf8\" (UniqueName: \"kubernetes.io/projected/85fc318c-5591-4bb8-a92f-04b0f34884e7-kube-api-access-8ghf8\") pod \"dnsmasq-dns-67dccc895-gzmtt\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.005200 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.008249 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d622858a-0915-43b1-9169-8f176f0b16f0-combined-ca-bundle\") pod \"barbican-db-sync-9s5kt\" (UID: \"d622858a-0915-43b1-9169-8f176f0b16f0\") " pod="openstack/barbican-db-sync-9s5kt" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.008304 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d622858a-0915-43b1-9169-8f176f0b16f0-db-sync-config-data\") pod \"barbican-db-sync-9s5kt\" (UID: \"d622858a-0915-43b1-9169-8f176f0b16f0\") " pod="openstack/barbican-db-sync-9s5kt" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.008338 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjglc\" (UniqueName: \"kubernetes.io/projected/d622858a-0915-43b1-9169-8f176f0b16f0-kube-api-access-qjglc\") pod \"barbican-db-sync-9s5kt\" (UID: \"d622858a-0915-43b1-9169-8f176f0b16f0\") " pod="openstack/barbican-db-sync-9s5kt" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.037676 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.078969 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.111046 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d622858a-0915-43b1-9169-8f176f0b16f0-combined-ca-bundle\") pod \"barbican-db-sync-9s5kt\" (UID: \"d622858a-0915-43b1-9169-8f176f0b16f0\") " pod="openstack/barbican-db-sync-9s5kt" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.111227 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d622858a-0915-43b1-9169-8f176f0b16f0-db-sync-config-data\") pod \"barbican-db-sync-9s5kt\" (UID: \"d622858a-0915-43b1-9169-8f176f0b16f0\") " pod="openstack/barbican-db-sync-9s5kt" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.112052 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjglc\" (UniqueName: \"kubernetes.io/projected/d622858a-0915-43b1-9169-8f176f0b16f0-kube-api-access-qjglc\") pod \"barbican-db-sync-9s5kt\" (UID: \"d622858a-0915-43b1-9169-8f176f0b16f0\") " pod="openstack/barbican-db-sync-9s5kt" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.117465 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d622858a-0915-43b1-9169-8f176f0b16f0-combined-ca-bundle\") pod \"barbican-db-sync-9s5kt\" (UID: \"d622858a-0915-43b1-9169-8f176f0b16f0\") " pod="openstack/barbican-db-sync-9s5kt" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.120411 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/d622858a-0915-43b1-9169-8f176f0b16f0-db-sync-config-data\") pod \"barbican-db-sync-9s5kt\" (UID: \"d622858a-0915-43b1-9169-8f176f0b16f0\") " pod="openstack/barbican-db-sync-9s5kt" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.133694 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjglc\" (UniqueName: \"kubernetes.io/projected/d622858a-0915-43b1-9169-8f176f0b16f0-kube-api-access-qjglc\") pod \"barbican-db-sync-9s5kt\" (UID: \"d622858a-0915-43b1-9169-8f176f0b16f0\") " pod="openstack/barbican-db-sync-9s5kt" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.221015 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.226014 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.228558 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.229121 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9s5kt" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.234535 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.234713 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.234835 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d6cgg" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.240474 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.262280 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.263552 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.266086 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.266344 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.303367 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.316588 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7294ec3-b62c-46d5-845e-6e3463859e0f-logs\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.316646 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.316677 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-scripts\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.316721 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.316753 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.316799 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2q4s\" (UniqueName: \"kubernetes.io/projected/6da8fcfe-c22a-46a7-af7d-d02dd947d168-kube-api-access-s2q4s\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.316821 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6da8fcfe-c22a-46a7-af7d-d02dd947d168-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.316903 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6da8fcfe-c22a-46a7-af7d-d02dd947d168-logs\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.316946 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4qr8d\" (UniqueName: \"kubernetes.io/projected/a7294ec3-b62c-46d5-845e-6e3463859e0f-kube-api-access-4qr8d\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.316986 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.317012 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-config-data\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.317032 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7294ec3-b62c-46d5-845e-6e3463859e0f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.317076 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.317100 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.317122 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.317141 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.334367 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n2twj"] Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.345139 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d67cdfc8f-gtgh9"] Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.353651 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nnwfh"] Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.419800 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qr8d\" (UniqueName: \"kubernetes.io/projected/a7294ec3-b62c-46d5-845e-6e3463859e0f-kube-api-access-4qr8d\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.419838 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.419866 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-config-data\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.419882 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7294ec3-b62c-46d5-845e-6e3463859e0f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.419913 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.419934 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc 
kubenswrapper[4813]: I0219 18:49:11.419995 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.420015 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.420031 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7294ec3-b62c-46d5-845e-6e3463859e0f-logs\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.420077 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.420095 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-scripts\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.420128 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.420157 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.420189 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2q4s\" (UniqueName: \"kubernetes.io/projected/6da8fcfe-c22a-46a7-af7d-d02dd947d168-kube-api-access-s2q4s\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.420207 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6da8fcfe-c22a-46a7-af7d-d02dd947d168-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.420227 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6da8fcfe-c22a-46a7-af7d-d02dd947d168-logs\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.420508 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a7294ec3-b62c-46d5-845e-6e3463859e0f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.420633 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6da8fcfe-c22a-46a7-af7d-d02dd947d168-logs\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.420852 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7294ec3-b62c-46d5-845e-6e3463859e0f-logs\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.421212 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.429230 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.429394 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6da8fcfe-c22a-46a7-af7d-d02dd947d168-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.431147 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.434222 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-config-data\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.434789 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.436801 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.436934 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.444116 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.448816 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qr8d\" (UniqueName: \"kubernetes.io/projected/a7294ec3-b62c-46d5-845e-6e3463859e0f-kube-api-access-4qr8d\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.458810 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-scripts\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.470307 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.474287 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 
18:49:11.475765 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.480316 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2q4s\" (UniqueName: \"kubernetes.io/projected/6da8fcfe-c22a-46a7-af7d-d02dd947d168-kube-api-access-s2q4s\") pod \"glance-default-external-api-0\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.571472 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.591723 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.641062 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fbp7p"] Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.713739 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lc2jp"] Feb 19 18:49:11 crc kubenswrapper[4813]: W0219 18:49:11.744589 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27d0219c_fd97_4271_9059_0903aae52f65.slice/crio-90ecf777e2c00afc46bc04b7c0b1b6d8b154239816f33f897dd45c5b0f6d2018 WatchSource:0}: Error finding container 90ecf777e2c00afc46bc04b7c0b1b6d8b154239816f33f897dd45c5b0f6d2018: Status 404 returned error can't find the container with id 90ecf777e2c00afc46bc04b7c0b1b6d8b154239816f33f897dd45c5b0f6d2018 Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.772110 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67dccc895-gzmtt"] Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.799287 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.804755 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" event={"ID":"32e6bfc4-62d0-4597-ad22-f88689246209","Type":"ContainerStarted","Data":"ba1d3593d5c58b2dd9017dadeb5f02de33c7817bf8b802251a069af48ae2c5af"} Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.807285 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n2twj" event={"ID":"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94","Type":"ContainerStarted","Data":"f36a43ec1ed98f0f35c14335002ed4e7830736935198936a699bcc1ffc7dd4d9"} Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.808391 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-nnwfh" event={"ID":"a87080d3-007c-48e0-aa89-b82c5d9dafab","Type":"ContainerStarted","Data":"2c8235698e5d23d7a891e4ad8348e349bc9f7ef7320cecb4d8acafa6dfd507fc"} Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.812070 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lc2jp" event={"ID":"27d0219c-fd97-4271-9059-0903aae52f65","Type":"ContainerStarted","Data":"90ecf777e2c00afc46bc04b7c0b1b6d8b154239816f33f897dd45c5b0f6d2018"} Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.814837 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fbp7p" event={"ID":"6fa56113-499a-490c-bba9-8676d4312e4e","Type":"ContainerStarted","Data":"a6fc1a080c0a9e7c76b176b93f046a9dd936271ea8542bdb7f8c05aa35b81b94"} Feb 19 18:49:11 crc kubenswrapper[4813]: I0219 18:49:11.947263 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9s5kt"] Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.261606 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.356179 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.836356 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7294ec3-b62c-46d5-845e-6e3463859e0f","Type":"ContainerStarted","Data":"fe3ba7035b06d3f7343e51f366947bf4e7eb1c4d3b584c8e721f30c05367b241"} Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.839588 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n2twj" event={"ID":"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94","Type":"ContainerStarted","Data":"fbf2efd8df51468ee3111d03323da4ae9c317212db4f2d94ec596fcf42163d90"} Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.846800 4813 
generic.go:334] "Generic (PLEG): container finished" podID="85fc318c-5591-4bb8-a92f-04b0f34884e7" containerID="b0864f50c0c2d40b2316664b91fc781b2661ec928d8f7c82bae05c26bc9c992d" exitCode=0 Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.846846 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dccc895-gzmtt" event={"ID":"85fc318c-5591-4bb8-a92f-04b0f34884e7","Type":"ContainerDied","Data":"b0864f50c0c2d40b2316664b91fc781b2661ec928d8f7c82bae05c26bc9c992d"} Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.846862 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dccc895-gzmtt" event={"ID":"85fc318c-5591-4bb8-a92f-04b0f34884e7","Type":"ContainerStarted","Data":"ab56bf256d801254cd2abf2624c1ce3a538ac630039c3a3f5dc41f322ad33d7d"} Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.849230 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6da8fcfe-c22a-46a7-af7d-d02dd947d168","Type":"ContainerStarted","Data":"ac30b5e58e5de4eb0552c17e9a3a0e993bd0872bdfe36b779b6cd3abb1a69c2c"} Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.851260 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4446e6ed-7663-41ab-9fae-f6da8f4f5449","Type":"ContainerStarted","Data":"ce80f7aef1a0438644020c5f48d55ced01a988ffc32b33ab807a1b94c1ba8132"} Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.855274 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9s5kt" event={"ID":"d622858a-0915-43b1-9169-8f176f0b16f0","Type":"ContainerStarted","Data":"c69d607de1b17bf98758cc16725aa60b9f622f4ccb07684b8b40e0c336c57423"} Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.873547 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-n2twj" podStartSLOduration=2.873530086 podStartE2EDuration="2.873530086s" 
podCreationTimestamp="2026-02-19 18:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:49:12.869401309 +0000 UTC m=+1172.094841850" watchObservedRunningTime="2026-02-19 18:49:12.873530086 +0000 UTC m=+1172.098970627" Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.876135 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.876184 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fbp7p" event={"ID":"6fa56113-499a-490c-bba9-8676d4312e4e","Type":"ContainerStarted","Data":"b2e48289ecd4bad8376d6fcd570aeec36797339684b0340044cc436faa9f613e"} Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.967305 4813 generic.go:334] "Generic (PLEG): container finished" podID="32e6bfc4-62d0-4597-ad22-f88689246209" containerID="a5e4584d98c8cc01e0ce8d0e934da101b0f96b7f4bdd4147567b87533d59dd47" exitCode=0 Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.967350 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" event={"ID":"32e6bfc4-62d0-4597-ad22-f88689246209","Type":"ContainerDied","Data":"a5e4584d98c8cc01e0ce8d0e934da101b0f96b7f4bdd4147567b87533d59dd47"} Feb 19 18:49:12 crc kubenswrapper[4813]: I0219 18:49:12.982871 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-fbp7p" podStartSLOduration=2.982850677 podStartE2EDuration="2.982850677s" podCreationTimestamp="2026-02-19 18:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:49:12.967615738 +0000 UTC m=+1172.193056279" watchObservedRunningTime="2026-02-19 18:49:12.982850677 +0000 UTC m=+1172.208291218" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.047021 4813 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.103859 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.625897 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.710433 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-dns-svc\") pod \"32e6bfc4-62d0-4597-ad22-f88689246209\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.710485 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-ovsdbserver-nb\") pod \"32e6bfc4-62d0-4597-ad22-f88689246209\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.710513 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-config\") pod \"32e6bfc4-62d0-4597-ad22-f88689246209\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.710531 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-ovsdbserver-sb\") pod \"32e6bfc4-62d0-4597-ad22-f88689246209\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.710596 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jlzr\" 
(UniqueName: \"kubernetes.io/projected/32e6bfc4-62d0-4597-ad22-f88689246209-kube-api-access-2jlzr\") pod \"32e6bfc4-62d0-4597-ad22-f88689246209\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.710620 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-dns-swift-storage-0\") pod \"32e6bfc4-62d0-4597-ad22-f88689246209\" (UID: \"32e6bfc4-62d0-4597-ad22-f88689246209\") " Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.718217 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32e6bfc4-62d0-4597-ad22-f88689246209-kube-api-access-2jlzr" (OuterVolumeSpecName: "kube-api-access-2jlzr") pod "32e6bfc4-62d0-4597-ad22-f88689246209" (UID: "32e6bfc4-62d0-4597-ad22-f88689246209"). InnerVolumeSpecName "kube-api-access-2jlzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.732864 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "32e6bfc4-62d0-4597-ad22-f88689246209" (UID: "32e6bfc4-62d0-4597-ad22-f88689246209"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.735975 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-config" (OuterVolumeSpecName: "config") pod "32e6bfc4-62d0-4597-ad22-f88689246209" (UID: "32e6bfc4-62d0-4597-ad22-f88689246209"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.740366 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "32e6bfc4-62d0-4597-ad22-f88689246209" (UID: "32e6bfc4-62d0-4597-ad22-f88689246209"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.742609 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32e6bfc4-62d0-4597-ad22-f88689246209" (UID: "32e6bfc4-62d0-4597-ad22-f88689246209"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.749046 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "32e6bfc4-62d0-4597-ad22-f88689246209" (UID: "32e6bfc4-62d0-4597-ad22-f88689246209"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.811663 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jlzr\" (UniqueName: \"kubernetes.io/projected/32e6bfc4-62d0-4597-ad22-f88689246209-kube-api-access-2jlzr\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.811711 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.811723 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.811734 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.811760 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.811768 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32e6bfc4-62d0-4597-ad22-f88689246209-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.992774 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" event={"ID":"32e6bfc4-62d0-4597-ad22-f88689246209","Type":"ContainerDied","Data":"ba1d3593d5c58b2dd9017dadeb5f02de33c7817bf8b802251a069af48ae2c5af"} Feb 19 18:49:13 crc 
kubenswrapper[4813]: I0219 18:49:13.992813 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d67cdfc8f-gtgh9" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.992841 4813 scope.go:117] "RemoveContainer" containerID="a5e4584d98c8cc01e0ce8d0e934da101b0f96b7f4bdd4147567b87533d59dd47" Feb 19 18:49:13 crc kubenswrapper[4813]: I0219 18:49:13.997641 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7294ec3-b62c-46d5-845e-6e3463859e0f","Type":"ContainerStarted","Data":"0b17e83d812216bbc39cef9ed416f813920df9ed20c079517126c4340ac08b29"} Feb 19 18:49:14 crc kubenswrapper[4813]: I0219 18:49:14.007497 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dccc895-gzmtt" event={"ID":"85fc318c-5591-4bb8-a92f-04b0f34884e7","Type":"ContainerStarted","Data":"cc1a7985b68dabdd2ade8c388623c6d6b868f6f8491a4f17c04a3cbef2736d1f"} Feb 19 18:49:14 crc kubenswrapper[4813]: I0219 18:49:14.007556 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:14 crc kubenswrapper[4813]: I0219 18:49:14.027532 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6da8fcfe-c22a-46a7-af7d-d02dd947d168","Type":"ContainerStarted","Data":"dc0bd8f8507e377eb4183dd220976f678b5ca302e99d949f51f53faa6cdaf6f9"} Feb 19 18:49:14 crc kubenswrapper[4813]: I0219 18:49:14.036136 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67dccc895-gzmtt" podStartSLOduration=4.036119256 podStartE2EDuration="4.036119256s" podCreationTimestamp="2026-02-19 18:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:49:14.023556348 +0000 UTC m=+1173.248996889" watchObservedRunningTime="2026-02-19 
18:49:14.036119256 +0000 UTC m=+1173.261559797" Feb 19 18:49:14 crc kubenswrapper[4813]: I0219 18:49:14.075046 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d67cdfc8f-gtgh9"] Feb 19 18:49:14 crc kubenswrapper[4813]: I0219 18:49:14.082365 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d67cdfc8f-gtgh9"] Feb 19 18:49:15 crc kubenswrapper[4813]: I0219 18:49:15.042670 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7294ec3-b62c-46d5-845e-6e3463859e0f","Type":"ContainerStarted","Data":"55b851aeefbf994746b7a3b70511a3a220a45e985c52fafdf345be138f045476"} Feb 19 18:49:15 crc kubenswrapper[4813]: I0219 18:49:15.042743 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a7294ec3-b62c-46d5-845e-6e3463859e0f" containerName="glance-log" containerID="cri-o://0b17e83d812216bbc39cef9ed416f813920df9ed20c079517126c4340ac08b29" gracePeriod=30 Feb 19 18:49:15 crc kubenswrapper[4813]: I0219 18:49:15.042787 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a7294ec3-b62c-46d5-845e-6e3463859e0f" containerName="glance-httpd" containerID="cri-o://55b851aeefbf994746b7a3b70511a3a220a45e985c52fafdf345be138f045476" gracePeriod=30 Feb 19 18:49:15 crc kubenswrapper[4813]: I0219 18:49:15.050500 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6da8fcfe-c22a-46a7-af7d-d02dd947d168","Type":"ContainerStarted","Data":"0a60a78d261cb04b1b012038371aea558591b23cdd5f65064861e1c3648a707a"} Feb 19 18:49:15 crc kubenswrapper[4813]: I0219 18:49:15.050874 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6da8fcfe-c22a-46a7-af7d-d02dd947d168" containerName="glance-log" 
containerID="cri-o://dc0bd8f8507e377eb4183dd220976f678b5ca302e99d949f51f53faa6cdaf6f9" gracePeriod=30 Feb 19 18:49:15 crc kubenswrapper[4813]: I0219 18:49:15.050916 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6da8fcfe-c22a-46a7-af7d-d02dd947d168" containerName="glance-httpd" containerID="cri-o://0a60a78d261cb04b1b012038371aea558591b23cdd5f65064861e1c3648a707a" gracePeriod=30 Feb 19 18:49:15 crc kubenswrapper[4813]: I0219 18:49:15.069745 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.069718428 podStartE2EDuration="5.069718428s" podCreationTimestamp="2026-02-19 18:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:49:15.066036334 +0000 UTC m=+1174.291476885" watchObservedRunningTime="2026-02-19 18:49:15.069718428 +0000 UTC m=+1174.295158959" Feb 19 18:49:15 crc kubenswrapper[4813]: I0219 18:49:15.484728 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32e6bfc4-62d0-4597-ad22-f88689246209" path="/var/lib/kubelet/pods/32e6bfc4-62d0-4597-ad22-f88689246209/volumes" Feb 19 18:49:16 crc kubenswrapper[4813]: I0219 18:49:16.062148 4813 generic.go:334] "Generic (PLEG): container finished" podID="6da8fcfe-c22a-46a7-af7d-d02dd947d168" containerID="0a60a78d261cb04b1b012038371aea558591b23cdd5f65064861e1c3648a707a" exitCode=0 Feb 19 18:49:16 crc kubenswrapper[4813]: I0219 18:49:16.062410 4813 generic.go:334] "Generic (PLEG): container finished" podID="6da8fcfe-c22a-46a7-af7d-d02dd947d168" containerID="dc0bd8f8507e377eb4183dd220976f678b5ca302e99d949f51f53faa6cdaf6f9" exitCode=143 Feb 19 18:49:16 crc kubenswrapper[4813]: I0219 18:49:16.062288 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"6da8fcfe-c22a-46a7-af7d-d02dd947d168","Type":"ContainerDied","Data":"0a60a78d261cb04b1b012038371aea558591b23cdd5f65064861e1c3648a707a"} Feb 19 18:49:16 crc kubenswrapper[4813]: I0219 18:49:16.062480 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6da8fcfe-c22a-46a7-af7d-d02dd947d168","Type":"ContainerDied","Data":"dc0bd8f8507e377eb4183dd220976f678b5ca302e99d949f51f53faa6cdaf6f9"} Feb 19 18:49:16 crc kubenswrapper[4813]: I0219 18:49:16.065425 4813 generic.go:334] "Generic (PLEG): container finished" podID="a7294ec3-b62c-46d5-845e-6e3463859e0f" containerID="55b851aeefbf994746b7a3b70511a3a220a45e985c52fafdf345be138f045476" exitCode=0 Feb 19 18:49:16 crc kubenswrapper[4813]: I0219 18:49:16.065442 4813 generic.go:334] "Generic (PLEG): container finished" podID="a7294ec3-b62c-46d5-845e-6e3463859e0f" containerID="0b17e83d812216bbc39cef9ed416f813920df9ed20c079517126c4340ac08b29" exitCode=143 Feb 19 18:49:16 crc kubenswrapper[4813]: I0219 18:49:16.065481 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7294ec3-b62c-46d5-845e-6e3463859e0f","Type":"ContainerDied","Data":"55b851aeefbf994746b7a3b70511a3a220a45e985c52fafdf345be138f045476"} Feb 19 18:49:16 crc kubenswrapper[4813]: I0219 18:49:16.065506 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7294ec3-b62c-46d5-845e-6e3463859e0f","Type":"ContainerDied","Data":"0b17e83d812216bbc39cef9ed416f813920df9ed20c079517126c4340ac08b29"} Feb 19 18:49:16 crc kubenswrapper[4813]: I0219 18:49:16.067301 4813 generic.go:334] "Generic (PLEG): container finished" podID="8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94" containerID="fbf2efd8df51468ee3111d03323da4ae9c317212db4f2d94ec596fcf42163d90" exitCode=0 Feb 19 18:49:16 crc kubenswrapper[4813]: I0219 18:49:16.067343 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-n2twj" event={"ID":"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94","Type":"ContainerDied","Data":"fbf2efd8df51468ee3111d03323da4ae9c317212db4f2d94ec596fcf42163d90"} Feb 19 18:49:16 crc kubenswrapper[4813]: I0219 18:49:16.088844 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.088825634 podStartE2EDuration="6.088825634s" podCreationTimestamp="2026-02-19 18:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:49:15.093213962 +0000 UTC m=+1174.318654503" watchObservedRunningTime="2026-02-19 18:49:16.088825634 +0000 UTC m=+1175.314266175" Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.259295 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.298148 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-config-data\") pod \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.298191 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-combined-ca-bundle\") pod \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.298214 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-credential-keys\") pod \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\" (UID: 
\"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.298256 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxrbn\" (UniqueName: \"kubernetes.io/projected/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-kube-api-access-mxrbn\") pod \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.298296 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-fernet-keys\") pod \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.298320 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-scripts\") pod \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\" (UID: \"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94\") " Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.305377 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-scripts" (OuterVolumeSpecName: "scripts") pod "8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94" (UID: "8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.306003 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94" (UID: "8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.307307 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-kube-api-access-mxrbn" (OuterVolumeSpecName: "kube-api-access-mxrbn") pod "8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94" (UID: "8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94"). InnerVolumeSpecName "kube-api-access-mxrbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.310179 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94" (UID: "8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.339275 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-config-data" (OuterVolumeSpecName: "config-data") pod "8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94" (UID: "8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.380022 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94" (UID: "8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.405124 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.405160 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.405171 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.405183 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.405193 4813 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:18 crc kubenswrapper[4813]: I0219 18:49:18.405203 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxrbn\" (UniqueName: \"kubernetes.io/projected/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94-kube-api-access-mxrbn\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:18 crc kubenswrapper[4813]: E0219 18:49:18.853698 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69ff3db_8806_451a_9df0_c6289c327579.slice/crio-b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69ff3db_8806_451a_9df0_c6289c327579.slice/crio-conmon-b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168.scope\": RecentStats: unable to find data in memory cache]" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.101652 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n2twj" event={"ID":"8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94","Type":"ContainerDied","Data":"f36a43ec1ed98f0f35c14335002ed4e7830736935198936a699bcc1ffc7dd4d9"} Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.101700 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f36a43ec1ed98f0f35c14335002ed4e7830736935198936a699bcc1ffc7dd4d9" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.101781 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n2twj" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.360875 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-n2twj"] Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.368362 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-n2twj"] Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.447389 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8h8fd"] Feb 19 18:49:19 crc kubenswrapper[4813]: E0219 18:49:19.448675 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32e6bfc4-62d0-4597-ad22-f88689246209" containerName="init" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.448731 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="32e6bfc4-62d0-4597-ad22-f88689246209" containerName="init" Feb 19 18:49:19 crc kubenswrapper[4813]: E0219 18:49:19.448803 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94" containerName="keystone-bootstrap" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.448815 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94" containerName="keystone-bootstrap" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.449251 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="32e6bfc4-62d0-4597-ad22-f88689246209" containerName="init" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.449267 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94" containerName="keystone-bootstrap" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.450102 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.453742 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.454009 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.454981 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4lsm2" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.455148 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.457386 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.505558 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94" path="/var/lib/kubelet/pods/8e02ffdd-f7c8-4b4e-8b10-b0406efe3f94/volumes" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.506544 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8h8fd"] Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.527819 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-fernet-keys\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.527941 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-combined-ca-bundle\") pod \"keystone-bootstrap-8h8fd\" (UID: 
\"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.528692 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-credential-keys\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.528729 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-scripts\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.528913 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-config-data\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.529112 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m7q9\" (UniqueName: \"kubernetes.io/projected/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-kube-api-access-4m7q9\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.631490 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-scripts\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " 
pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.631573 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-config-data\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.631641 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m7q9\" (UniqueName: \"kubernetes.io/projected/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-kube-api-access-4m7q9\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.631708 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-fernet-keys\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.631749 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-combined-ca-bundle\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.631766 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-credential-keys\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 
18:49:19.636669 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-credential-keys\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.637876 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-scripts\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.640872 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-config-data\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.647719 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-combined-ca-bundle\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.651777 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-fernet-keys\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.654791 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m7q9\" (UniqueName: 
\"kubernetes.io/projected/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-kube-api-access-4m7q9\") pod \"keystone-bootstrap-8h8fd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:19 crc kubenswrapper[4813]: I0219 18:49:19.774293 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:21 crc kubenswrapper[4813]: I0219 18:49:21.039746 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:21 crc kubenswrapper[4813]: I0219 18:49:21.115860 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68677f88c9-7zxq4"] Feb 19 18:49:21 crc kubenswrapper[4813]: I0219 18:49:21.116713 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" podUID="035014cb-5042-4867-9d3c-4f0061a8e4ac" containerName="dnsmasq-dns" containerID="cri-o://fde1aa8af6c94563e7bbf5b2047c9cd5fd53fa9284f8c06fc650f7cbe4400b9e" gracePeriod=10 Feb 19 18:49:22 crc kubenswrapper[4813]: I0219 18:49:22.130885 4813 generic.go:334] "Generic (PLEG): container finished" podID="035014cb-5042-4867-9d3c-4f0061a8e4ac" containerID="fde1aa8af6c94563e7bbf5b2047c9cd5fd53fa9284f8c06fc650f7cbe4400b9e" exitCode=0 Feb 19 18:49:22 crc kubenswrapper[4813]: I0219 18:49:22.130931 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" event={"ID":"035014cb-5042-4867-9d3c-4f0061a8e4ac","Type":"ContainerDied","Data":"fde1aa8af6c94563e7bbf5b2047c9cd5fd53fa9284f8c06fc650f7cbe4400b9e"} Feb 19 18:49:22 crc kubenswrapper[4813]: I0219 18:49:22.214884 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" podUID="035014cb-5042-4867-9d3c-4f0061a8e4ac" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: connect: connection refused" Feb 19 
18:49:27 crc kubenswrapper[4813]: I0219 18:49:27.214375 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" podUID="035014cb-5042-4867-9d3c-4f0061a8e4ac" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: connect: connection refused" Feb 19 18:49:29 crc kubenswrapper[4813]: E0219 18:49:29.081685 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69ff3db_8806_451a_9df0_c6289c327579.slice/crio-b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69ff3db_8806_451a_9df0_c6289c327579.slice/crio-conmon-b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168.scope\": RecentStats: unable to find data in memory cache]" Feb 19 18:49:30 crc kubenswrapper[4813]: I0219 18:49:30.329886 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:49:30 crc kubenswrapper[4813]: I0219 18:49:30.331022 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:49:32 crc kubenswrapper[4813]: E0219 18:49:32.370237 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:a5f8855b2ed00a661ac827cc3908e540ed2327354ac5a1d39491f4507237b4ec" Feb 19 18:49:32 crc kubenswrapper[4813]: E0219 18:49:32.371022 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:a5f8855b2ed00a661ac827cc3908e540ed2327354ac5a1d39491f4507237b4ec,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qjglc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
barbican-db-sync-9s5kt_openstack(d622858a-0915-43b1-9169-8f176f0b16f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 18:49:32 crc kubenswrapper[4813]: E0219 18:49:32.372485 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-9s5kt" podUID="d622858a-0915-43b1-9169-8f176f0b16f0" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.492783 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.499472 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.592844 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-internal-tls-certs\") pod \"a7294ec3-b62c-46d5-845e-6e3463859e0f\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.592937 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-combined-ca-bundle\") pod \"a7294ec3-b62c-46d5-845e-6e3463859e0f\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.593018 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7294ec3-b62c-46d5-845e-6e3463859e0f-logs\") pod \"a7294ec3-b62c-46d5-845e-6e3463859e0f\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " Feb 19 18:49:32 crc 
kubenswrapper[4813]: I0219 18:49:32.593059 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qr8d\" (UniqueName: \"kubernetes.io/projected/a7294ec3-b62c-46d5-845e-6e3463859e0f-kube-api-access-4qr8d\") pod \"a7294ec3-b62c-46d5-845e-6e3463859e0f\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.593097 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-combined-ca-bundle\") pod \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.593129 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2q4s\" (UniqueName: \"kubernetes.io/projected/6da8fcfe-c22a-46a7-af7d-d02dd947d168-kube-api-access-s2q4s\") pod \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.593157 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-config-data\") pod \"a7294ec3-b62c-46d5-845e-6e3463859e0f\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.593221 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6da8fcfe-c22a-46a7-af7d-d02dd947d168-logs\") pod \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.593246 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a7294ec3-b62c-46d5-845e-6e3463859e0f-httpd-run\") pod \"a7294ec3-b62c-46d5-845e-6e3463859e0f\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.593292 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.593319 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"a7294ec3-b62c-46d5-845e-6e3463859e0f\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.593365 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-public-tls-certs\") pod \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.593397 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6da8fcfe-c22a-46a7-af7d-d02dd947d168-httpd-run\") pod \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.593428 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-config-data\") pod \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.593798 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-scripts\") pod \"a7294ec3-b62c-46d5-845e-6e3463859e0f\" (UID: \"a7294ec3-b62c-46d5-845e-6e3463859e0f\") " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.593829 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-scripts\") pod \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\" (UID: \"6da8fcfe-c22a-46a7-af7d-d02dd947d168\") " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.594126 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7294ec3-b62c-46d5-845e-6e3463859e0f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a7294ec3-b62c-46d5-845e-6e3463859e0f" (UID: "a7294ec3-b62c-46d5-845e-6e3463859e0f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.594425 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7294ec3-b62c-46d5-845e-6e3463859e0f-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.594637 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7294ec3-b62c-46d5-845e-6e3463859e0f-logs" (OuterVolumeSpecName: "logs") pod "a7294ec3-b62c-46d5-845e-6e3463859e0f" (UID: "a7294ec3-b62c-46d5-845e-6e3463859e0f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.595584 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6da8fcfe-c22a-46a7-af7d-d02dd947d168-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6da8fcfe-c22a-46a7-af7d-d02dd947d168" (UID: "6da8fcfe-c22a-46a7-af7d-d02dd947d168"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.596014 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6da8fcfe-c22a-46a7-af7d-d02dd947d168-logs" (OuterVolumeSpecName: "logs") pod "6da8fcfe-c22a-46a7-af7d-d02dd947d168" (UID: "6da8fcfe-c22a-46a7-af7d-d02dd947d168"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.599272 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da8fcfe-c22a-46a7-af7d-d02dd947d168-kube-api-access-s2q4s" (OuterVolumeSpecName: "kube-api-access-s2q4s") pod "6da8fcfe-c22a-46a7-af7d-d02dd947d168" (UID: "6da8fcfe-c22a-46a7-af7d-d02dd947d168"). InnerVolumeSpecName "kube-api-access-s2q4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.599784 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-scripts" (OuterVolumeSpecName: "scripts") pod "a7294ec3-b62c-46d5-845e-6e3463859e0f" (UID: "a7294ec3-b62c-46d5-845e-6e3463859e0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.600765 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "a7294ec3-b62c-46d5-845e-6e3463859e0f" (UID: "a7294ec3-b62c-46d5-845e-6e3463859e0f"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.605181 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "6da8fcfe-c22a-46a7-af7d-d02dd947d168" (UID: "6da8fcfe-c22a-46a7-af7d-d02dd947d168"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.607315 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7294ec3-b62c-46d5-845e-6e3463859e0f-kube-api-access-4qr8d" (OuterVolumeSpecName: "kube-api-access-4qr8d") pod "a7294ec3-b62c-46d5-845e-6e3463859e0f" (UID: "a7294ec3-b62c-46d5-845e-6e3463859e0f"). InnerVolumeSpecName "kube-api-access-4qr8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.615053 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-scripts" (OuterVolumeSpecName: "scripts") pod "6da8fcfe-c22a-46a7-af7d-d02dd947d168" (UID: "6da8fcfe-c22a-46a7-af7d-d02dd947d168"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.630988 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6da8fcfe-c22a-46a7-af7d-d02dd947d168" (UID: "6da8fcfe-c22a-46a7-af7d-d02dd947d168"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.633059 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7294ec3-b62c-46d5-845e-6e3463859e0f" (UID: "a7294ec3-b62c-46d5-845e-6e3463859e0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.652540 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6da8fcfe-c22a-46a7-af7d-d02dd947d168" (UID: "6da8fcfe-c22a-46a7-af7d-d02dd947d168"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.656107 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-config-data" (OuterVolumeSpecName: "config-data") pod "6da8fcfe-c22a-46a7-af7d-d02dd947d168" (UID: "6da8fcfe-c22a-46a7-af7d-d02dd947d168"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.657770 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a7294ec3-b62c-46d5-845e-6e3463859e0f" (UID: "a7294ec3-b62c-46d5-845e-6e3463859e0f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.659151 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-config-data" (OuterVolumeSpecName: "config-data") pod "a7294ec3-b62c-46d5-845e-6e3463859e0f" (UID: "a7294ec3-b62c-46d5-845e-6e3463859e0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.695871 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6da8fcfe-c22a-46a7-af7d-d02dd947d168-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.696466 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.696536 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.696552 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.696592 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6da8fcfe-c22a-46a7-af7d-d02dd947d168-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.696606 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-config-data\") on node 
\"crc\" DevicePath \"\"" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.696619 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.696632 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.696651 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.696664 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.696674 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7294ec3-b62c-46d5-845e-6e3463859e0f-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.696687 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qr8d\" (UniqueName: \"kubernetes.io/projected/a7294ec3-b62c-46d5-845e-6e3463859e0f-kube-api-access-4qr8d\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.696700 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da8fcfe-c22a-46a7-af7d-d02dd947d168-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.696747 4813 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-s2q4s\" (UniqueName: \"kubernetes.io/projected/6da8fcfe-c22a-46a7-af7d-d02dd947d168-kube-api-access-s2q4s\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.696766 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7294ec3-b62c-46d5-845e-6e3463859e0f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.712308 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.716411 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.798454 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:32 crc kubenswrapper[4813]: I0219 18:49:32.798487 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.240891 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6da8fcfe-c22a-46a7-af7d-d02dd947d168","Type":"ContainerDied","Data":"ac30b5e58e5de4eb0552c17e9a3a0e993bd0872bdfe36b779b6cd3abb1a69c2c"} Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.240943 4813 scope.go:117] "RemoveContainer" containerID="0a60a78d261cb04b1b012038371aea558591b23cdd5f65064861e1c3648a707a" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.241122 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.248767 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.248780 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7294ec3-b62c-46d5-845e-6e3463859e0f","Type":"ContainerDied","Data":"fe3ba7035b06d3f7343e51f366947bf4e7eb1c4d3b584c8e721f30c05367b241"} Feb 19 18:49:33 crc kubenswrapper[4813]: E0219 18:49:33.250445 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:a5f8855b2ed00a661ac827cc3908e540ed2327354ac5a1d39491f4507237b4ec\\\"\"" pod="openstack/barbican-db-sync-9s5kt" podUID="d622858a-0915-43b1-9169-8f176f0b16f0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.316769 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.355602 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.386284 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.402248 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.412981 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:49:33 crc kubenswrapper[4813]: E0219 18:49:33.413386 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6da8fcfe-c22a-46a7-af7d-d02dd947d168" containerName="glance-httpd" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.413406 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da8fcfe-c22a-46a7-af7d-d02dd947d168" containerName="glance-httpd" Feb 19 18:49:33 crc kubenswrapper[4813]: E0219 18:49:33.413416 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7294ec3-b62c-46d5-845e-6e3463859e0f" containerName="glance-log" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.413422 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7294ec3-b62c-46d5-845e-6e3463859e0f" containerName="glance-log" Feb 19 18:49:33 crc kubenswrapper[4813]: E0219 18:49:33.413438 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7294ec3-b62c-46d5-845e-6e3463859e0f" containerName="glance-httpd" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.413445 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7294ec3-b62c-46d5-845e-6e3463859e0f" containerName="glance-httpd" Feb 19 18:49:33 crc kubenswrapper[4813]: E0219 18:49:33.413457 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da8fcfe-c22a-46a7-af7d-d02dd947d168" containerName="glance-log" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.413462 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da8fcfe-c22a-46a7-af7d-d02dd947d168" containerName="glance-log" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.413629 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7294ec3-b62c-46d5-845e-6e3463859e0f" containerName="glance-httpd" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.413644 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6da8fcfe-c22a-46a7-af7d-d02dd947d168" containerName="glance-log" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.413758 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6da8fcfe-c22a-46a7-af7d-d02dd947d168" 
containerName="glance-httpd" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.413768 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7294ec3-b62c-46d5-845e-6e3463859e0f" containerName="glance-log" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.414604 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.416279 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.416781 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.417293 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.417414 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d6cgg" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.423884 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.425311 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.427129 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.427426 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.435441 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.446794 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.483149 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6da8fcfe-c22a-46a7-af7d-d02dd947d168" path="/var/lib/kubelet/pods/6da8fcfe-c22a-46a7-af7d-d02dd947d168/volumes" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.483796 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7294ec3-b62c-46d5-845e-6e3463859e0f" path="/var/lib/kubelet/pods/a7294ec3-b62c-46d5-845e-6e3463859e0f/volumes" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.516651 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c68d02b-5a16-4663-b109-265ae29b311b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.516751 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.516788 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.516904 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pwk7\" (UniqueName: \"kubernetes.io/projected/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-kube-api-access-9pwk7\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.516983 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.517061 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.517282 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.517401 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4j8z\" (UniqueName: \"kubernetes.io/projected/6c68d02b-5a16-4663-b109-265ae29b311b-kube-api-access-m4j8z\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.517557 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.517633 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.517761 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-scripts\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.517806 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.517888 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.518025 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-logs\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.518094 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c68d02b-5a16-4663-b109-265ae29b311b-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.518177 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-config-data\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.529133 4813 scope.go:117] "RemoveContainer" containerID="dc0bd8f8507e377eb4183dd220976f678b5ca302e99d949f51f53faa6cdaf6f9" Feb 
19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.620265 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pwk7\" (UniqueName: \"kubernetes.io/projected/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-kube-api-access-9pwk7\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.620690 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.620754 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.620806 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.620842 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4j8z\" (UniqueName: \"kubernetes.io/projected/6c68d02b-5a16-4663-b109-265ae29b311b-kube-api-access-m4j8z\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc 
kubenswrapper[4813]: I0219 18:49:33.620910 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.620938 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.621006 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-scripts\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.621040 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.621072 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.621105 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-logs\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.621131 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c68d02b-5a16-4663-b109-265ae29b311b-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.621161 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-config-data\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.621188 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c68d02b-5a16-4663-b109-265ae29b311b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.621222 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.621243 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.623348 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c68d02b-5a16-4663-b109-265ae29b311b-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.623553 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.627384 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.627477 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.633427 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.634625 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-logs\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: E0219 18:49:33.637148 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" Feb 19 18:49:33 crc kubenswrapper[4813]: E0219 18:49:33.637316 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwdj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-nnwfh_openstack(a87080d3-007c-48e0-aa89-b82c5d9dafab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 18:49:33 crc kubenswrapper[4813]: E0219 18:49:33.639141 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-nnwfh" podUID="a87080d3-007c-48e0-aa89-b82c5d9dafab" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.641993 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c68d02b-5a16-4663-b109-265ae29b311b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.647928 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.648852 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.648887 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-scripts\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.651175 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.658557 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-config-data\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.661266 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.663884 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.672846 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") 
pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.673538 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4j8z\" (UniqueName: \"kubernetes.io/projected/6c68d02b-5a16-4663-b109-265ae29b311b-kube-api-access-m4j8z\") pod \"glance-default-internal-api-0\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.688094 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pwk7\" (UniqueName: \"kubernetes.io/projected/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-kube-api-access-9pwk7\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.705159 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.742271 4813 scope.go:117] "RemoveContainer" containerID="55b851aeefbf994746b7a3b70511a3a220a45e985c52fafdf345be138f045476" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.742499 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.748534 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.804769 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.831802 4813 scope.go:117] "RemoveContainer" containerID="0b17e83d812216bbc39cef9ed416f813920df9ed20c079517126c4340ac08b29" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.932619 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-dns-swift-storage-0\") pod \"035014cb-5042-4867-9d3c-4f0061a8e4ac\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.933021 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-ovsdbserver-nb\") pod \"035014cb-5042-4867-9d3c-4f0061a8e4ac\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.933080 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8scks\" (UniqueName: \"kubernetes.io/projected/035014cb-5042-4867-9d3c-4f0061a8e4ac-kube-api-access-8scks\") pod \"035014cb-5042-4867-9d3c-4f0061a8e4ac\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.933114 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-ovsdbserver-sb\") pod \"035014cb-5042-4867-9d3c-4f0061a8e4ac\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.933162 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-config\") pod \"035014cb-5042-4867-9d3c-4f0061a8e4ac\" (UID: 
\"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.933213 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-dns-svc\") pod \"035014cb-5042-4867-9d3c-4f0061a8e4ac\" (UID: \"035014cb-5042-4867-9d3c-4f0061a8e4ac\") " Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.939611 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035014cb-5042-4867-9d3c-4f0061a8e4ac-kube-api-access-8scks" (OuterVolumeSpecName: "kube-api-access-8scks") pod "035014cb-5042-4867-9d3c-4f0061a8e4ac" (UID: "035014cb-5042-4867-9d3c-4f0061a8e4ac"). InnerVolumeSpecName "kube-api-access-8scks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.980140 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-config" (OuterVolumeSpecName: "config") pod "035014cb-5042-4867-9d3c-4f0061a8e4ac" (UID: "035014cb-5042-4867-9d3c-4f0061a8e4ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.981869 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "035014cb-5042-4867-9d3c-4f0061a8e4ac" (UID: "035014cb-5042-4867-9d3c-4f0061a8e4ac"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.985622 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "035014cb-5042-4867-9d3c-4f0061a8e4ac" (UID: "035014cb-5042-4867-9d3c-4f0061a8e4ac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.987034 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "035014cb-5042-4867-9d3c-4f0061a8e4ac" (UID: "035014cb-5042-4867-9d3c-4f0061a8e4ac"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:33 crc kubenswrapper[4813]: I0219 18:49:33.994911 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "035014cb-5042-4867-9d3c-4f0061a8e4ac" (UID: "035014cb-5042-4867-9d3c-4f0061a8e4ac"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.016476 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8h8fd"] Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.035447 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.035481 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.035490 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8scks\" (UniqueName: \"kubernetes.io/projected/035014cb-5042-4867-9d3c-4f0061a8e4ac-kube-api-access-8scks\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.035500 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.035510 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.035521 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/035014cb-5042-4867-9d3c-4f0061a8e4ac-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.263102 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8h8fd" 
event={"ID":"0a45ccb1-adb5-432d-a40b-89c7f8412cfd","Type":"ContainerStarted","Data":"869846648a5af54c9d2dfac6cf01db1cb4983ff40d5a072e82a3effcde006db0"} Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.263389 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8h8fd" event={"ID":"0a45ccb1-adb5-432d-a40b-89c7f8412cfd","Type":"ContainerStarted","Data":"3d70b9795b6af0ac1e83ed05ff776cecf9d8a9cce719ca1761a2ffe72edfe24f"} Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.265874 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" event={"ID":"035014cb-5042-4867-9d3c-4f0061a8e4ac","Type":"ContainerDied","Data":"c39c977fc2821c89b2103aabc7b5163fa323d7231ee51580f9f7a54dd48d1580"} Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.265918 4813 scope.go:117] "RemoveContainer" containerID="fde1aa8af6c94563e7bbf5b2047c9cd5fd53fa9284f8c06fc650f7cbe4400b9e" Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.266017 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.277254 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4446e6ed-7663-41ab-9fae-f6da8f4f5449","Type":"ContainerStarted","Data":"4fa144b488fed73c21844ad14a76893ad553ff87da4e836d63e03c5203e0a173"} Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.278524 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lc2jp" event={"ID":"27d0219c-fd97-4271-9059-0903aae52f65","Type":"ContainerStarted","Data":"b4a6d7a30ef50dd71bcd5afec2db98656e12a3087aa45f96cbbac4ff6cfc9b5f"} Feb 19 18:49:34 crc kubenswrapper[4813]: E0219 18:49:34.280048 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b\\\"\"" pod="openstack/cinder-db-sync-nnwfh" podUID="a87080d3-007c-48e0-aa89-b82c5d9dafab" Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.291347 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8h8fd" podStartSLOduration=15.291332447 podStartE2EDuration="15.291332447s" podCreationTimestamp="2026-02-19 18:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:49:34.283853066 +0000 UTC m=+1193.509293607" watchObservedRunningTime="2026-02-19 18:49:34.291332447 +0000 UTC m=+1193.516772988" Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.308727 4813 scope.go:117] "RemoveContainer" containerID="d2bd0f44e36295036138de2c4c1a46d29273cc11baa5673f60270297d992f673" Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.323759 4813 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/placement-db-sync-lc2jp" podStartSLOduration=2.572148903 podStartE2EDuration="24.323729006s" podCreationTimestamp="2026-02-19 18:49:10 +0000 UTC" firstStartedPulling="2026-02-19 18:49:11.757915365 +0000 UTC m=+1170.983355906" lastFinishedPulling="2026-02-19 18:49:33.509495468 +0000 UTC m=+1192.734936009" observedRunningTime="2026-02-19 18:49:34.309165066 +0000 UTC m=+1193.534605627" watchObservedRunningTime="2026-02-19 18:49:34.323729006 +0000 UTC m=+1193.549169547" Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.365077 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.373904 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68677f88c9-7zxq4"] Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.383202 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68677f88c9-7zxq4"] Feb 19 18:49:34 crc kubenswrapper[4813]: I0219 18:49:34.427795 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:49:35 crc kubenswrapper[4813]: I0219 18:49:35.291777 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d67b04b-756d-4c0d-93bf-ce2766c48cd9","Type":"ContainerStarted","Data":"5e62d4e283cff7634e214723697d9b2fcddf021bab89ee818599c42f7ab5f8c4"} Feb 19 18:49:35 crc kubenswrapper[4813]: I0219 18:49:35.292191 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d67b04b-756d-4c0d-93bf-ce2766c48cd9","Type":"ContainerStarted","Data":"3eade5c848599515406bd7226bcbbfa05218e5ec8365f17338ecf20972b9bc3c"} Feb 19 18:49:35 crc kubenswrapper[4813]: I0219 18:49:35.293623 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"6c68d02b-5a16-4663-b109-265ae29b311b","Type":"ContainerStarted","Data":"128dac3b620e663ac00e57ee6e601177e3f89b5b91cf0c39b2978ce209425fab"} Feb 19 18:49:35 crc kubenswrapper[4813]: I0219 18:49:35.293665 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c68d02b-5a16-4663-b109-265ae29b311b","Type":"ContainerStarted","Data":"5f03789ae2df4f55fc6133f731f3188e9d4921b3b477f5a0541d4388d60c2d52"} Feb 19 18:49:35 crc kubenswrapper[4813]: I0219 18:49:35.484795 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="035014cb-5042-4867-9d3c-4f0061a8e4ac" path="/var/lib/kubelet/pods/035014cb-5042-4867-9d3c-4f0061a8e4ac/volumes" Feb 19 18:49:36 crc kubenswrapper[4813]: I0219 18:49:36.304389 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4446e6ed-7663-41ab-9fae-f6da8f4f5449","Type":"ContainerStarted","Data":"b90427982961bec5baa8d0a5adb7ef61ac4a4b9f69504de632565a8d4fb1746f"} Feb 19 18:49:36 crc kubenswrapper[4813]: I0219 18:49:36.306685 4813 generic.go:334] "Generic (PLEG): container finished" podID="27d0219c-fd97-4271-9059-0903aae52f65" containerID="b4a6d7a30ef50dd71bcd5afec2db98656e12a3087aa45f96cbbac4ff6cfc9b5f" exitCode=0 Feb 19 18:49:36 crc kubenswrapper[4813]: I0219 18:49:36.306756 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lc2jp" event={"ID":"27d0219c-fd97-4271-9059-0903aae52f65","Type":"ContainerDied","Data":"b4a6d7a30ef50dd71bcd5afec2db98656e12a3087aa45f96cbbac4ff6cfc9b5f"} Feb 19 18:49:36 crc kubenswrapper[4813]: I0219 18:49:36.309250 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d67b04b-756d-4c0d-93bf-ce2766c48cd9","Type":"ContainerStarted","Data":"352bd4fd06c385c9f5e35f72e77478143c55653358a40bf88d2506b71b54a63d"} Feb 19 18:49:36 crc kubenswrapper[4813]: I0219 18:49:36.310603 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c68d02b-5a16-4663-b109-265ae29b311b","Type":"ContainerStarted","Data":"8e12b692f541ff3e34487b31b7479a4414bdcfbee0b1c3a4dfc4db8b7258f44e"} Feb 19 18:49:36 crc kubenswrapper[4813]: I0219 18:49:36.356155 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.356123897 podStartE2EDuration="3.356123897s" podCreationTimestamp="2026-02-19 18:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:49:36.349001117 +0000 UTC m=+1195.574441678" watchObservedRunningTime="2026-02-19 18:49:36.356123897 +0000 UTC m=+1195.581564468" Feb 19 18:49:36 crc kubenswrapper[4813]: I0219 18:49:36.378780 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.378761845 podStartE2EDuration="3.378761845s" podCreationTimestamp="2026-02-19 18:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:49:36.376537146 +0000 UTC m=+1195.601977727" watchObservedRunningTime="2026-02-19 18:49:36.378761845 +0000 UTC m=+1195.604202386" Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.215194 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68677f88c9-7zxq4" podUID="035014cb-5042-4867-9d3c-4f0061a8e4ac" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.682274 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.817437 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27d0219c-fd97-4271-9059-0903aae52f65-logs\") pod \"27d0219c-fd97-4271-9059-0903aae52f65\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.817573 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-combined-ca-bundle\") pod \"27d0219c-fd97-4271-9059-0903aae52f65\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.817613 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-config-data\") pod \"27d0219c-fd97-4271-9059-0903aae52f65\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.817687 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-scripts\") pod \"27d0219c-fd97-4271-9059-0903aae52f65\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.817787 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8w4\" (UniqueName: \"kubernetes.io/projected/27d0219c-fd97-4271-9059-0903aae52f65-kube-api-access-ws8w4\") pod \"27d0219c-fd97-4271-9059-0903aae52f65\" (UID: \"27d0219c-fd97-4271-9059-0903aae52f65\") " Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.817796 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/27d0219c-fd97-4271-9059-0903aae52f65-logs" (OuterVolumeSpecName: "logs") pod "27d0219c-fd97-4271-9059-0903aae52f65" (UID: "27d0219c-fd97-4271-9059-0903aae52f65"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.818149 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27d0219c-fd97-4271-9059-0903aae52f65-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.824237 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27d0219c-fd97-4271-9059-0903aae52f65-kube-api-access-ws8w4" (OuterVolumeSpecName: "kube-api-access-ws8w4") pod "27d0219c-fd97-4271-9059-0903aae52f65" (UID: "27d0219c-fd97-4271-9059-0903aae52f65"). InnerVolumeSpecName "kube-api-access-ws8w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.836613 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-scripts" (OuterVolumeSpecName: "scripts") pod "27d0219c-fd97-4271-9059-0903aae52f65" (UID: "27d0219c-fd97-4271-9059-0903aae52f65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.850524 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27d0219c-fd97-4271-9059-0903aae52f65" (UID: "27d0219c-fd97-4271-9059-0903aae52f65"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.875678 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-config-data" (OuterVolumeSpecName: "config-data") pod "27d0219c-fd97-4271-9059-0903aae52f65" (UID: "27d0219c-fd97-4271-9059-0903aae52f65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.920665 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.920724 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.920739 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws8w4\" (UniqueName: \"kubernetes.io/projected/27d0219c-fd97-4271-9059-0903aae52f65-kube-api-access-ws8w4\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:37 crc kubenswrapper[4813]: I0219 18:49:37.920753 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27d0219c-fd97-4271-9059-0903aae52f65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.332988 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lc2jp" event={"ID":"27d0219c-fd97-4271-9059-0903aae52f65","Type":"ContainerDied","Data":"90ecf777e2c00afc46bc04b7c0b1b6d8b154239816f33f897dd45c5b0f6d2018"} Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.333405 4813 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="90ecf777e2c00afc46bc04b7c0b1b6d8b154239816f33f897dd45c5b0f6d2018" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.333057 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lc2jp" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.339708 4813 generic.go:334] "Generic (PLEG): container finished" podID="0a45ccb1-adb5-432d-a40b-89c7f8412cfd" containerID="869846648a5af54c9d2dfac6cf01db1cb4983ff40d5a072e82a3effcde006db0" exitCode=0 Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.339762 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8h8fd" event={"ID":"0a45ccb1-adb5-432d-a40b-89c7f8412cfd","Type":"ContainerDied","Data":"869846648a5af54c9d2dfac6cf01db1cb4983ff40d5a072e82a3effcde006db0"} Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.427692 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5777547648-br5pd"] Feb 19 18:49:38 crc kubenswrapper[4813]: E0219 18:49:38.428837 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035014cb-5042-4867-9d3c-4f0061a8e4ac" containerName="init" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.428860 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="035014cb-5042-4867-9d3c-4f0061a8e4ac" containerName="init" Feb 19 18:49:38 crc kubenswrapper[4813]: E0219 18:49:38.428944 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035014cb-5042-4867-9d3c-4f0061a8e4ac" containerName="dnsmasq-dns" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.428973 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="035014cb-5042-4867-9d3c-4f0061a8e4ac" containerName="dnsmasq-dns" Feb 19 18:49:38 crc kubenswrapper[4813]: E0219 18:49:38.428995 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27d0219c-fd97-4271-9059-0903aae52f65" containerName="placement-db-sync" Feb 19 18:49:38 crc kubenswrapper[4813]: 
I0219 18:49:38.429005 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d0219c-fd97-4271-9059-0903aae52f65" containerName="placement-db-sync" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.429467 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="035014cb-5042-4867-9d3c-4f0061a8e4ac" containerName="dnsmasq-dns" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.429492 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="27d0219c-fd97-4271-9059-0903aae52f65" containerName="placement-db-sync" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.430646 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.432913 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.433444 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wqc9s" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.433780 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.435342 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.450417 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.468371 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5777547648-br5pd"] Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.530769 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-public-tls-certs\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.530980 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-combined-ca-bundle\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.531086 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-logs\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.531131 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-internal-tls-certs\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.531170 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-config-data\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.531216 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r4z5\" 
(UniqueName: \"kubernetes.io/projected/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-kube-api-access-8r4z5\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.531244 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-scripts\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.634167 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-combined-ca-bundle\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.634242 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-logs\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.634282 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-internal-tls-certs\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.634316 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-config-data\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.634340 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r4z5\" (UniqueName: \"kubernetes.io/projected/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-kube-api-access-8r4z5\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.634365 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-scripts\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.634459 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-public-tls-certs\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.635796 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-logs\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.639778 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-scripts\") pod \"placement-5777547648-br5pd\" (UID: 
\"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.639838 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-public-tls-certs\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.640156 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-config-data\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.640789 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-combined-ca-bundle\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.642460 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-internal-tls-certs\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.652509 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r4z5\" (UniqueName: \"kubernetes.io/projected/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-kube-api-access-8r4z5\") pod \"placement-5777547648-br5pd\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " pod="openstack/placement-5777547648-br5pd" Feb 19 
18:49:38 crc kubenswrapper[4813]: I0219 18:49:38.752015 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:39 crc kubenswrapper[4813]: E0219 18:49:39.309474 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69ff3db_8806_451a_9df0_c6289c327579.slice/crio-b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69ff3db_8806_451a_9df0_c6289c327579.slice/crio-conmon-b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fa56113_499a_490c_bba9_8676d4312e4e.slice/crio-conmon-b2e48289ecd4bad8376d6fcd570aeec36797339684b0340044cc436faa9f613e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fa56113_499a_490c_bba9_8676d4312e4e.slice/crio-b2e48289ecd4bad8376d6fcd570aeec36797339684b0340044cc436faa9f613e.scope\": RecentStats: unable to find data in memory cache]" Feb 19 18:49:39 crc kubenswrapper[4813]: I0219 18:49:39.351301 4813 generic.go:334] "Generic (PLEG): container finished" podID="6fa56113-499a-490c-bba9-8676d4312e4e" containerID="b2e48289ecd4bad8376d6fcd570aeec36797339684b0340044cc436faa9f613e" exitCode=0 Feb 19 18:49:39 crc kubenswrapper[4813]: I0219 18:49:39.352521 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fbp7p" event={"ID":"6fa56113-499a-490c-bba9-8676d4312e4e","Type":"ContainerDied","Data":"b2e48289ecd4bad8376d6fcd570aeec36797339684b0340044cc436faa9f613e"} Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.005665 4813 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.068914 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-config-data\") pod \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.069003 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-scripts\") pod \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.069360 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-fernet-keys\") pod \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.069493 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-combined-ca-bundle\") pod \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.069567 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m7q9\" (UniqueName: \"kubernetes.io/projected/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-kube-api-access-4m7q9\") pod \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.069601 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-credential-keys\") pod \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\" (UID: \"0a45ccb1-adb5-432d-a40b-89c7f8412cfd\") " Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.074631 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0a45ccb1-adb5-432d-a40b-89c7f8412cfd" (UID: "0a45ccb1-adb5-432d-a40b-89c7f8412cfd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.076002 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-scripts" (OuterVolumeSpecName: "scripts") pod "0a45ccb1-adb5-432d-a40b-89c7f8412cfd" (UID: "0a45ccb1-adb5-432d-a40b-89c7f8412cfd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.079530 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0a45ccb1-adb5-432d-a40b-89c7f8412cfd" (UID: "0a45ccb1-adb5-432d-a40b-89c7f8412cfd"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.079606 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-kube-api-access-4m7q9" (OuterVolumeSpecName: "kube-api-access-4m7q9") pod "0a45ccb1-adb5-432d-a40b-89c7f8412cfd" (UID: "0a45ccb1-adb5-432d-a40b-89c7f8412cfd"). InnerVolumeSpecName "kube-api-access-4m7q9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.107101 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-config-data" (OuterVolumeSpecName: "config-data") pod "0a45ccb1-adb5-432d-a40b-89c7f8412cfd" (UID: "0a45ccb1-adb5-432d-a40b-89c7f8412cfd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.108548 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a45ccb1-adb5-432d-a40b-89c7f8412cfd" (UID: "0a45ccb1-adb5-432d-a40b-89c7f8412cfd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.172381 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.172408 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.172418 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.172425 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:40 crc 
kubenswrapper[4813]: I0219 18:49:40.172434 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m7q9\" (UniqueName: \"kubernetes.io/projected/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-kube-api-access-4m7q9\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.172443 4813 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0a45ccb1-adb5-432d-a40b-89c7f8412cfd-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.370085 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8h8fd" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.370098 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8h8fd" event={"ID":"0a45ccb1-adb5-432d-a40b-89c7f8412cfd","Type":"ContainerDied","Data":"3d70b9795b6af0ac1e83ed05ff776cecf9d8a9cce719ca1761a2ffe72edfe24f"} Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.371431 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d70b9795b6af0ac1e83ed05ff776cecf9d8a9cce719ca1761a2ffe72edfe24f" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.374074 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4446e6ed-7663-41ab-9fae-f6da8f4f5449","Type":"ContainerStarted","Data":"f3d6fdd3c59e3e01f33197739decd04a0dac26668d2cd4b7085a00e81751dcdd"} Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.376877 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5777547648-br5pd"] Feb 19 18:49:40 crc kubenswrapper[4813]: W0219 18:49:40.385351 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf91fa4c5_f63e_4a98_be87_3d154cdc6db0.slice/crio-f46f23d585165b549d6ec7aa1351bc2940ccc533bbf83b5ce1490e8c79c0863e WatchSource:0}: Error finding container f46f23d585165b549d6ec7aa1351bc2940ccc533bbf83b5ce1490e8c79c0863e: Status 404 returned error can't find the container with id f46f23d585165b549d6ec7aa1351bc2940ccc533bbf83b5ce1490e8c79c0863e Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.458199 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5899f78d95-lmnxh"] Feb 19 18:49:40 crc kubenswrapper[4813]: E0219 18:49:40.458600 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a45ccb1-adb5-432d-a40b-89c7f8412cfd" containerName="keystone-bootstrap" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.458622 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a45ccb1-adb5-432d-a40b-89c7f8412cfd" containerName="keystone-bootstrap" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.458999 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a45ccb1-adb5-432d-a40b-89c7f8412cfd" containerName="keystone-bootstrap" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.459683 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.463050 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.463295 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.464027 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.464274 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4lsm2" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.464430 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.464630 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.483400 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5899f78d95-lmnxh"] Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.578374 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-credential-keys\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.578413 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-scripts\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " 
pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.578476 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-fernet-keys\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.578517 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-combined-ca-bundle\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.578903 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-config-data\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.579199 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlj9d\" (UniqueName: \"kubernetes.io/projected/79cdc675-a16c-4c18-bcef-d844d7a2f75d-kube-api-access-zlj9d\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.579227 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-internal-tls-certs\") pod \"keystone-5899f78d95-lmnxh\" (UID: 
\"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.579243 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-public-tls-certs\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.680168 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlj9d\" (UniqueName: \"kubernetes.io/projected/79cdc675-a16c-4c18-bcef-d844d7a2f75d-kube-api-access-zlj9d\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.680200 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-internal-tls-certs\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.680218 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-public-tls-certs\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.680260 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-credential-keys\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " 
pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.680283 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-scripts\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.680335 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-fernet-keys\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.680366 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-combined-ca-bundle\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.680381 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-config-data\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.680439 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fbp7p" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.684343 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-internal-tls-certs\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.684744 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-credential-keys\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.685049 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-fernet-keys\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.686113 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-config-data\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.686210 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-scripts\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.690795 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-combined-ca-bundle\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.691897 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-public-tls-certs\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.700754 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlj9d\" (UniqueName: \"kubernetes.io/projected/79cdc675-a16c-4c18-bcef-d844d7a2f75d-kube-api-access-zlj9d\") pod \"keystone-5899f78d95-lmnxh\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") " pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.781474 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6f5k\" (UniqueName: \"kubernetes.io/projected/6fa56113-499a-490c-bba9-8676d4312e4e-kube-api-access-v6f5k\") pod \"6fa56113-499a-490c-bba9-8676d4312e4e\" (UID: \"6fa56113-499a-490c-bba9-8676d4312e4e\") " Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.781540 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa56113-499a-490c-bba9-8676d4312e4e-combined-ca-bundle\") pod \"6fa56113-499a-490c-bba9-8676d4312e4e\" (UID: \"6fa56113-499a-490c-bba9-8676d4312e4e\") " Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.781565 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/6fa56113-499a-490c-bba9-8676d4312e4e-config\") pod \"6fa56113-499a-490c-bba9-8676d4312e4e\" (UID: \"6fa56113-499a-490c-bba9-8676d4312e4e\") " Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.785820 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fa56113-499a-490c-bba9-8676d4312e4e-kube-api-access-v6f5k" (OuterVolumeSpecName: "kube-api-access-v6f5k") pod "6fa56113-499a-490c-bba9-8676d4312e4e" (UID: "6fa56113-499a-490c-bba9-8676d4312e4e"). InnerVolumeSpecName "kube-api-access-v6f5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.793422 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.811230 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa56113-499a-490c-bba9-8676d4312e4e-config" (OuterVolumeSpecName: "config") pod "6fa56113-499a-490c-bba9-8676d4312e4e" (UID: "6fa56113-499a-490c-bba9-8676d4312e4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.823502 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa56113-499a-490c-bba9-8676d4312e4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fa56113-499a-490c-bba9-8676d4312e4e" (UID: "6fa56113-499a-490c-bba9-8676d4312e4e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.882886 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6f5k\" (UniqueName: \"kubernetes.io/projected/6fa56113-499a-490c-bba9-8676d4312e4e-kube-api-access-v6f5k\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.883173 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa56113-499a-490c-bba9-8676d4312e4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:40 crc kubenswrapper[4813]: I0219 18:49:40.883184 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6fa56113-499a-490c-bba9-8676d4312e4e-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.289385 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5899f78d95-lmnxh"] Feb 19 18:49:41 crc kubenswrapper[4813]: W0219 18:49:41.295979 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79cdc675_a16c_4c18_bcef_d844d7a2f75d.slice/crio-d52f3f1ee71bd9828d230464f75a582c40364abbbcb8418142451780240b5f77 WatchSource:0}: Error finding container d52f3f1ee71bd9828d230464f75a582c40364abbbcb8418142451780240b5f77: Status 404 returned error can't find the container with id d52f3f1ee71bd9828d230464f75a582c40364abbbcb8418142451780240b5f77 Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.386852 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fbp7p" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.388946 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fbp7p" event={"ID":"6fa56113-499a-490c-bba9-8676d4312e4e","Type":"ContainerDied","Data":"a6fc1a080c0a9e7c76b176b93f046a9dd936271ea8542bdb7f8c05aa35b81b94"} Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.389024 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6fc1a080c0a9e7c76b176b93f046a9dd936271ea8542bdb7f8c05aa35b81b94" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.400981 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5899f78d95-lmnxh" event={"ID":"79cdc675-a16c-4c18-bcef-d844d7a2f75d","Type":"ContainerStarted","Data":"d52f3f1ee71bd9828d230464f75a582c40364abbbcb8418142451780240b5f77"} Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.407462 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5777547648-br5pd" event={"ID":"f91fa4c5-f63e-4a98-be87-3d154cdc6db0","Type":"ContainerStarted","Data":"691381488d7f78058688fb4f5e70199f772f40bff52786b7ec81259c77c0b817"} Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.407509 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5777547648-br5pd" event={"ID":"f91fa4c5-f63e-4a98-be87-3d154cdc6db0","Type":"ContainerStarted","Data":"6924f9ac4ffac182d1d22d2801b9ecfb7d5714952dd528d4dc0ed31a85a2d612"} Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.407524 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5777547648-br5pd" event={"ID":"f91fa4c5-f63e-4a98-be87-3d154cdc6db0","Type":"ContainerStarted","Data":"f46f23d585165b549d6ec7aa1351bc2940ccc533bbf83b5ce1490e8c79c0863e"} Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.408119 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.408244 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5777547648-br5pd" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.637791 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5777547648-br5pd" podStartSLOduration=3.6377646219999997 podStartE2EDuration="3.637764622s" podCreationTimestamp="2026-02-19 18:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:49:41.555108293 +0000 UTC m=+1200.780548834" watchObservedRunningTime="2026-02-19 18:49:41.637764622 +0000 UTC m=+1200.863205163" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.744174 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-db5c97f8f-96mx5"] Feb 19 18:49:41 crc kubenswrapper[4813]: E0219 18:49:41.744728 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa56113-499a-490c-bba9-8676d4312e4e" containerName="neutron-db-sync" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.744746 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa56113-499a-490c-bba9-8676d4312e4e" containerName="neutron-db-sync" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.744995 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa56113-499a-490c-bba9-8676d4312e4e" containerName="neutron-db-sync" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.746158 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.753842 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-db5c97f8f-96mx5"] Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.833745 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66787bd68b-jd5l8"] Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.835051 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.838489 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.838589 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-69wj7" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.838838 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.839284 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.858253 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-config\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.858327 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-dns-svc\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " 
pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.858361 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-dns-swift-storage-0\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.858462 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-444fb\" (UniqueName: \"kubernetes.io/projected/f3273f47-8ad4-42ab-b905-a55e4e23400f-kube-api-access-444fb\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.858484 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-ovsdbserver-nb\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.858505 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-ovsdbserver-sb\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.866732 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66787bd68b-jd5l8"] Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.959904 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-444fb\" (UniqueName: \"kubernetes.io/projected/f3273f47-8ad4-42ab-b905-a55e4e23400f-kube-api-access-444fb\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.959986 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-ovsdbserver-nb\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.960011 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-ovsdbserver-sb\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.960040 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-combined-ca-bundle\") pod \"neutron-66787bd68b-jd5l8\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.960082 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-config\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.960108 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-httpd-config\") pod \"neutron-66787bd68b-jd5l8\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.960136 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg6b4\" (UniqueName: \"kubernetes.io/projected/71bd7206-d9dd-40e7-a991-c5cf107989f4-kube-api-access-xg6b4\") pod \"neutron-66787bd68b-jd5l8\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.960158 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-ovndb-tls-certs\") pod \"neutron-66787bd68b-jd5l8\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.960178 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-dns-svc\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.960211 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-dns-swift-storage-0\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.960237 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-config\") pod \"neutron-66787bd68b-jd5l8\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.961487 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-ovsdbserver-nb\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.962000 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-ovsdbserver-sb\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.962521 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-config\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.963106 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-dns-svc\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.963620 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-dns-swift-storage-0\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: 
\"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:41 crc kubenswrapper[4813]: I0219 18:49:41.978054 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-444fb\" (UniqueName: \"kubernetes.io/projected/f3273f47-8ad4-42ab-b905-a55e4e23400f-kube-api-access-444fb\") pod \"dnsmasq-dns-db5c97f8f-96mx5\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.061239 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-config\") pod \"neutron-66787bd68b-jd5l8\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.061536 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-combined-ca-bundle\") pod \"neutron-66787bd68b-jd5l8\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.061585 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-httpd-config\") pod \"neutron-66787bd68b-jd5l8\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.061615 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg6b4\" (UniqueName: \"kubernetes.io/projected/71bd7206-d9dd-40e7-a991-c5cf107989f4-kube-api-access-xg6b4\") pod \"neutron-66787bd68b-jd5l8\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " pod="openstack/neutron-66787bd68b-jd5l8" 
Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.061639 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-ovndb-tls-certs\") pod \"neutron-66787bd68b-jd5l8\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.069711 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-config\") pod \"neutron-66787bd68b-jd5l8\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.073055 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-ovndb-tls-certs\") pod \"neutron-66787bd68b-jd5l8\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.080078 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.083575 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg6b4\" (UniqueName: \"kubernetes.io/projected/71bd7206-d9dd-40e7-a991-c5cf107989f4-kube-api-access-xg6b4\") pod \"neutron-66787bd68b-jd5l8\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.085473 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-combined-ca-bundle\") pod \"neutron-66787bd68b-jd5l8\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.085498 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-httpd-config\") pod \"neutron-66787bd68b-jd5l8\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.154268 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.437976 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5899f78d95-lmnxh" event={"ID":"79cdc675-a16c-4c18-bcef-d844d7a2f75d","Type":"ContainerStarted","Data":"11ff41f1c3a05377a4d38f769ca6258be7fe43feeb2f1e8eb06714701d85e419"} Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.438314 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.460801 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5899f78d95-lmnxh" podStartSLOduration=2.460779511 podStartE2EDuration="2.460779511s" podCreationTimestamp="2026-02-19 18:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:49:42.457875601 +0000 UTC m=+1201.683316142" watchObservedRunningTime="2026-02-19 18:49:42.460779511 +0000 UTC m=+1201.686220052" Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.606227 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-db5c97f8f-96mx5"] Feb 19 18:49:42 crc kubenswrapper[4813]: W0219 18:49:42.616893 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3273f47_8ad4_42ab_b905_a55e4e23400f.slice/crio-7ed945e781aaa414b839f98307d8c78316df0496987bdcf26488256c67ca1d4d WatchSource:0}: Error finding container 7ed945e781aaa414b839f98307d8c78316df0496987bdcf26488256c67ca1d4d: Status 404 returned error can't find the container with id 7ed945e781aaa414b839f98307d8c78316df0496987bdcf26488256c67ca1d4d Feb 19 18:49:42 crc kubenswrapper[4813]: I0219 18:49:42.849321 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66787bd68b-jd5l8"] Feb 19 18:49:43 
crc kubenswrapper[4813]: W0219 18:49:43.204301 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71bd7206_d9dd_40e7_a991_c5cf107989f4.slice/crio-fe6454e419b957e6a5f61830c6918674df7165b2e4a0491c5c30d46e09512c25 WatchSource:0}: Error finding container fe6454e419b957e6a5f61830c6918674df7165b2e4a0491c5c30d46e09512c25: Status 404 returned error can't find the container with id fe6454e419b957e6a5f61830c6918674df7165b2e4a0491c5c30d46e09512c25 Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.445151 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" event={"ID":"f3273f47-8ad4-42ab-b905-a55e4e23400f","Type":"ContainerStarted","Data":"7ed945e781aaa414b839f98307d8c78316df0496987bdcf26488256c67ca1d4d"} Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.447093 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66787bd68b-jd5l8" event={"ID":"71bd7206-d9dd-40e7-a991-c5cf107989f4","Type":"ContainerStarted","Data":"fe6454e419b957e6a5f61830c6918674df7165b2e4a0491c5c30d46e09512c25"} Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.650555 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-79884964f7-nvxp2"] Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.657818 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.666097 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.666442 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.667769 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-79884964f7-nvxp2"] Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.694769 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-combined-ca-bundle\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.694820 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-httpd-config\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.694845 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-config\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.694866 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-internal-tls-certs\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.694914 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdk62\" (UniqueName: \"kubernetes.io/projected/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-kube-api-access-kdk62\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.694984 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-ovndb-tls-certs\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.695031 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-public-tls-certs\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.743060 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.743105 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.749291 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:43 crc 
kubenswrapper[4813]: I0219 18:49:43.749318 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.779573 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.781515 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.783062 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.785364 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.797560 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-combined-ca-bundle\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.797623 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-httpd-config\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.797658 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-config\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " 
pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.797684 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-internal-tls-certs\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.797708 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdk62\" (UniqueName: \"kubernetes.io/projected/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-kube-api-access-kdk62\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.797765 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-ovndb-tls-certs\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.797812 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-public-tls-certs\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.803921 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-combined-ca-bundle\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc 
kubenswrapper[4813]: I0219 18:49:43.804858 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-config\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.807225 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-ovndb-tls-certs\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.808206 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-internal-tls-certs\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.810788 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-public-tls-certs\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.817803 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdk62\" (UniqueName: \"kubernetes.io/projected/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-kube-api-access-kdk62\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:43 crc kubenswrapper[4813]: I0219 18:49:43.835135 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-httpd-config\") pod \"neutron-79884964f7-nvxp2\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:44 crc kubenswrapper[4813]: I0219 18:49:44.018472 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:44 crc kubenswrapper[4813]: I0219 18:49:44.471864 4813 generic.go:334] "Generic (PLEG): container finished" podID="f3273f47-8ad4-42ab-b905-a55e4e23400f" containerID="af6a2aea5c455e1c251c9ee0bdc5038ff3b28b3749d43d0e2b7495a6590ac959" exitCode=0 Feb 19 18:49:44 crc kubenswrapper[4813]: I0219 18:49:44.471998 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" event={"ID":"f3273f47-8ad4-42ab-b905-a55e4e23400f","Type":"ContainerDied","Data":"af6a2aea5c455e1c251c9ee0bdc5038ff3b28b3749d43d0e2b7495a6590ac959"} Feb 19 18:49:44 crc kubenswrapper[4813]: I0219 18:49:44.474428 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66787bd68b-jd5l8" event={"ID":"71bd7206-d9dd-40e7-a991-c5cf107989f4","Type":"ContainerStarted","Data":"be42f7d08861220021c70e71acbf687aef181911a7cf12a8a39a0b4bb04847e0"} Feb 19 18:49:44 crc kubenswrapper[4813]: I0219 18:49:44.474755 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66787bd68b-jd5l8" event={"ID":"71bd7206-d9dd-40e7-a991-c5cf107989f4","Type":"ContainerStarted","Data":"af026d010d31eb078c48b991d55c4e3b17bec908c6df634652a26ae1154003aa"} Feb 19 18:49:44 crc kubenswrapper[4813]: I0219 18:49:44.476728 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:44 crc kubenswrapper[4813]: I0219 18:49:44.476767 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 18:49:44 crc kubenswrapper[4813]: I0219 
18:49:44.476782 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:44 crc kubenswrapper[4813]: I0219 18:49:44.476793 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 18:49:44 crc kubenswrapper[4813]: I0219 18:49:44.507144 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66787bd68b-jd5l8" podStartSLOduration=3.507124572 podStartE2EDuration="3.507124572s" podCreationTimestamp="2026-02-19 18:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:49:44.502687724 +0000 UTC m=+1203.728128275" watchObservedRunningTime="2026-02-19 18:49:44.507124572 +0000 UTC m=+1203.732565123" Feb 19 18:49:44 crc kubenswrapper[4813]: I0219 18:49:44.636608 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-79884964f7-nvxp2"] Feb 19 18:49:45 crc kubenswrapper[4813]: I0219 18:49:45.490964 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79884964f7-nvxp2" event={"ID":"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd","Type":"ContainerStarted","Data":"8d8d3b2b19279b349f178ba59ce9a2b30004d64b0f858bebde8ea20429c2ad81"} Feb 19 18:49:45 crc kubenswrapper[4813]: I0219 18:49:45.491406 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79884964f7-nvxp2" event={"ID":"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd","Type":"ContainerStarted","Data":"8da735698479a4a63037188d407db2f3e744a3d19718c2c1b9b54bcc5cc8aefc"} Feb 19 18:49:45 crc kubenswrapper[4813]: I0219 18:49:45.506020 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" event={"ID":"f3273f47-8ad4-42ab-b905-a55e4e23400f","Type":"ContainerStarted","Data":"89fb06a84ee1eb1f8794842873d2c140f6c67f316fd68d04b670483d239c1f4d"} Feb 19 18:49:45 crc 
kubenswrapper[4813]: I0219 18:49:45.506083 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:45 crc kubenswrapper[4813]: I0219 18:49:45.506096 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:49:45 crc kubenswrapper[4813]: I0219 18:49:45.529868 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" podStartSLOduration=4.529847438 podStartE2EDuration="4.529847438s" podCreationTimestamp="2026-02-19 18:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:49:45.523319537 +0000 UTC m=+1204.748760078" watchObservedRunningTime="2026-02-19 18:49:45.529847438 +0000 UTC m=+1204.755287979" Feb 19 18:49:46 crc kubenswrapper[4813]: I0219 18:49:46.478871 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 18:49:46 crc kubenswrapper[4813]: I0219 18:49:46.509623 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 18:49:46 crc kubenswrapper[4813]: I0219 18:49:46.509646 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 18:49:46 crc kubenswrapper[4813]: I0219 18:49:46.509652 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 18:49:46 crc kubenswrapper[4813]: I0219 18:49:46.531989 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:46 crc kubenswrapper[4813]: I0219 18:49:46.532529 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 18:49:46 crc kubenswrapper[4813]: I0219 18:49:46.584716 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/glance-default-external-api-0" Feb 19 18:49:50 crc kubenswrapper[4813]: I0219 18:49:50.553613 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79884964f7-nvxp2" event={"ID":"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd","Type":"ContainerStarted","Data":"798087b6274eb1a02113a606fd85310be018115d9f4e8a89578ca60d75da110c"} Feb 19 18:49:51 crc kubenswrapper[4813]: I0219 18:49:51.564076 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4446e6ed-7663-41ab-9fae-f6da8f4f5449","Type":"ContainerStarted","Data":"b23093c181cb23b00205b702d5d01561b8936f76067bc28d25c6af1d89b235bd"} Feb 19 18:49:51 crc kubenswrapper[4813]: I0219 18:49:51.564489 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="ceilometer-central-agent" containerID="cri-o://4fa144b488fed73c21844ad14a76893ad553ff87da4e836d63e03c5203e0a173" gracePeriod=30 Feb 19 18:49:51 crc kubenswrapper[4813]: I0219 18:49:51.564712 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 18:49:51 crc kubenswrapper[4813]: I0219 18:49:51.564931 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="proxy-httpd" containerID="cri-o://b23093c181cb23b00205b702d5d01561b8936f76067bc28d25c6af1d89b235bd" gracePeriod=30 Feb 19 18:49:51 crc kubenswrapper[4813]: I0219 18:49:51.565029 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="sg-core" containerID="cri-o://f3d6fdd3c59e3e01f33197739decd04a0dac26668d2cd4b7085a00e81751dcdd" gracePeriod=30 Feb 19 18:49:51 crc kubenswrapper[4813]: I0219 18:49:51.565064 4813 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="ceilometer-notification-agent" containerID="cri-o://b90427982961bec5baa8d0a5adb7ef61ac4a4b9f69504de632565a8d4fb1746f" gracePeriod=30 Feb 19 18:49:51 crc kubenswrapper[4813]: I0219 18:49:51.568808 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9s5kt" event={"ID":"d622858a-0915-43b1-9169-8f176f0b16f0","Type":"ContainerStarted","Data":"9397c3a0ab851a08334b46849ddf572b6925e37fadc61f477415f20533b7cf9d"} Feb 19 18:49:51 crc kubenswrapper[4813]: I0219 18:49:51.571980 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nnwfh" event={"ID":"a87080d3-007c-48e0-aa89-b82c5d9dafab","Type":"ContainerStarted","Data":"2aad7eab96447a6cadb62e2950ffd040729196c936832856ae0117d0d29ac117"} Feb 19 18:49:51 crc kubenswrapper[4813]: I0219 18:49:51.572145 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:49:51 crc kubenswrapper[4813]: I0219 18:49:51.587016 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.705422063 podStartE2EDuration="41.587001037s" podCreationTimestamp="2026-02-19 18:49:10 +0000 UTC" firstStartedPulling="2026-02-19 18:49:11.785135515 +0000 UTC m=+1171.010576056" lastFinishedPulling="2026-02-19 18:49:50.666714489 +0000 UTC m=+1209.892155030" observedRunningTime="2026-02-19 18:49:51.585219081 +0000 UTC m=+1210.810659652" watchObservedRunningTime="2026-02-19 18:49:51.587001037 +0000 UTC m=+1210.812441578" Feb 19 18:49:51 crc kubenswrapper[4813]: I0219 18:49:51.605337 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-9s5kt" podStartSLOduration=3.715366534 podStartE2EDuration="41.605319071s" podCreationTimestamp="2026-02-19 18:49:10 +0000 UTC" firstStartedPulling="2026-02-19 18:49:11.96433612 +0000 UTC 
m=+1171.189776651" lastFinishedPulling="2026-02-19 18:49:49.854288627 +0000 UTC m=+1209.079729188" observedRunningTime="2026-02-19 18:49:51.599359367 +0000 UTC m=+1210.824799908" watchObservedRunningTime="2026-02-19 18:49:51.605319071 +0000 UTC m=+1210.830759612" Feb 19 18:49:51 crc kubenswrapper[4813]: I0219 18:49:51.619946 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-nnwfh" podStartSLOduration=2.369584067 podStartE2EDuration="41.619923902s" podCreationTimestamp="2026-02-19 18:49:10 +0000 UTC" firstStartedPulling="2026-02-19 18:49:11.375842653 +0000 UTC m=+1170.601283194" lastFinishedPulling="2026-02-19 18:49:50.626182478 +0000 UTC m=+1209.851623029" observedRunningTime="2026-02-19 18:49:51.615121824 +0000 UTC m=+1210.840562365" watchObservedRunningTime="2026-02-19 18:49:51.619923902 +0000 UTC m=+1210.845364443" Feb 19 18:49:51 crc kubenswrapper[4813]: I0219 18:49:51.634994 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-79884964f7-nvxp2" podStartSLOduration=8.634957406 podStartE2EDuration="8.634957406s" podCreationTimestamp="2026-02-19 18:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:49:51.630081974 +0000 UTC m=+1210.855522535" watchObservedRunningTime="2026-02-19 18:49:51.634957406 +0000 UTC m=+1210.860397947" Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.081845 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.175773 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67dccc895-gzmtt"] Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.176418 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67dccc895-gzmtt" 
podUID="85fc318c-5591-4bb8-a92f-04b0f34884e7" containerName="dnsmasq-dns" containerID="cri-o://cc1a7985b68dabdd2ade8c388623c6d6b868f6f8491a4f17c04a3cbef2736d1f" gracePeriod=10 Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.588412 4813 generic.go:334] "Generic (PLEG): container finished" podID="85fc318c-5591-4bb8-a92f-04b0f34884e7" containerID="cc1a7985b68dabdd2ade8c388623c6d6b868f6f8491a4f17c04a3cbef2736d1f" exitCode=0 Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.588483 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dccc895-gzmtt" event={"ID":"85fc318c-5591-4bb8-a92f-04b0f34884e7","Type":"ContainerDied","Data":"cc1a7985b68dabdd2ade8c388623c6d6b868f6f8491a4f17c04a3cbef2736d1f"} Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.594945 4813 generic.go:334] "Generic (PLEG): container finished" podID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerID="b23093c181cb23b00205b702d5d01561b8936f76067bc28d25c6af1d89b235bd" exitCode=0 Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.595002 4813 generic.go:334] "Generic (PLEG): container finished" podID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerID="f3d6fdd3c59e3e01f33197739decd04a0dac26668d2cd4b7085a00e81751dcdd" exitCode=2 Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.595017 4813 generic.go:334] "Generic (PLEG): container finished" podID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerID="4fa144b488fed73c21844ad14a76893ad553ff87da4e836d63e03c5203e0a173" exitCode=0 Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.595042 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4446e6ed-7663-41ab-9fae-f6da8f4f5449","Type":"ContainerDied","Data":"b23093c181cb23b00205b702d5d01561b8936f76067bc28d25c6af1d89b235bd"} Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.595098 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4446e6ed-7663-41ab-9fae-f6da8f4f5449","Type":"ContainerDied","Data":"f3d6fdd3c59e3e01f33197739decd04a0dac26668d2cd4b7085a00e81751dcdd"} Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.595113 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4446e6ed-7663-41ab-9fae-f6da8f4f5449","Type":"ContainerDied","Data":"4fa144b488fed73c21844ad14a76893ad553ff87da4e836d63e03c5203e0a173"} Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.694805 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.777339 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-ovsdbserver-sb\") pod \"85fc318c-5591-4bb8-a92f-04b0f34884e7\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.777505 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-ovsdbserver-nb\") pod \"85fc318c-5591-4bb8-a92f-04b0f34884e7\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.777581 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-dns-swift-storage-0\") pod \"85fc318c-5591-4bb8-a92f-04b0f34884e7\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.777637 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-dns-svc\") pod \"85fc318c-5591-4bb8-a92f-04b0f34884e7\" 
(UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.777713 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ghf8\" (UniqueName: \"kubernetes.io/projected/85fc318c-5591-4bb8-a92f-04b0f34884e7-kube-api-access-8ghf8\") pod \"85fc318c-5591-4bb8-a92f-04b0f34884e7\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.777757 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-config\") pod \"85fc318c-5591-4bb8-a92f-04b0f34884e7\" (UID: \"85fc318c-5591-4bb8-a92f-04b0f34884e7\") " Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.798241 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85fc318c-5591-4bb8-a92f-04b0f34884e7-kube-api-access-8ghf8" (OuterVolumeSpecName: "kube-api-access-8ghf8") pod "85fc318c-5591-4bb8-a92f-04b0f34884e7" (UID: "85fc318c-5591-4bb8-a92f-04b0f34884e7"). InnerVolumeSpecName "kube-api-access-8ghf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.832715 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "85fc318c-5591-4bb8-a92f-04b0f34884e7" (UID: "85fc318c-5591-4bb8-a92f-04b0f34884e7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.833420 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "85fc318c-5591-4bb8-a92f-04b0f34884e7" (UID: "85fc318c-5591-4bb8-a92f-04b0f34884e7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.836456 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "85fc318c-5591-4bb8-a92f-04b0f34884e7" (UID: "85fc318c-5591-4bb8-a92f-04b0f34884e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.838280 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "85fc318c-5591-4bb8-a92f-04b0f34884e7" (UID: "85fc318c-5591-4bb8-a92f-04b0f34884e7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.839014 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-config" (OuterVolumeSpecName: "config") pod "85fc318c-5591-4bb8-a92f-04b0f34884e7" (UID: "85fc318c-5591-4bb8-a92f-04b0f34884e7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.879905 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.879934 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.879945 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.879957 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.879982 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ghf8\" (UniqueName: \"kubernetes.io/projected/85fc318c-5591-4bb8-a92f-04b0f34884e7-kube-api-access-8ghf8\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:52 crc kubenswrapper[4813]: I0219 18:49:52.879992 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85fc318c-5591-4bb8-a92f-04b0f34884e7-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:53 crc kubenswrapper[4813]: I0219 18:49:53.605223 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67dccc895-gzmtt" Feb 19 18:49:53 crc kubenswrapper[4813]: I0219 18:49:53.605228 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67dccc895-gzmtt" event={"ID":"85fc318c-5591-4bb8-a92f-04b0f34884e7","Type":"ContainerDied","Data":"ab56bf256d801254cd2abf2624c1ce3a538ac630039c3a3f5dc41f322ad33d7d"} Feb 19 18:49:53 crc kubenswrapper[4813]: I0219 18:49:53.605684 4813 scope.go:117] "RemoveContainer" containerID="cc1a7985b68dabdd2ade8c388623c6d6b868f6f8491a4f17c04a3cbef2736d1f" Feb 19 18:49:53 crc kubenswrapper[4813]: I0219 18:49:53.606921 4813 generic.go:334] "Generic (PLEG): container finished" podID="d622858a-0915-43b1-9169-8f176f0b16f0" containerID="9397c3a0ab851a08334b46849ddf572b6925e37fadc61f477415f20533b7cf9d" exitCode=0 Feb 19 18:49:53 crc kubenswrapper[4813]: I0219 18:49:53.606979 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9s5kt" event={"ID":"d622858a-0915-43b1-9169-8f176f0b16f0","Type":"ContainerDied","Data":"9397c3a0ab851a08334b46849ddf572b6925e37fadc61f477415f20533b7cf9d"} Feb 19 18:49:53 crc kubenswrapper[4813]: I0219 18:49:53.653507 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67dccc895-gzmtt"] Feb 19 18:49:53 crc kubenswrapper[4813]: I0219 18:49:53.665568 4813 scope.go:117] "RemoveContainer" containerID="b0864f50c0c2d40b2316664b91fc781b2661ec928d8f7c82bae05c26bc9c992d" Feb 19 18:49:53 crc kubenswrapper[4813]: I0219 18:49:53.669203 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67dccc895-gzmtt"] Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.045818 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.101002 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4446e6ed-7663-41ab-9fae-f6da8f4f5449-run-httpd\") pod \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.101182 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-scripts\") pod \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.101277 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-sg-core-conf-yaml\") pod \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.101312 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jdmx\" (UniqueName: \"kubernetes.io/projected/4446e6ed-7663-41ab-9fae-f6da8f4f5449-kube-api-access-4jdmx\") pod \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.101369 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-combined-ca-bundle\") pod \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.101429 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-config-data\") pod \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.101468 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4446e6ed-7663-41ab-9fae-f6da8f4f5449-log-httpd\") pod \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\" (UID: \"4446e6ed-7663-41ab-9fae-f6da8f4f5449\") " Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.102643 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4446e6ed-7663-41ab-9fae-f6da8f4f5449-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4446e6ed-7663-41ab-9fae-f6da8f4f5449" (UID: "4446e6ed-7663-41ab-9fae-f6da8f4f5449"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.103051 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4446e6ed-7663-41ab-9fae-f6da8f4f5449-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4446e6ed-7663-41ab-9fae-f6da8f4f5449" (UID: "4446e6ed-7663-41ab-9fae-f6da8f4f5449"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.107144 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-scripts" (OuterVolumeSpecName: "scripts") pod "4446e6ed-7663-41ab-9fae-f6da8f4f5449" (UID: "4446e6ed-7663-41ab-9fae-f6da8f4f5449"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.109119 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4446e6ed-7663-41ab-9fae-f6da8f4f5449-kube-api-access-4jdmx" (OuterVolumeSpecName: "kube-api-access-4jdmx") pod "4446e6ed-7663-41ab-9fae-f6da8f4f5449" (UID: "4446e6ed-7663-41ab-9fae-f6da8f4f5449"). InnerVolumeSpecName "kube-api-access-4jdmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.140840 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4446e6ed-7663-41ab-9fae-f6da8f4f5449" (UID: "4446e6ed-7663-41ab-9fae-f6da8f4f5449"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.196733 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4446e6ed-7663-41ab-9fae-f6da8f4f5449" (UID: "4446e6ed-7663-41ab-9fae-f6da8f4f5449"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.204272 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.204326 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jdmx\" (UniqueName: \"kubernetes.io/projected/4446e6ed-7663-41ab-9fae-f6da8f4f5449-kube-api-access-4jdmx\") on node \"crc\" DevicePath \"\""
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.204362 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.204471 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4446e6ed-7663-41ab-9fae-f6da8f4f5449-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.204489 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4446e6ed-7663-41ab-9fae-f6da8f4f5449-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.204519 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.228063 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-config-data" (OuterVolumeSpecName: "config-data") pod "4446e6ed-7663-41ab-9fae-f6da8f4f5449" (UID: "4446e6ed-7663-41ab-9fae-f6da8f4f5449"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.306888 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4446e6ed-7663-41ab-9fae-f6da8f4f5449-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.624366 4813 generic.go:334] "Generic (PLEG): container finished" podID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerID="b90427982961bec5baa8d0a5adb7ef61ac4a4b9f69504de632565a8d4fb1746f" exitCode=0
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.624460 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4446e6ed-7663-41ab-9fae-f6da8f4f5449","Type":"ContainerDied","Data":"b90427982961bec5baa8d0a5adb7ef61ac4a4b9f69504de632565a8d4fb1746f"}
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.624492 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.624541 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4446e6ed-7663-41ab-9fae-f6da8f4f5449","Type":"ContainerDied","Data":"ce80f7aef1a0438644020c5f48d55ced01a988ffc32b33ab807a1b94c1ba8132"}
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.624578 4813 scope.go:117] "RemoveContainer" containerID="b23093c181cb23b00205b702d5d01561b8936f76067bc28d25c6af1d89b235bd"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.661738 4813 scope.go:117] "RemoveContainer" containerID="f3d6fdd3c59e3e01f33197739decd04a0dac26668d2cd4b7085a00e81751dcdd"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.717180 4813 scope.go:117] "RemoveContainer" containerID="b90427982961bec5baa8d0a5adb7ef61ac4a4b9f69504de632565a8d4fb1746f"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.718853 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.756842 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.768629 4813 scope.go:117] "RemoveContainer" containerID="4fa144b488fed73c21844ad14a76893ad553ff87da4e836d63e03c5203e0a173"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.768757 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 18:49:54 crc kubenswrapper[4813]: E0219 18:49:54.769132 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="ceilometer-central-agent"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.769149 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="ceilometer-central-agent"
Feb 19 18:49:54 crc kubenswrapper[4813]: E0219 18:49:54.769163 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="sg-core"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.769170 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="sg-core"
Feb 19 18:49:54 crc kubenswrapper[4813]: E0219 18:49:54.769184 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85fc318c-5591-4bb8-a92f-04b0f34884e7" containerName="init"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.769190 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="85fc318c-5591-4bb8-a92f-04b0f34884e7" containerName="init"
Feb 19 18:49:54 crc kubenswrapper[4813]: E0219 18:49:54.769200 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85fc318c-5591-4bb8-a92f-04b0f34884e7" containerName="dnsmasq-dns"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.769206 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="85fc318c-5591-4bb8-a92f-04b0f34884e7" containerName="dnsmasq-dns"
Feb 19 18:49:54 crc kubenswrapper[4813]: E0219 18:49:54.769215 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="proxy-httpd"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.769220 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="proxy-httpd"
Feb 19 18:49:54 crc kubenswrapper[4813]: E0219 18:49:54.769234 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="ceilometer-notification-agent"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.769240 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="ceilometer-notification-agent"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.769394 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="sg-core"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.769409 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="ceilometer-central-agent"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.769434 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="proxy-httpd"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.769452 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="85fc318c-5591-4bb8-a92f-04b0f34884e7" containerName="dnsmasq-dns"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.769503 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" containerName="ceilometer-notification-agent"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.771360 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.775410 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.776164 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.794913 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.810321 4813 scope.go:117] "RemoveContainer" containerID="b23093c181cb23b00205b702d5d01561b8936f76067bc28d25c6af1d89b235bd"
Feb 19 18:49:54 crc kubenswrapper[4813]: E0219 18:49:54.811487 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b23093c181cb23b00205b702d5d01561b8936f76067bc28d25c6af1d89b235bd\": container with ID starting with b23093c181cb23b00205b702d5d01561b8936f76067bc28d25c6af1d89b235bd not found: ID does not exist" containerID="b23093c181cb23b00205b702d5d01561b8936f76067bc28d25c6af1d89b235bd"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.811532 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b23093c181cb23b00205b702d5d01561b8936f76067bc28d25c6af1d89b235bd"} err="failed to get container status \"b23093c181cb23b00205b702d5d01561b8936f76067bc28d25c6af1d89b235bd\": rpc error: code = NotFound desc = could not find container \"b23093c181cb23b00205b702d5d01561b8936f76067bc28d25c6af1d89b235bd\": container with ID starting with b23093c181cb23b00205b702d5d01561b8936f76067bc28d25c6af1d89b235bd not found: ID does not exist"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.811560 4813 scope.go:117] "RemoveContainer" containerID="f3d6fdd3c59e3e01f33197739decd04a0dac26668d2cd4b7085a00e81751dcdd"
Feb 19 18:49:54 crc kubenswrapper[4813]: E0219 18:49:54.811879 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3d6fdd3c59e3e01f33197739decd04a0dac26668d2cd4b7085a00e81751dcdd\": container with ID starting with f3d6fdd3c59e3e01f33197739decd04a0dac26668d2cd4b7085a00e81751dcdd not found: ID does not exist" containerID="f3d6fdd3c59e3e01f33197739decd04a0dac26668d2cd4b7085a00e81751dcdd"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.811907 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3d6fdd3c59e3e01f33197739decd04a0dac26668d2cd4b7085a00e81751dcdd"} err="failed to get container status \"f3d6fdd3c59e3e01f33197739decd04a0dac26668d2cd4b7085a00e81751dcdd\": rpc error: code = NotFound desc = could not find container \"f3d6fdd3c59e3e01f33197739decd04a0dac26668d2cd4b7085a00e81751dcdd\": container with ID starting with f3d6fdd3c59e3e01f33197739decd04a0dac26668d2cd4b7085a00e81751dcdd not found: ID does not exist"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.811927 4813 scope.go:117] "RemoveContainer" containerID="b90427982961bec5baa8d0a5adb7ef61ac4a4b9f69504de632565a8d4fb1746f"
Feb 19 18:49:54 crc kubenswrapper[4813]: E0219 18:49:54.812315 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b90427982961bec5baa8d0a5adb7ef61ac4a4b9f69504de632565a8d4fb1746f\": container with ID starting with b90427982961bec5baa8d0a5adb7ef61ac4a4b9f69504de632565a8d4fb1746f not found: ID does not exist" containerID="b90427982961bec5baa8d0a5adb7ef61ac4a4b9f69504de632565a8d4fb1746f"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.812356 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b90427982961bec5baa8d0a5adb7ef61ac4a4b9f69504de632565a8d4fb1746f"} err="failed to get container status \"b90427982961bec5baa8d0a5adb7ef61ac4a4b9f69504de632565a8d4fb1746f\": rpc error: code = NotFound desc = could not find container \"b90427982961bec5baa8d0a5adb7ef61ac4a4b9f69504de632565a8d4fb1746f\": container with ID starting with b90427982961bec5baa8d0a5adb7ef61ac4a4b9f69504de632565a8d4fb1746f not found: ID does not exist"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.812387 4813 scope.go:117] "RemoveContainer" containerID="4fa144b488fed73c21844ad14a76893ad553ff87da4e836d63e03c5203e0a173"
Feb 19 18:49:54 crc kubenswrapper[4813]: E0219 18:49:54.812696 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fa144b488fed73c21844ad14a76893ad553ff87da4e836d63e03c5203e0a173\": container with ID starting with 4fa144b488fed73c21844ad14a76893ad553ff87da4e836d63e03c5203e0a173 not found: ID does not exist" containerID="4fa144b488fed73c21844ad14a76893ad553ff87da4e836d63e03c5203e0a173"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.812717 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fa144b488fed73c21844ad14a76893ad553ff87da4e836d63e03c5203e0a173"} err="failed to get container status \"4fa144b488fed73c21844ad14a76893ad553ff87da4e836d63e03c5203e0a173\": rpc error: code = NotFound desc = could not find container \"4fa144b488fed73c21844ad14a76893ad553ff87da4e836d63e03c5203e0a173\": container with ID starting with 4fa144b488fed73c21844ad14a76893ad553ff87da4e836d63e03c5203e0a173 not found: ID does not exist"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.817654 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.817706 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz6dr\" (UniqueName: \"kubernetes.io/projected/005d604f-ced9-4b2e-aae7-1cac5398b880-kube-api-access-pz6dr\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.817725 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/005d604f-ced9-4b2e-aae7-1cac5398b880-log-httpd\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.817749 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-scripts\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.817886 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/005d604f-ced9-4b2e-aae7-1cac5398b880-run-httpd\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.817945 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.818049 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-config-data\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.919276 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.919607 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz6dr\" (UniqueName: \"kubernetes.io/projected/005d604f-ced9-4b2e-aae7-1cac5398b880-kube-api-access-pz6dr\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.919628 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/005d604f-ced9-4b2e-aae7-1cac5398b880-log-httpd\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.919654 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-scripts\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.919714 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/005d604f-ced9-4b2e-aae7-1cac5398b880-run-httpd\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.919733 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.919758 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-config-data\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.920576 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/005d604f-ced9-4b2e-aae7-1cac5398b880-log-httpd\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.920566 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/005d604f-ced9-4b2e-aae7-1cac5398b880-run-httpd\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.923651 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.923850 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-config-data\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.926813 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.928342 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-scripts\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.935991 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz6dr\" (UniqueName: \"kubernetes.io/projected/005d604f-ced9-4b2e-aae7-1cac5398b880-kube-api-access-pz6dr\") pod \"ceilometer-0\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " pod="openstack/ceilometer-0"
Feb 19 18:49:54 crc kubenswrapper[4813]: I0219 18:49:54.976806 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9s5kt"
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.020652 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d622858a-0915-43b1-9169-8f176f0b16f0-combined-ca-bundle\") pod \"d622858a-0915-43b1-9169-8f176f0b16f0\" (UID: \"d622858a-0915-43b1-9169-8f176f0b16f0\") "
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.020955 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjglc\" (UniqueName: \"kubernetes.io/projected/d622858a-0915-43b1-9169-8f176f0b16f0-kube-api-access-qjglc\") pod \"d622858a-0915-43b1-9169-8f176f0b16f0\" (UID: \"d622858a-0915-43b1-9169-8f176f0b16f0\") "
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.021045 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d622858a-0915-43b1-9169-8f176f0b16f0-db-sync-config-data\") pod \"d622858a-0915-43b1-9169-8f176f0b16f0\" (UID: \"d622858a-0915-43b1-9169-8f176f0b16f0\") "
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.026239 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d622858a-0915-43b1-9169-8f176f0b16f0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d622858a-0915-43b1-9169-8f176f0b16f0" (UID: "d622858a-0915-43b1-9169-8f176f0b16f0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.026350 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d622858a-0915-43b1-9169-8f176f0b16f0-kube-api-access-qjglc" (OuterVolumeSpecName: "kube-api-access-qjglc") pod "d622858a-0915-43b1-9169-8f176f0b16f0" (UID: "d622858a-0915-43b1-9169-8f176f0b16f0"). InnerVolumeSpecName "kube-api-access-qjglc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.050835 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d622858a-0915-43b1-9169-8f176f0b16f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d622858a-0915-43b1-9169-8f176f0b16f0" (UID: "d622858a-0915-43b1-9169-8f176f0b16f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.101921 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.123561 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d622858a-0915-43b1-9169-8f176f0b16f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.123592 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjglc\" (UniqueName: \"kubernetes.io/projected/d622858a-0915-43b1-9169-8f176f0b16f0-kube-api-access-qjglc\") on node \"crc\" DevicePath \"\""
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.123601 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d622858a-0915-43b1-9169-8f176f0b16f0-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.484205 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4446e6ed-7663-41ab-9fae-f6da8f4f5449" path="/var/lib/kubelet/pods/4446e6ed-7663-41ab-9fae-f6da8f4f5449/volumes"
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.485330 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85fc318c-5591-4bb8-a92f-04b0f34884e7" path="/var/lib/kubelet/pods/85fc318c-5591-4bb8-a92f-04b0f34884e7/volumes"
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.558667 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 18:49:55 crc kubenswrapper[4813]: W0219 18:49:55.571281 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod005d604f_ced9_4b2e_aae7_1cac5398b880.slice/crio-f8edc69fc08d0886f9fa7b1087afe30a20be05307f31b206aae33e1f88977648 WatchSource:0}: Error finding container f8edc69fc08d0886f9fa7b1087afe30a20be05307f31b206aae33e1f88977648: Status 404 returned error can't find the container with id f8edc69fc08d0886f9fa7b1087afe30a20be05307f31b206aae33e1f88977648
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.633628 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"005d604f-ced9-4b2e-aae7-1cac5398b880","Type":"ContainerStarted","Data":"f8edc69fc08d0886f9fa7b1087afe30a20be05307f31b206aae33e1f88977648"}
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.636699 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9s5kt" event={"ID":"d622858a-0915-43b1-9169-8f176f0b16f0","Type":"ContainerDied","Data":"c69d607de1b17bf98758cc16725aa60b9f622f4ccb07684b8b40e0c336c57423"}
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.636738 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c69d607de1b17bf98758cc16725aa60b9f622f4ccb07684b8b40e0c336c57423"
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.636835 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9s5kt"
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.935551 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-9f6f7ccc7-dwqzl"]
Feb 19 18:49:55 crc kubenswrapper[4813]: E0219 18:49:55.936653 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d622858a-0915-43b1-9169-8f176f0b16f0" containerName="barbican-db-sync"
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.936674 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d622858a-0915-43b1-9169-8f176f0b16f0" containerName="barbican-db-sync"
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.937135 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d622858a-0915-43b1-9169-8f176f0b16f0" containerName="barbican-db-sync"
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.939355 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9f6f7ccc7-dwqzl"
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.943908 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9f6f7ccc7-dwqzl"]
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.950799 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.951048 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-x7ftv"
Feb 19 18:49:55 crc kubenswrapper[4813]: I0219 18:49:55.951222 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.034251 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-bf98f678b-j6t6g"]
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.046346 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-combined-ca-bundle\") pod \"barbican-worker-9f6f7ccc7-dwqzl\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " pod="openstack/barbican-worker-9f6f7ccc7-dwqzl"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.046456 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkhlf\" (UniqueName: \"kubernetes.io/projected/5aeb8bcb-4373-48d4-9ac6-e6472189e440-kube-api-access-lkhlf\") pod \"barbican-worker-9f6f7ccc7-dwqzl\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " pod="openstack/barbican-worker-9f6f7ccc7-dwqzl"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.046515 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-config-data-custom\") pod \"barbican-worker-9f6f7ccc7-dwqzl\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " pod="openstack/barbican-worker-9f6f7ccc7-dwqzl"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.046573 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aeb8bcb-4373-48d4-9ac6-e6472189e440-logs\") pod \"barbican-worker-9f6f7ccc7-dwqzl\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " pod="openstack/barbican-worker-9f6f7ccc7-dwqzl"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.046600 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-config-data\") pod \"barbican-worker-9f6f7ccc7-dwqzl\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " pod="openstack/barbican-worker-9f6f7ccc7-dwqzl"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.047807 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.052739 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.061723 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-bf98f678b-j6t6g"]
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.113645 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9d49dd75f-mhb4c"]
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.115172 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.132150 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9d49dd75f-mhb4c"]
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151301 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkhlf\" (UniqueName: \"kubernetes.io/projected/5aeb8bcb-4373-48d4-9ac6-e6472189e440-kube-api-access-lkhlf\") pod \"barbican-worker-9f6f7ccc7-dwqzl\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " pod="openstack/barbican-worker-9f6f7ccc7-dwqzl"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151340 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81790f67-278c-4b6a-82e5-ec5bb521c6ac-logs\") pod \"barbican-keystone-listener-bf98f678b-j6t6g\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151367 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-config-data-custom\") pod \"barbican-keystone-listener-bf98f678b-j6t6g\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151435 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-config\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151465 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-config-data-custom\") pod \"barbican-worker-9f6f7ccc7-dwqzl\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " pod="openstack/barbican-worker-9f6f7ccc7-dwqzl"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151545 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-dns-swift-storage-0\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151569 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhljq\" (UniqueName: \"kubernetes.io/projected/35d7e901-239e-4015-951a-a8a044717b21-kube-api-access-vhljq\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151594 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aeb8bcb-4373-48d4-9ac6-e6472189e440-logs\") pod \"barbican-worker-9f6f7ccc7-dwqzl\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " pod="openstack/barbican-worker-9f6f7ccc7-dwqzl"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151639 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-config-data\") pod \"barbican-worker-9f6f7ccc7-dwqzl\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " pod="openstack/barbican-worker-9f6f7ccc7-dwqzl"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151657 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n6d2\" (UniqueName: \"kubernetes.io/projected/81790f67-278c-4b6a-82e5-ec5bb521c6ac-kube-api-access-9n6d2\") pod \"barbican-keystone-listener-bf98f678b-j6t6g\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151673 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-ovsdbserver-sb\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151735 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-combined-ca-bundle\") pod \"barbican-worker-9f6f7ccc7-dwqzl\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " pod="openstack/barbican-worker-9f6f7ccc7-dwqzl"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151751 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-ovsdbserver-nb\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151772 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-combined-ca-bundle\") pod \"barbican-keystone-listener-bf98f678b-j6t6g\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151802 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-config-data\") pod \"barbican-keystone-listener-bf98f678b-j6t6g\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.151822 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-dns-svc\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c"
Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.153739 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aeb8bcb-4373-48d4-9ac6-e6472189e440-logs\") pod \"barbican-worker-9f6f7ccc7-dwqzl\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " 
pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.158016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-combined-ca-bundle\") pod \"barbican-worker-9f6f7ccc7-dwqzl\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.158069 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-config-data-custom\") pod \"barbican-worker-9f6f7ccc7-dwqzl\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.160556 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-config-data\") pod \"barbican-worker-9f6f7ccc7-dwqzl\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.196641 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkhlf\" (UniqueName: \"kubernetes.io/projected/5aeb8bcb-4373-48d4-9ac6-e6472189e440-kube-api-access-lkhlf\") pod \"barbican-worker-9f6f7ccc7-dwqzl\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.247382 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6f96ddcd7d-r5nt7"] Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.249275 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.253486 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81790f67-278c-4b6a-82e5-ec5bb521c6ac-logs\") pod \"barbican-keystone-listener-bf98f678b-j6t6g\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.253582 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-config-data-custom\") pod \"barbican-keystone-listener-bf98f678b-j6t6g\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.253690 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-config\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.253791 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-dns-swift-storage-0\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.253856 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhljq\" (UniqueName: \"kubernetes.io/projected/35d7e901-239e-4015-951a-a8a044717b21-kube-api-access-vhljq\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: 
\"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.253942 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n6d2\" (UniqueName: \"kubernetes.io/projected/81790f67-278c-4b6a-82e5-ec5bb521c6ac-kube-api-access-9n6d2\") pod \"barbican-keystone-listener-bf98f678b-j6t6g\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.254036 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-ovsdbserver-sb\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.254133 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-ovsdbserver-nb\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.254199 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-combined-ca-bundle\") pod \"barbican-keystone-listener-bf98f678b-j6t6g\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.254271 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-config-data\") pod 
\"barbican-keystone-listener-bf98f678b-j6t6g\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.254336 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-dns-svc\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.255203 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81790f67-278c-4b6a-82e5-ec5bb521c6ac-logs\") pod \"barbican-keystone-listener-bf98f678b-j6t6g\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.255292 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-dns-svc\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.256204 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-ovsdbserver-sb\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.256767 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.256892 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-config\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.257591 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-dns-swift-storage-0\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.258171 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-ovsdbserver-nb\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.260704 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-combined-ca-bundle\") pod \"barbican-keystone-listener-bf98f678b-j6t6g\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.261774 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-config-data\") pod \"barbican-keystone-listener-bf98f678b-j6t6g\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.264239 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-config-data-custom\") pod \"barbican-keystone-listener-bf98f678b-j6t6g\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.269074 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.278332 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n6d2\" (UniqueName: \"kubernetes.io/projected/81790f67-278c-4b6a-82e5-ec5bb521c6ac-kube-api-access-9n6d2\") pod \"barbican-keystone-listener-bf98f678b-j6t6g\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.278611 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhljq\" (UniqueName: \"kubernetes.io/projected/35d7e901-239e-4015-951a-a8a044717b21-kube-api-access-vhljq\") pod \"dnsmasq-dns-9d49dd75f-mhb4c\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.282972 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f96ddcd7d-r5nt7"] Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.355237 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9wrs\" (UniqueName: \"kubernetes.io/projected/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-kube-api-access-c9wrs\") pod \"barbican-api-6f96ddcd7d-r5nt7\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.355290 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-config-data\") pod \"barbican-api-6f96ddcd7d-r5nt7\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.355327 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-config-data-custom\") pod \"barbican-api-6f96ddcd7d-r5nt7\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.355354 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-combined-ca-bundle\") pod \"barbican-api-6f96ddcd7d-r5nt7\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.355378 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-logs\") pod \"barbican-api-6f96ddcd7d-r5nt7\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.417735 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.453424 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.457210 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9wrs\" (UniqueName: \"kubernetes.io/projected/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-kube-api-access-c9wrs\") pod \"barbican-api-6f96ddcd7d-r5nt7\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.457270 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-config-data\") pod \"barbican-api-6f96ddcd7d-r5nt7\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.457325 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-config-data-custom\") pod \"barbican-api-6f96ddcd7d-r5nt7\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.457354 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-combined-ca-bundle\") pod \"barbican-api-6f96ddcd7d-r5nt7\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.457382 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-logs\") pod \"barbican-api-6f96ddcd7d-r5nt7\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " 
pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.458255 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-logs\") pod \"barbican-api-6f96ddcd7d-r5nt7\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.462144 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-config-data\") pod \"barbican-api-6f96ddcd7d-r5nt7\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.462264 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-config-data-custom\") pod \"barbican-api-6f96ddcd7d-r5nt7\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.464048 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-combined-ca-bundle\") pod \"barbican-api-6f96ddcd7d-r5nt7\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.473933 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9wrs\" (UniqueName: \"kubernetes.io/projected/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-kube-api-access-c9wrs\") pod \"barbican-api-6f96ddcd7d-r5nt7\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: 
I0219 18:49:56.577041 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.656292 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"005d604f-ced9-4b2e-aae7-1cac5398b880","Type":"ContainerStarted","Data":"3981e8ba101dc0e733451758abeb564cdd34a8ee4928fa93df480f496f607bfe"} Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.670650 4813 generic.go:334] "Generic (PLEG): container finished" podID="a87080d3-007c-48e0-aa89-b82c5d9dafab" containerID="2aad7eab96447a6cadb62e2950ffd040729196c936832856ae0117d0d29ac117" exitCode=0 Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.670690 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nnwfh" event={"ID":"a87080d3-007c-48e0-aa89-b82c5d9dafab","Type":"ContainerDied","Data":"2aad7eab96447a6cadb62e2950ffd040729196c936832856ae0117d0d29ac117"} Feb 19 18:49:56 crc kubenswrapper[4813]: W0219 18:49:56.725491 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5aeb8bcb_4373_48d4_9ac6_e6472189e440.slice/crio-4153383a658984fa3ebba097ecc8a95a05dc5f65aa988487fe746d1ce6979bb1 WatchSource:0}: Error finding container 4153383a658984fa3ebba097ecc8a95a05dc5f65aa988487fe746d1ce6979bb1: Status 404 returned error can't find the container with id 4153383a658984fa3ebba097ecc8a95a05dc5f65aa988487fe746d1ce6979bb1 Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.745288 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9f6f7ccc7-dwqzl"] Feb 19 18:49:56 crc kubenswrapper[4813]: W0219 18:49:56.866762 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81790f67_278c_4b6a_82e5_ec5bb521c6ac.slice/crio-4a9a96b9f255b2a2ab8e9274d60e36244e1129f48007f631834faf74f4ecde1e WatchSource:0}: Error finding container 4a9a96b9f255b2a2ab8e9274d60e36244e1129f48007f631834faf74f4ecde1e: Status 404 returned error can't find the container with id 4a9a96b9f255b2a2ab8e9274d60e36244e1129f48007f631834faf74f4ecde1e Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.878670 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-bf98f678b-j6t6g"] Feb 19 18:49:56 crc kubenswrapper[4813]: W0219 18:49:56.969832 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35d7e901_239e_4015_951a_a8a044717b21.slice/crio-8e6f5a96964dba8fd20a9845a18c51b1179a9ef6fd1cb0bd7c0fec509c019192 WatchSource:0}: Error finding container 8e6f5a96964dba8fd20a9845a18c51b1179a9ef6fd1cb0bd7c0fec509c019192: Status 404 returned error can't find the container with id 8e6f5a96964dba8fd20a9845a18c51b1179a9ef6fd1cb0bd7c0fec509c019192 Feb 19 18:49:56 crc kubenswrapper[4813]: I0219 18:49:56.977278 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9d49dd75f-mhb4c"] Feb 19 18:49:57 crc kubenswrapper[4813]: W0219 18:49:57.062054 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a9bf9a0_ffbe_47df_9e7a_7d33afc001e1.slice/crio-2478d0bc8e710678cb7039ad2351f4e883728f08bab2242028a980e5182c7c64 WatchSource:0}: Error finding container 2478d0bc8e710678cb7039ad2351f4e883728f08bab2242028a980e5182c7c64: Status 404 returned error can't find the container with id 2478d0bc8e710678cb7039ad2351f4e883728f08bab2242028a980e5182c7c64 Feb 19 18:49:57 crc kubenswrapper[4813]: I0219 18:49:57.062707 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-api-6f96ddcd7d-r5nt7"] Feb 19 18:49:57 crc kubenswrapper[4813]: I0219 18:49:57.693649 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" event={"ID":"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1","Type":"ContainerStarted","Data":"b5d1848c5c948f5ecdfed0aff2fe8bdb00c86151a565fd0ef767a3e18ea26c6c"} Feb 19 18:49:57 crc kubenswrapper[4813]: I0219 18:49:57.693844 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" event={"ID":"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1","Type":"ContainerStarted","Data":"787d274cf138fe60285ad37afb73be3c4f0032f5ecc749e03f67423dba5450bb"} Feb 19 18:49:57 crc kubenswrapper[4813]: I0219 18:49:57.693857 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" event={"ID":"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1","Type":"ContainerStarted","Data":"2478d0bc8e710678cb7039ad2351f4e883728f08bab2242028a980e5182c7c64"} Feb 19 18:49:57 crc kubenswrapper[4813]: I0219 18:49:57.694807 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:57 crc kubenswrapper[4813]: I0219 18:49:57.694828 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:49:57 crc kubenswrapper[4813]: I0219 18:49:57.697147 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" event={"ID":"81790f67-278c-4b6a-82e5-ec5bb521c6ac","Type":"ContainerStarted","Data":"4a9a96b9f255b2a2ab8e9274d60e36244e1129f48007f631834faf74f4ecde1e"} Feb 19 18:49:57 crc kubenswrapper[4813]: I0219 18:49:57.698682 4813 generic.go:334] "Generic (PLEG): container finished" podID="35d7e901-239e-4015-951a-a8a044717b21" containerID="4f5c2efe072017886ae56581c28fcc96249a7b2d5dedf552f858ae3820b28a48" exitCode=0 Feb 19 18:49:57 crc kubenswrapper[4813]: I0219 
18:49:57.698759 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" event={"ID":"35d7e901-239e-4015-951a-a8a044717b21","Type":"ContainerDied","Data":"4f5c2efe072017886ae56581c28fcc96249a7b2d5dedf552f858ae3820b28a48"} Feb 19 18:49:57 crc kubenswrapper[4813]: I0219 18:49:57.698801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" event={"ID":"35d7e901-239e-4015-951a-a8a044717b21","Type":"ContainerStarted","Data":"8e6f5a96964dba8fd20a9845a18c51b1179a9ef6fd1cb0bd7c0fec509c019192"} Feb 19 18:49:57 crc kubenswrapper[4813]: I0219 18:49:57.700929 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"005d604f-ced9-4b2e-aae7-1cac5398b880","Type":"ContainerStarted","Data":"089cb6cd95b4c514d410b3a5358746ddec405997e5cbf7f302c0b8c2303104d3"} Feb 19 18:49:57 crc kubenswrapper[4813]: I0219 18:49:57.700970 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"005d604f-ced9-4b2e-aae7-1cac5398b880","Type":"ContainerStarted","Data":"43d565c541a41da70248ed67715843bd5843ce149b092182d58128cdc186d943"} Feb 19 18:49:57 crc kubenswrapper[4813]: I0219 18:49:57.703043 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" event={"ID":"5aeb8bcb-4373-48d4-9ac6-e6472189e440","Type":"ContainerStarted","Data":"4153383a658984fa3ebba097ecc8a95a05dc5f65aa988487fe746d1ce6979bb1"} Feb 19 18:49:57 crc kubenswrapper[4813]: I0219 18:49:57.720172 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" podStartSLOduration=1.720157118 podStartE2EDuration="1.720157118s" podCreationTimestamp="2026-02-19 18:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:49:57.714792073 +0000 UTC m=+1216.940232614" 
watchObservedRunningTime="2026-02-19 18:49:57.720157118 +0000 UTC m=+1216.945597659" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.330983 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.390274 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-config-data\") pod \"a87080d3-007c-48e0-aa89-b82c5d9dafab\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.390338 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a87080d3-007c-48e0-aa89-b82c5d9dafab-etc-machine-id\") pod \"a87080d3-007c-48e0-aa89-b82c5d9dafab\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.390383 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-db-sync-config-data\") pod \"a87080d3-007c-48e0-aa89-b82c5d9dafab\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.390407 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-combined-ca-bundle\") pod \"a87080d3-007c-48e0-aa89-b82c5d9dafab\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.390447 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-scripts\") pod \"a87080d3-007c-48e0-aa89-b82c5d9dafab\" (UID: 
\"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.390523 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwdj6\" (UniqueName: \"kubernetes.io/projected/a87080d3-007c-48e0-aa89-b82c5d9dafab-kube-api-access-gwdj6\") pod \"a87080d3-007c-48e0-aa89-b82c5d9dafab\" (UID: \"a87080d3-007c-48e0-aa89-b82c5d9dafab\") " Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.396033 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a87080d3-007c-48e0-aa89-b82c5d9dafab-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a87080d3-007c-48e0-aa89-b82c5d9dafab" (UID: "a87080d3-007c-48e0-aa89-b82c5d9dafab"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.405425 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a87080d3-007c-48e0-aa89-b82c5d9dafab-kube-api-access-gwdj6" (OuterVolumeSpecName: "kube-api-access-gwdj6") pod "a87080d3-007c-48e0-aa89-b82c5d9dafab" (UID: "a87080d3-007c-48e0-aa89-b82c5d9dafab"). InnerVolumeSpecName "kube-api-access-gwdj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.406161 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a87080d3-007c-48e0-aa89-b82c5d9dafab" (UID: "a87080d3-007c-48e0-aa89-b82c5d9dafab"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.430523 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-scripts" (OuterVolumeSpecName: "scripts") pod "a87080d3-007c-48e0-aa89-b82c5d9dafab" (UID: "a87080d3-007c-48e0-aa89-b82c5d9dafab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.492070 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a87080d3-007c-48e0-aa89-b82c5d9dafab-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.492409 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.492422 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.492432 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwdj6\" (UniqueName: \"kubernetes.io/projected/a87080d3-007c-48e0-aa89-b82c5d9dafab-kube-api-access-gwdj6\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.493343 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a87080d3-007c-48e0-aa89-b82c5d9dafab" (UID: "a87080d3-007c-48e0-aa89-b82c5d9dafab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.541175 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-config-data" (OuterVolumeSpecName: "config-data") pod "a87080d3-007c-48e0-aa89-b82c5d9dafab" (UID: "a87080d3-007c-48e0-aa89-b82c5d9dafab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.593680 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.593712 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a87080d3-007c-48e0-aa89-b82c5d9dafab-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.712274 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nnwfh" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.712273 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nnwfh" event={"ID":"a87080d3-007c-48e0-aa89-b82c5d9dafab","Type":"ContainerDied","Data":"2c8235698e5d23d7a891e4ad8348e349bc9f7ef7320cecb4d8acafa6dfd507fc"} Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.712323 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c8235698e5d23d7a891e4ad8348e349bc9f7ef7320cecb4d8acafa6dfd507fc" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.954458 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 18:49:58 crc kubenswrapper[4813]: E0219 18:49:58.954794 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87080d3-007c-48e0-aa89-b82c5d9dafab" containerName="cinder-db-sync" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.954806 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87080d3-007c-48e0-aa89-b82c5d9dafab" containerName="cinder-db-sync" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.954982 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a87080d3-007c-48e0-aa89-b82c5d9dafab" containerName="cinder-db-sync" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.955799 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.960489 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6dt8c" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.960838 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.964430 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.967296 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 18:49:58 crc kubenswrapper[4813]: I0219 18:49:58.984869 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.000120 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9d49dd75f-mhb4c"] Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.040217 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw"] Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.044298 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.061387 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw"] Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.109049 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-dns-swift-storage-0\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.109104 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.109123 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-ovsdbserver-nb\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.109172 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-dns-svc\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.109188 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-config-data\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.109205 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae8a0c27-107e-49cf-a6c9-46429689f3de-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.109221 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq4h5\" (UniqueName: \"kubernetes.io/projected/ae8a0c27-107e-49cf-a6c9-46429689f3de-kube-api-access-wq4h5\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.109324 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-config\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.109391 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-ovsdbserver-sb\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.109512 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-j8prp\" (UniqueName: \"kubernetes.io/projected/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-kube-api-access-j8prp\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.109562 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-scripts\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.109623 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.179919 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.182047 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.185269 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.225337 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234133 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-dns-swift-storage-0\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234216 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6da9f233-0235-4895-b898-a75d5d4e11d6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234268 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234306 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-ovsdbserver-nb\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234366 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-dns-svc\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234393 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-config-data\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234421 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae8a0c27-107e-49cf-a6c9-46429689f3de-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234446 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq4h5\" (UniqueName: \"kubernetes.io/projected/ae8a0c27-107e-49cf-a6c9-46429689f3de-kube-api-access-wq4h5\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234470 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6da9f233-0235-4895-b898-a75d5d4e11d6-logs\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234547 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-scripts\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234572 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-config\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234624 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-ovsdbserver-sb\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234723 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnvht\" (UniqueName: \"kubernetes.io/projected/6da9f233-0235-4895-b898-a75d5d4e11d6-kube-api-access-pnvht\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234755 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-config-data\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234773 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8prp\" (UniqueName: \"kubernetes.io/projected/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-kube-api-access-j8prp\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" 
(UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234824 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-scripts\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234922 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.234985 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-config-data-custom\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.235025 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.237112 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae8a0c27-107e-49cf-a6c9-46429689f3de-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 
18:49:59.237705 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-config\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.239145 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-ovsdbserver-nb\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.241627 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-scripts\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.241768 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-ovsdbserver-sb\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.245308 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.255050 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq4h5\" (UniqueName: 
\"kubernetes.io/projected/ae8a0c27-107e-49cf-a6c9-46429689f3de-kube-api-access-wq4h5\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.255674 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-dns-svc\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.256094 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-config-data\") pod \"cinder-scheduler-0\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.256366 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-dns-swift-storage-0\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.256985 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8prp\" (UniqueName: \"kubernetes.io/projected/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-kube-api-access-j8prp\") pod \"dnsmasq-dns-6c8dc7b4d9-ltqxw\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") " pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.261785 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.296228 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.338375 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-config-data-custom\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.338459 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.338523 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6da9f233-0235-4895-b898-a75d5d4e11d6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.338584 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6da9f233-0235-4895-b898-a75d5d4e11d6-logs\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.338611 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-scripts\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " 
pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.338681 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnvht\" (UniqueName: \"kubernetes.io/projected/6da9f233-0235-4895-b898-a75d5d4e11d6-kube-api-access-pnvht\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.338698 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-config-data\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.339096 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6da9f233-0235-4895-b898-a75d5d4e11d6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.339466 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6da9f233-0235-4895-b898-a75d5d4e11d6-logs\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.346697 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-scripts\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.346922 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-config-data-custom\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.347051 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.348344 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-config-data\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.355731 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnvht\" (UniqueName: \"kubernetes.io/projected/6da9f233-0235-4895-b898-a75d5d4e11d6-kube-api-access-pnvht\") pod \"cinder-api-0\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.388930 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.532221 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.621048 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7c88676b6d-zlhlk"] Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.622746 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.628484 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.628703 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.642901 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c88676b6d-zlhlk"] Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.661519 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-config-data\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.661607 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-combined-ca-bundle\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.663853 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-public-tls-certs\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.663932 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f66ct\" 
(UniqueName: \"kubernetes.io/projected/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-kube-api-access-f66ct\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.663998 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-config-data-custom\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.664343 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-internal-tls-certs\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.664384 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-logs\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.747097 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" event={"ID":"81790f67-278c-4b6a-82e5-ec5bb521c6ac","Type":"ContainerStarted","Data":"d4ea4dbc1137b15bb3aa4bc83a3b37e503f5c54a481f8b73a7cfd507673900a8"} Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.747150 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" 
event={"ID":"81790f67-278c-4b6a-82e5-ec5bb521c6ac","Type":"ContainerStarted","Data":"479709b4af77750e6b92b1b1ddf45a2dfef9f8a888e9bcb59242645580afb4a9"} Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.749644 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" event={"ID":"35d7e901-239e-4015-951a-a8a044717b21","Type":"ContainerStarted","Data":"781e47ccdfd43687b08d3f9a31236d0a4810ea63e571806ef7dfb93b53072f8d"} Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.749836 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" podUID="35d7e901-239e-4015-951a-a8a044717b21" containerName="dnsmasq-dns" containerID="cri-o://781e47ccdfd43687b08d3f9a31236d0a4810ea63e571806ef7dfb93b53072f8d" gracePeriod=10 Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.750136 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.763556 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"005d604f-ced9-4b2e-aae7-1cac5398b880","Type":"ContainerStarted","Data":"35c718accd8ada1711268971319cc2a6093a2def08dcaa7dbd09b1262dd23d6e"} Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.763869 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.765730 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-internal-tls-certs\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.765770 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-logs\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.765839 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-config-data\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.765878 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-combined-ca-bundle\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.765919 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-public-tls-certs\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.765976 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f66ct\" (UniqueName: \"kubernetes.io/projected/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-kube-api-access-f66ct\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.766034 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-config-data-custom\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.771159 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-logs\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.774452 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" event={"ID":"5aeb8bcb-4373-48d4-9ac6-e6472189e440","Type":"ContainerStarted","Data":"7154bb37e5f6fe174542d1e2d97dda065217b924fa83469cc6cbb289e826d07a"} Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.774682 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" event={"ID":"5aeb8bcb-4373-48d4-9ac6-e6472189e440","Type":"ContainerStarted","Data":"52e141647daf76f468f278365830d8a4de021205dfe34eb192a73cab648f4e9a"} Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.775385 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.781197 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-config-data-custom\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.783050 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-combined-ca-bundle\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.784033 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-public-tls-certs\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.784222 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-config-data\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.789234 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-internal-tls-certs\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.791440 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f66ct\" (UniqueName: \"kubernetes.io/projected/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-kube-api-access-f66ct\") pod \"barbican-api-7c88676b6d-zlhlk\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.793240 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" podStartSLOduration=1.689499654 
podStartE2EDuration="3.793221914s" podCreationTimestamp="2026-02-19 18:49:56 +0000 UTC" firstStartedPulling="2026-02-19 18:49:56.869801407 +0000 UTC m=+1216.095241948" lastFinishedPulling="2026-02-19 18:49:58.973523667 +0000 UTC m=+1218.198964208" observedRunningTime="2026-02-19 18:49:59.784406742 +0000 UTC m=+1219.009847283" watchObservedRunningTime="2026-02-19 18:49:59.793221914 +0000 UTC m=+1219.018662455" Feb 19 18:49:59 crc kubenswrapper[4813]: W0219 18:49:59.797322 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae8a0c27_107e_49cf_a6c9_46429689f3de.slice/crio-46f42a9e8556fe582a24a2a3e76232a69021a13577f392de23466572b2ad8cef WatchSource:0}: Error finding container 46f42a9e8556fe582a24a2a3e76232a69021a13577f392de23466572b2ad8cef: Status 404 returned error can't find the container with id 46f42a9e8556fe582a24a2a3e76232a69021a13577f392de23466572b2ad8cef Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.830458 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.408393818 podStartE2EDuration="5.83041165s" podCreationTimestamp="2026-02-19 18:49:54 +0000 UTC" firstStartedPulling="2026-02-19 18:49:55.573564246 +0000 UTC m=+1214.799004787" lastFinishedPulling="2026-02-19 18:49:58.995582078 +0000 UTC m=+1218.221022619" observedRunningTime="2026-02-19 18:49:59.813251051 +0000 UTC m=+1219.038691602" watchObservedRunningTime="2026-02-19 18:49:59.83041165 +0000 UTC m=+1219.055852191" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.871879 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" podStartSLOduration=2.627221703 podStartE2EDuration="4.871856409s" podCreationTimestamp="2026-02-19 18:49:55 +0000 UTC" firstStartedPulling="2026-02-19 18:49:56.727884391 +0000 UTC m=+1215.953324922" lastFinishedPulling="2026-02-19 18:49:58.972519087 
+0000 UTC m=+1218.197959628" observedRunningTime="2026-02-19 18:49:59.840583444 +0000 UTC m=+1219.066023985" watchObservedRunningTime="2026-02-19 18:49:59.871856409 +0000 UTC m=+1219.097296950" Feb 19 18:49:59 crc kubenswrapper[4813]: E0219 18:49:59.911320 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35d7e901_239e_4015_951a_a8a044717b21.slice/crio-conmon-781e47ccdfd43687b08d3f9a31236d0a4810ea63e571806ef7dfb93b53072f8d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35d7e901_239e_4015_951a_a8a044717b21.slice/crio-781e47ccdfd43687b08d3f9a31236d0a4810ea63e571806ef7dfb93b53072f8d.scope\": RecentStats: unable to find data in memory cache]" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.918115 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" podStartSLOduration=3.918097884 podStartE2EDuration="3.918097884s" podCreationTimestamp="2026-02-19 18:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:49:59.866325687 +0000 UTC m=+1219.091766228" watchObservedRunningTime="2026-02-19 18:49:59.918097884 +0000 UTC m=+1219.143538425" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.972141 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:49:59 crc kubenswrapper[4813]: I0219 18:49:59.996196 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw"] Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.003649 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.332754 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.333523 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.333578 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.334586 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94eee4c6af3220d0f9daafe2c95225cd4a99afe2b2f03d1a34bc8f27e9e13151"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.334644 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" 
podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://94eee4c6af3220d0f9daafe2c95225cd4a99afe2b2f03d1a34bc8f27e9e13151" gracePeriod=600 Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.548308 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.564554 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c88676b6d-zlhlk"] Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.580563 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-ovsdbserver-nb\") pod \"35d7e901-239e-4015-951a-a8a044717b21\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.583750 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-ovsdbserver-sb\") pod \"35d7e901-239e-4015-951a-a8a044717b21\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.584003 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-dns-swift-storage-0\") pod \"35d7e901-239e-4015-951a-a8a044717b21\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.584076 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-config\") pod \"35d7e901-239e-4015-951a-a8a044717b21\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 
18:50:00.584144 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhljq\" (UniqueName: \"kubernetes.io/projected/35d7e901-239e-4015-951a-a8a044717b21-kube-api-access-vhljq\") pod \"35d7e901-239e-4015-951a-a8a044717b21\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.584219 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-dns-svc\") pod \"35d7e901-239e-4015-951a-a8a044717b21\" (UID: \"35d7e901-239e-4015-951a-a8a044717b21\") " Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.602145 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35d7e901-239e-4015-951a-a8a044717b21-kube-api-access-vhljq" (OuterVolumeSpecName: "kube-api-access-vhljq") pod "35d7e901-239e-4015-951a-a8a044717b21" (UID: "35d7e901-239e-4015-951a-a8a044717b21"). InnerVolumeSpecName "kube-api-access-vhljq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.639839 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "35d7e901-239e-4015-951a-a8a044717b21" (UID: "35d7e901-239e-4015-951a-a8a044717b21"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.655544 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "35d7e901-239e-4015-951a-a8a044717b21" (UID: "35d7e901-239e-4015-951a-a8a044717b21"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.672497 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "35d7e901-239e-4015-951a-a8a044717b21" (UID: "35d7e901-239e-4015-951a-a8a044717b21"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.688098 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.688132 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhljq\" (UniqueName: \"kubernetes.io/projected/35d7e901-239e-4015-951a-a8a044717b21-kube-api-access-vhljq\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.688141 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.688152 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.690218 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "35d7e901-239e-4015-951a-a8a044717b21" (UID: "35d7e901-239e-4015-951a-a8a044717b21"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.702043 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-config" (OuterVolumeSpecName: "config") pod "35d7e901-239e-4015-951a-a8a044717b21" (UID: "35d7e901-239e-4015-951a-a8a044717b21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.783757 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c88676b6d-zlhlk" event={"ID":"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30","Type":"ContainerStarted","Data":"5f9428e0811337010d65b13f1773b5a437c08b26ecadb9a88e8fd6cbf13fc6ed"} Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.785522 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae8a0c27-107e-49cf-a6c9-46429689f3de","Type":"ContainerStarted","Data":"46f42a9e8556fe582a24a2a3e76232a69021a13577f392de23466572b2ad8cef"} Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.790306 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.790326 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35d7e901-239e-4015-951a-a8a044717b21-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.790904 4813 generic.go:334] "Generic (PLEG): container finished" podID="35d7e901-239e-4015-951a-a8a044717b21" containerID="781e47ccdfd43687b08d3f9a31236d0a4810ea63e571806ef7dfb93b53072f8d" exitCode=0 Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.791186 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" event={"ID":"35d7e901-239e-4015-951a-a8a044717b21","Type":"ContainerDied","Data":"781e47ccdfd43687b08d3f9a31236d0a4810ea63e571806ef7dfb93b53072f8d"} Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.791214 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" event={"ID":"35d7e901-239e-4015-951a-a8a044717b21","Type":"ContainerDied","Data":"8e6f5a96964dba8fd20a9845a18c51b1179a9ef6fd1cb0bd7c0fec509c019192"} Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.791232 4813 scope.go:117] "RemoveContainer" containerID="781e47ccdfd43687b08d3f9a31236d0a4810ea63e571806ef7dfb93b53072f8d" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.791504 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9d49dd75f-mhb4c" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.808211 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="94eee4c6af3220d0f9daafe2c95225cd4a99afe2b2f03d1a34bc8f27e9e13151" exitCode=0 Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.808280 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"94eee4c6af3220d0f9daafe2c95225cd4a99afe2b2f03d1a34bc8f27e9e13151"} Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.825926 4813 generic.go:334] "Generic (PLEG): container finished" podID="dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" containerID="05799b021a1f38afb61377db7f0d244f5916a280faff16bcca5e207c5e9ec44e" exitCode=0 Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.826041 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" 
event={"ID":"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b","Type":"ContainerDied","Data":"05799b021a1f38afb61377db7f0d244f5916a280faff16bcca5e207c5e9ec44e"} Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.826076 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" event={"ID":"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b","Type":"ContainerStarted","Data":"87409e993151f2c2bbc335e7d311939fa641e804d19876e2f60c3789dd5c0cc9"} Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.842689 4813 scope.go:117] "RemoveContainer" containerID="4f5c2efe072017886ae56581c28fcc96249a7b2d5dedf552f858ae3820b28a48" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.843298 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9d49dd75f-mhb4c"] Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.857548 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6da9f233-0235-4895-b898-a75d5d4e11d6","Type":"ContainerStarted","Data":"c31cbadc2f7886074f0e4a9a8f3ee483ff5ad306577a967129a1c27659163212"} Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.904241 4813 scope.go:117] "RemoveContainer" containerID="781e47ccdfd43687b08d3f9a31236d0a4810ea63e571806ef7dfb93b53072f8d" Feb 19 18:50:00 crc kubenswrapper[4813]: E0219 18:50:00.923940 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"781e47ccdfd43687b08d3f9a31236d0a4810ea63e571806ef7dfb93b53072f8d\": container with ID starting with 781e47ccdfd43687b08d3f9a31236d0a4810ea63e571806ef7dfb93b53072f8d not found: ID does not exist" containerID="781e47ccdfd43687b08d3f9a31236d0a4810ea63e571806ef7dfb93b53072f8d" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.923998 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781e47ccdfd43687b08d3f9a31236d0a4810ea63e571806ef7dfb93b53072f8d"} err="failed to get 
container status \"781e47ccdfd43687b08d3f9a31236d0a4810ea63e571806ef7dfb93b53072f8d\": rpc error: code = NotFound desc = could not find container \"781e47ccdfd43687b08d3f9a31236d0a4810ea63e571806ef7dfb93b53072f8d\": container with ID starting with 781e47ccdfd43687b08d3f9a31236d0a4810ea63e571806ef7dfb93b53072f8d not found: ID does not exist" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.924022 4813 scope.go:117] "RemoveContainer" containerID="4f5c2efe072017886ae56581c28fcc96249a7b2d5dedf552f858ae3820b28a48" Feb 19 18:50:00 crc kubenswrapper[4813]: E0219 18:50:00.924330 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f5c2efe072017886ae56581c28fcc96249a7b2d5dedf552f858ae3820b28a48\": container with ID starting with 4f5c2efe072017886ae56581c28fcc96249a7b2d5dedf552f858ae3820b28a48 not found: ID does not exist" containerID="4f5c2efe072017886ae56581c28fcc96249a7b2d5dedf552f858ae3820b28a48" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.924347 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f5c2efe072017886ae56581c28fcc96249a7b2d5dedf552f858ae3820b28a48"} err="failed to get container status \"4f5c2efe072017886ae56581c28fcc96249a7b2d5dedf552f858ae3820b28a48\": rpc error: code = NotFound desc = could not find container \"4f5c2efe072017886ae56581c28fcc96249a7b2d5dedf552f858ae3820b28a48\": container with ID starting with 4f5c2efe072017886ae56581c28fcc96249a7b2d5dedf552f858ae3820b28a48 not found: ID does not exist" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.924359 4813 scope.go:117] "RemoveContainer" containerID="0c3855002c151cf8b5b2cf61ec6f6d7135091880565c3fee08603596d3342c68" Feb 19 18:50:00 crc kubenswrapper[4813]: I0219 18:50:00.924406 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9d49dd75f-mhb4c"] Feb 19 18:50:01 crc kubenswrapper[4813]: I0219 18:50:01.484415 4813 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35d7e901-239e-4015-951a-a8a044717b21" path="/var/lib/kubelet/pods/35d7e901-239e-4015-951a-a8a044717b21/volumes" Feb 19 18:50:01 crc kubenswrapper[4813]: I0219 18:50:01.877417 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae8a0c27-107e-49cf-a6c9-46429689f3de","Type":"ContainerStarted","Data":"b73bffbdb78c13ffb17d767c8bc8acb0aa526ef0a8f4d6ec4cb4bcaa1d7ac1b6"} Feb 19 18:50:01 crc kubenswrapper[4813]: I0219 18:50:01.882300 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"c0f644376cce138d79691366e77b885fec90be67e37a466be8ebae7b3478e829"} Feb 19 18:50:01 crc kubenswrapper[4813]: I0219 18:50:01.889709 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" event={"ID":"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b","Type":"ContainerStarted","Data":"6f157cff439cf9cd81cac1e259897c0f46a7ac0701b5998a6e0fb3e7cd22c487"} Feb 19 18:50:01 crc kubenswrapper[4813]: I0219 18:50:01.889844 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:50:01 crc kubenswrapper[4813]: I0219 18:50:01.892043 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6da9f233-0235-4895-b898-a75d5d4e11d6","Type":"ContainerStarted","Data":"fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce"} Feb 19 18:50:01 crc kubenswrapper[4813]: I0219 18:50:01.892088 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6da9f233-0235-4895-b898-a75d5d4e11d6","Type":"ContainerStarted","Data":"c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4"} Feb 19 18:50:01 crc kubenswrapper[4813]: I0219 18:50:01.892198 4813 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 18:50:01 crc kubenswrapper[4813]: I0219 18:50:01.894659 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c88676b6d-zlhlk" event={"ID":"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30","Type":"ContainerStarted","Data":"fe9cae1f29fb502eab2ed61c37be245fbecda7cbaa6a4d4b23a769893fe52d66"} Feb 19 18:50:01 crc kubenswrapper[4813]: I0219 18:50:01.894699 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c88676b6d-zlhlk" event={"ID":"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30","Type":"ContainerStarted","Data":"c497a249844ccadf079da17947184a1a812d81325a2ff39036321f46b9c5c309"} Feb 19 18:50:01 crc kubenswrapper[4813]: I0219 18:50:01.894820 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:50:01 crc kubenswrapper[4813]: I0219 18:50:01.894876 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:50:01 crc kubenswrapper[4813]: I0219 18:50:01.936603 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.936582426 podStartE2EDuration="2.936582426s" podCreationTimestamp="2026-02-19 18:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:50:01.925578487 +0000 UTC m=+1221.151019028" watchObservedRunningTime="2026-02-19 18:50:01.936582426 +0000 UTC m=+1221.162022967" Feb 19 18:50:01 crc kubenswrapper[4813]: I0219 18:50:01.974543 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7c88676b6d-zlhlk" podStartSLOduration=2.974496995 podStartE2EDuration="2.974496995s" podCreationTimestamp="2026-02-19 18:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:50:01.953373784 +0000 UTC m=+1221.178814325" watchObservedRunningTime="2026-02-19 18:50:01.974496995 +0000 UTC m=+1221.199937546" Feb 19 18:50:01 crc kubenswrapper[4813]: I0219 18:50:01.983874 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" podStartSLOduration=3.983855654 podStartE2EDuration="3.983855654s" podCreationTimestamp="2026-02-19 18:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:50:01.979862471 +0000 UTC m=+1221.205303012" watchObservedRunningTime="2026-02-19 18:50:01.983855654 +0000 UTC m=+1221.209296195" Feb 19 18:50:02 crc kubenswrapper[4813]: I0219 18:50:02.499968 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 18:50:02 crc kubenswrapper[4813]: I0219 18:50:02.904422 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae8a0c27-107e-49cf-a6c9-46429689f3de","Type":"ContainerStarted","Data":"1bdb72dfdf2d50015ed4fd85350d102b128cffc2a4785c9d6b3237c3ab260a54"} Feb 19 18:50:02 crc kubenswrapper[4813]: I0219 18:50:02.928457 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.984828744 podStartE2EDuration="4.928434271s" podCreationTimestamp="2026-02-19 18:49:58 +0000 UTC" firstStartedPulling="2026-02-19 18:49:59.806135732 +0000 UTC m=+1219.031576273" lastFinishedPulling="2026-02-19 18:50:00.749741259 +0000 UTC m=+1219.975181800" observedRunningTime="2026-02-19 18:50:02.926024847 +0000 UTC m=+1222.151465388" watchObservedRunningTime="2026-02-19 18:50:02.928434271 +0000 UTC m=+1222.153874822" Feb 19 18:50:03 crc kubenswrapper[4813]: I0219 18:50:03.913016 4813 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-api-0" podUID="6da9f233-0235-4895-b898-a75d5d4e11d6" containerName="cinder-api-log" containerID="cri-o://c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4" gracePeriod=30 Feb 19 18:50:03 crc kubenswrapper[4813]: I0219 18:50:03.913489 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6da9f233-0235-4895-b898-a75d5d4e11d6" containerName="cinder-api" containerID="cri-o://fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce" gracePeriod=30 Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.296615 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.541997 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.557772 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-scripts\") pod \"6da9f233-0235-4895-b898-a75d5d4e11d6\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.557828 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6da9f233-0235-4895-b898-a75d5d4e11d6-logs\") pod \"6da9f233-0235-4895-b898-a75d5d4e11d6\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.557922 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-combined-ca-bundle\") pod \"6da9f233-0235-4895-b898-a75d5d4e11d6\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.558035 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-config-data\") pod \"6da9f233-0235-4895-b898-a75d5d4e11d6\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.558085 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-config-data-custom\") pod \"6da9f233-0235-4895-b898-a75d5d4e11d6\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.558170 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6da9f233-0235-4895-b898-a75d5d4e11d6-etc-machine-id\") pod \"6da9f233-0235-4895-b898-a75d5d4e11d6\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.558202 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnvht\" (UniqueName: \"kubernetes.io/projected/6da9f233-0235-4895-b898-a75d5d4e11d6-kube-api-access-pnvht\") pod \"6da9f233-0235-4895-b898-a75d5d4e11d6\" (UID: \"6da9f233-0235-4895-b898-a75d5d4e11d6\") " Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.558255 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6da9f233-0235-4895-b898-a75d5d4e11d6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6da9f233-0235-4895-b898-a75d5d4e11d6" (UID: "6da9f233-0235-4895-b898-a75d5d4e11d6"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.558771 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6da9f233-0235-4895-b898-a75d5d4e11d6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.559049 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6da9f233-0235-4895-b898-a75d5d4e11d6-logs" (OuterVolumeSpecName: "logs") pod "6da9f233-0235-4895-b898-a75d5d4e11d6" (UID: "6da9f233-0235-4895-b898-a75d5d4e11d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.564602 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6da9f233-0235-4895-b898-a75d5d4e11d6" (UID: "6da9f233-0235-4895-b898-a75d5d4e11d6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.580173 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-scripts" (OuterVolumeSpecName: "scripts") pod "6da9f233-0235-4895-b898-a75d5d4e11d6" (UID: "6da9f233-0235-4895-b898-a75d5d4e11d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.590915 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6da9f233-0235-4895-b898-a75d5d4e11d6" (UID: "6da9f233-0235-4895-b898-a75d5d4e11d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.590998 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da9f233-0235-4895-b898-a75d5d4e11d6-kube-api-access-pnvht" (OuterVolumeSpecName: "kube-api-access-pnvht") pod "6da9f233-0235-4895-b898-a75d5d4e11d6" (UID: "6da9f233-0235-4895-b898-a75d5d4e11d6"). InnerVolumeSpecName "kube-api-access-pnvht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.646052 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-config-data" (OuterVolumeSpecName: "config-data") pod "6da9f233-0235-4895-b898-a75d5d4e11d6" (UID: "6da9f233-0235-4895-b898-a75d5d4e11d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.660476 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6da9f233-0235-4895-b898-a75d5d4e11d6-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.660522 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.660537 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.660548 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 
19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.660561 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnvht\" (UniqueName: \"kubernetes.io/projected/6da9f233-0235-4895-b898-a75d5d4e11d6-kube-api-access-pnvht\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.660572 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6da9f233-0235-4895-b898-a75d5d4e11d6-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.921929 4813 generic.go:334] "Generic (PLEG): container finished" podID="6da9f233-0235-4895-b898-a75d5d4e11d6" containerID="fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce" exitCode=0 Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.922212 4813 generic.go:334] "Generic (PLEG): container finished" podID="6da9f233-0235-4895-b898-a75d5d4e11d6" containerID="c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4" exitCode=143 Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.921993 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.922009 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6da9f233-0235-4895-b898-a75d5d4e11d6","Type":"ContainerDied","Data":"fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce"} Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.922363 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6da9f233-0235-4895-b898-a75d5d4e11d6","Type":"ContainerDied","Data":"c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4"} Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.922388 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6da9f233-0235-4895-b898-a75d5d4e11d6","Type":"ContainerDied","Data":"c31cbadc2f7886074f0e4a9a8f3ee483ff5ad306577a967129a1c27659163212"} Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.922407 4813 scope.go:117] "RemoveContainer" containerID="fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.946570 4813 scope.go:117] "RemoveContainer" containerID="c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.965718 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.985413 4813 scope.go:117] "RemoveContainer" containerID="fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce" Feb 19 18:50:04 crc kubenswrapper[4813]: E0219 18:50:04.993184 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce\": container with ID starting with fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce not found: ID 
does not exist" containerID="fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.993237 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce"} err="failed to get container status \"fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce\": rpc error: code = NotFound desc = could not find container \"fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce\": container with ID starting with fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce not found: ID does not exist" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.993262 4813 scope.go:117] "RemoveContainer" containerID="c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4" Feb 19 18:50:04 crc kubenswrapper[4813]: E0219 18:50:04.996092 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4\": container with ID starting with c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4 not found: ID does not exist" containerID="c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.996135 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4"} err="failed to get container status \"c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4\": rpc error: code = NotFound desc = could not find container \"c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4\": container with ID starting with c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4 not found: ID does not exist" Feb 19 18:50:04 crc kubenswrapper[4813]: I0219 18:50:04.996167 4813 
scope.go:117] "RemoveContainer" containerID="fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.000519 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce"} err="failed to get container status \"fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce\": rpc error: code = NotFound desc = could not find container \"fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce\": container with ID starting with fc8c3024758b8c91249caea8d0c09cd35fcfe17efa82ba5c14d9dfdd55b922ce not found: ID does not exist" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.000542 4813 scope.go:117] "RemoveContainer" containerID="c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.000935 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4"} err="failed to get container status \"c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4\": rpc error: code = NotFound desc = could not find container \"c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4\": container with ID starting with c9d4e6aeb5dcc18da7c1acec789d8c5d56ef79a06f59fa4060d5de8a4b8ea0a4 not found: ID does not exist" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.010707 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.031015 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 18:50:05 crc kubenswrapper[4813]: E0219 18:50:05.031421 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da9f233-0235-4895-b898-a75d5d4e11d6" containerName="cinder-api-log" Feb 19 18:50:05 crc 
kubenswrapper[4813]: I0219 18:50:05.031440 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da9f233-0235-4895-b898-a75d5d4e11d6" containerName="cinder-api-log" Feb 19 18:50:05 crc kubenswrapper[4813]: E0219 18:50:05.031460 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35d7e901-239e-4015-951a-a8a044717b21" containerName="init" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.031466 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d7e901-239e-4015-951a-a8a044717b21" containerName="init" Feb 19 18:50:05 crc kubenswrapper[4813]: E0219 18:50:05.031488 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35d7e901-239e-4015-951a-a8a044717b21" containerName="dnsmasq-dns" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.031497 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d7e901-239e-4015-951a-a8a044717b21" containerName="dnsmasq-dns" Feb 19 18:50:05 crc kubenswrapper[4813]: E0219 18:50:05.031506 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da9f233-0235-4895-b898-a75d5d4e11d6" containerName="cinder-api" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.031512 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da9f233-0235-4895-b898-a75d5d4e11d6" containerName="cinder-api" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.031671 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="35d7e901-239e-4015-951a-a8a044717b21" containerName="dnsmasq-dns" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.031687 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6da9f233-0235-4895-b898-a75d5d4e11d6" containerName="cinder-api" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.031700 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6da9f233-0235-4895-b898-a75d5d4e11d6" containerName="cinder-api-log" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.032654 4813 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.039617 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.039800 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.039945 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.046930 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.069075 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66vx6\" (UniqueName: \"kubernetes.io/projected/6ad2b86e-f285-4acc-a87b-18f97baf0294-kube-api-access-66vx6\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.069144 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ad2b86e-f285-4acc-a87b-18f97baf0294-logs\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.069172 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6ad2b86e-f285-4acc-a87b-18f97baf0294-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.069278 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.069396 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-scripts\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.069487 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.069520 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.069706 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-config-data\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.069908 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-config-data-custom\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.171672 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.171935 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-scripts\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.171990 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.172011 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.172063 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-config-data\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 
18:50:05.172118 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-config-data-custom\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.172145 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66vx6\" (UniqueName: \"kubernetes.io/projected/6ad2b86e-f285-4acc-a87b-18f97baf0294-kube-api-access-66vx6\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.172171 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ad2b86e-f285-4acc-a87b-18f97baf0294-logs\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.172185 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6ad2b86e-f285-4acc-a87b-18f97baf0294-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.172353 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6ad2b86e-f285-4acc-a87b-18f97baf0294-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.173169 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ad2b86e-f285-4acc-a87b-18f97baf0294-logs\") pod \"cinder-api-0\" (UID: 
\"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.176744 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.177995 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.178064 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-config-data-custom\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.178964 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.186447 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-config-data\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.190079 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66vx6\" (UniqueName: 
\"kubernetes.io/projected/6ad2b86e-f285-4acc-a87b-18f97baf0294-kube-api-access-66vx6\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.204447 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-scripts\") pod \"cinder-api-0\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.405413 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.520251 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6da9f233-0235-4895-b898-a75d5d4e11d6" path="/var/lib/kubelet/pods/6da9f233-0235-4895-b898-a75d5d4e11d6/volumes" Feb 19 18:50:05 crc kubenswrapper[4813]: I0219 18:50:05.977589 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 18:50:06 crc kubenswrapper[4813]: I0219 18:50:06.972783 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6ad2b86e-f285-4acc-a87b-18f97baf0294","Type":"ContainerStarted","Data":"4710529a2f6d6a9963aa3e46758c6cb9e333d01ce11646280e8ee29697fa5528"} Feb 19 18:50:06 crc kubenswrapper[4813]: I0219 18:50:06.973167 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6ad2b86e-f285-4acc-a87b-18f97baf0294","Type":"ContainerStarted","Data":"f1dc77219d1c1adfda5c2d76cc46b9034c857a0f449e7487a2ba80767355f097"} Feb 19 18:50:07 crc kubenswrapper[4813]: I0219 18:50:07.874719 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:50:07 crc kubenswrapper[4813]: I0219 18:50:07.928310 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:50:07 crc kubenswrapper[4813]: I0219 18:50:07.987351 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6ad2b86e-f285-4acc-a87b-18f97baf0294","Type":"ContainerStarted","Data":"e4b7cafb1bd9bfd44873b9606ca95255b1dc06343b23e514ba497ba165a365d6"} Feb 19 18:50:08 crc kubenswrapper[4813]: I0219 18:50:08.024257 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.024237205 podStartE2EDuration="4.024237205s" podCreationTimestamp="2026-02-19 18:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:50:08.011170292 +0000 UTC m=+1227.236610873" watchObservedRunningTime="2026-02-19 18:50:08.024237205 +0000 UTC m=+1227.249677756" Feb 19 18:50:08 crc kubenswrapper[4813]: I0219 18:50:08.996829 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 18:50:09 crc kubenswrapper[4813]: I0219 18:50:09.390263 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" Feb 19 18:50:09 crc kubenswrapper[4813]: I0219 18:50:09.454241 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-db5c97f8f-96mx5"] Feb 19 18:50:09 crc kubenswrapper[4813]: I0219 18:50:09.454510 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" podUID="f3273f47-8ad4-42ab-b905-a55e4e23400f" containerName="dnsmasq-dns" containerID="cri-o://89fb06a84ee1eb1f8794842873d2c140f6c67f316fd68d04b670483d239c1f4d" gracePeriod=10 Feb 19 18:50:09 crc kubenswrapper[4813]: I0219 18:50:09.585506 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 18:50:09 crc kubenswrapper[4813]: I0219 
18:50:09.646203 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.030239 4813 generic.go:334] "Generic (PLEG): container finished" podID="f3273f47-8ad4-42ab-b905-a55e4e23400f" containerID="89fb06a84ee1eb1f8794842873d2c140f6c67f316fd68d04b670483d239c1f4d" exitCode=0 Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.030277 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" event={"ID":"f3273f47-8ad4-42ab-b905-a55e4e23400f","Type":"ContainerDied","Data":"89fb06a84ee1eb1f8794842873d2c140f6c67f316fd68d04b670483d239c1f4d"} Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.031303 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ae8a0c27-107e-49cf-a6c9-46429689f3de" containerName="cinder-scheduler" containerID="cri-o://b73bffbdb78c13ffb17d767c8bc8acb0aa526ef0a8f4d6ec4cb4bcaa1d7ac1b6" gracePeriod=30 Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.031486 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ae8a0c27-107e-49cf-a6c9-46429689f3de" containerName="probe" containerID="cri-o://1bdb72dfdf2d50015ed4fd85350d102b128cffc2a4785c9d6b3237c3ab260a54" gracePeriod=30 Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.152844 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.182637 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-config\") pod \"f3273f47-8ad4-42ab-b905-a55e4e23400f\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.182814 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-ovsdbserver-sb\") pod \"f3273f47-8ad4-42ab-b905-a55e4e23400f\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.183411 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-dns-svc\") pod \"f3273f47-8ad4-42ab-b905-a55e4e23400f\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.183443 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-dns-swift-storage-0\") pod \"f3273f47-8ad4-42ab-b905-a55e4e23400f\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.183511 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-444fb\" (UniqueName: \"kubernetes.io/projected/f3273f47-8ad4-42ab-b905-a55e4e23400f-kube-api-access-444fb\") pod \"f3273f47-8ad4-42ab-b905-a55e4e23400f\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.183578 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-ovsdbserver-nb\") pod \"f3273f47-8ad4-42ab-b905-a55e4e23400f\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.221373 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5777547648-br5pd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.224174 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3273f47-8ad4-42ab-b905-a55e4e23400f-kube-api-access-444fb" (OuterVolumeSpecName: "kube-api-access-444fb") pod "f3273f47-8ad4-42ab-b905-a55e4e23400f" (UID: "f3273f47-8ad4-42ab-b905-a55e4e23400f"). InnerVolumeSpecName "kube-api-access-444fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.274349 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f3273f47-8ad4-42ab-b905-a55e4e23400f" (UID: "f3273f47-8ad4-42ab-b905-a55e4e23400f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.284010 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f3273f47-8ad4-42ab-b905-a55e4e23400f" (UID: "f3273f47-8ad4-42ab-b905-a55e4e23400f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.284620 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-ovsdbserver-nb\") pod \"f3273f47-8ad4-42ab-b905-a55e4e23400f\" (UID: \"f3273f47-8ad4-42ab-b905-a55e4e23400f\") " Feb 19 18:50:10 crc kubenswrapper[4813]: W0219 18:50:10.284881 4813 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f3273f47-8ad4-42ab-b905-a55e4e23400f/volumes/kubernetes.io~configmap/ovsdbserver-nb Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.284899 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f3273f47-8ad4-42ab-b905-a55e4e23400f" (UID: "f3273f47-8ad4-42ab-b905-a55e4e23400f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.285555 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.285575 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-444fb\" (UniqueName: \"kubernetes.io/projected/f3273f47-8ad4-42ab-b905-a55e4e23400f-kube-api-access-444fb\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.285586 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.327558 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3273f47-8ad4-42ab-b905-a55e4e23400f" (UID: "f3273f47-8ad4-42ab-b905-a55e4e23400f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.337853 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f3273f47-8ad4-42ab-b905-a55e4e23400f" (UID: "f3273f47-8ad4-42ab-b905-a55e4e23400f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.355609 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5777547648-br5pd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.359936 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-config" (OuterVolumeSpecName: "config") pod "f3273f47-8ad4-42ab-b905-a55e4e23400f" (UID: "f3273f47-8ad4-42ab-b905-a55e4e23400f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.387207 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.387238 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.387248 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3273f47-8ad4-42ab-b905-a55e4e23400f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.637679 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d474bcd44-n9tsd"] Feb 19 18:50:10 crc kubenswrapper[4813]: E0219 18:50:10.638024 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3273f47-8ad4-42ab-b905-a55e4e23400f" containerName="dnsmasq-dns" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.638040 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3273f47-8ad4-42ab-b905-a55e4e23400f" containerName="dnsmasq-dns" Feb 19 
18:50:10 crc kubenswrapper[4813]: E0219 18:50:10.638057 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3273f47-8ad4-42ab-b905-a55e4e23400f" containerName="init" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.638064 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3273f47-8ad4-42ab-b905-a55e4e23400f" containerName="init" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.638232 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3273f47-8ad4-42ab-b905-a55e4e23400f" containerName="dnsmasq-dns" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.639132 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.655264 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d474bcd44-n9tsd"] Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.692663 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djqtd\" (UniqueName: \"kubernetes.io/projected/6f4b651a-00cc-4ca7-b49e-713eed4968b9-kube-api-access-djqtd\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.692706 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-config-data\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.692729 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-internal-tls-certs\") 
pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.692793 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-combined-ca-bundle\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.692825 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f4b651a-00cc-4ca7-b49e-713eed4968b9-logs\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.692849 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-scripts\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.692865 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-public-tls-certs\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.797145 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-combined-ca-bundle\") pod \"placement-d474bcd44-n9tsd\" 
(UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.797233 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f4b651a-00cc-4ca7-b49e-713eed4968b9-logs\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.797287 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-scripts\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.797315 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-public-tls-certs\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.797473 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djqtd\" (UniqueName: \"kubernetes.io/projected/6f4b651a-00cc-4ca7-b49e-713eed4968b9-kube-api-access-djqtd\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.797498 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-config-data\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc 
kubenswrapper[4813]: I0219 18:50:10.797538 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-internal-tls-certs\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.799341 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f4b651a-00cc-4ca7-b49e-713eed4968b9-logs\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.811529 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-scripts\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.811687 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-public-tls-certs\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.812119 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-internal-tls-certs\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.812145 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-combined-ca-bundle\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.813502 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-config-data\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.820376 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djqtd\" (UniqueName: \"kubernetes.io/projected/6f4b651a-00cc-4ca7-b49e-713eed4968b9-kube-api-access-djqtd\") pod \"placement-d474bcd44-n9tsd\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:10 crc kubenswrapper[4813]: I0219 18:50:10.953689 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:11 crc kubenswrapper[4813]: I0219 18:50:11.045799 4813 generic.go:334] "Generic (PLEG): container finished" podID="ae8a0c27-107e-49cf-a6c9-46429689f3de" containerID="1bdb72dfdf2d50015ed4fd85350d102b128cffc2a4785c9d6b3237c3ab260a54" exitCode=0 Feb 19 18:50:11 crc kubenswrapper[4813]: I0219 18:50:11.045901 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae8a0c27-107e-49cf-a6c9-46429689f3de","Type":"ContainerDied","Data":"1bdb72dfdf2d50015ed4fd85350d102b128cffc2a4785c9d6b3237c3ab260a54"} Feb 19 18:50:11 crc kubenswrapper[4813]: I0219 18:50:11.048303 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" Feb 19 18:50:11 crc kubenswrapper[4813]: I0219 18:50:11.048344 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-db5c97f8f-96mx5" event={"ID":"f3273f47-8ad4-42ab-b905-a55e4e23400f","Type":"ContainerDied","Data":"7ed945e781aaa414b839f98307d8c78316df0496987bdcf26488256c67ca1d4d"} Feb 19 18:50:11 crc kubenswrapper[4813]: I0219 18:50:11.048384 4813 scope.go:117] "RemoveContainer" containerID="89fb06a84ee1eb1f8794842873d2c140f6c67f316fd68d04b670483d239c1f4d" Feb 19 18:50:11 crc kubenswrapper[4813]: I0219 18:50:11.128747 4813 scope.go:117] "RemoveContainer" containerID="af6a2aea5c455e1c251c9ee0bdc5038ff3b28b3749d43d0e2b7495a6590ac959" Feb 19 18:50:11 crc kubenswrapper[4813]: I0219 18:50:11.137150 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-db5c97f8f-96mx5"] Feb 19 18:50:11 crc kubenswrapper[4813]: I0219 18:50:11.148790 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-db5c97f8f-96mx5"] Feb 19 18:50:11 crc kubenswrapper[4813]: I0219 18:50:11.484632 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3273f47-8ad4-42ab-b905-a55e4e23400f" path="/var/lib/kubelet/pods/f3273f47-8ad4-42ab-b905-a55e4e23400f/volumes" Feb 19 18:50:11 crc kubenswrapper[4813]: I0219 18:50:11.487002 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d474bcd44-n9tsd"] Feb 19 18:50:11 crc kubenswrapper[4813]: W0219 18:50:11.504351 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f4b651a_00cc_4ca7_b49e_713eed4968b9.slice/crio-03b6461108312cfdc52b9cab3920f5a21a37b85e6667057ffdb75a0d01c0018a WatchSource:0}: Error finding container 03b6461108312cfdc52b9cab3920f5a21a37b85e6667057ffdb75a0d01c0018a: Status 404 returned error can't find the container with id 
03b6461108312cfdc52b9cab3920f5a21a37b85e6667057ffdb75a0d01c0018a Feb 19 18:50:11 crc kubenswrapper[4813]: I0219 18:50:11.617940 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:50:11 crc kubenswrapper[4813]: I0219 18:50:11.795314 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:50:11 crc kubenswrapper[4813]: I0219 18:50:11.848147 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6f96ddcd7d-r5nt7"] Feb 19 18:50:11 crc kubenswrapper[4813]: I0219 18:50:11.848633 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" podUID="9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" containerName="barbican-api-log" containerID="cri-o://787d274cf138fe60285ad37afb73be3c4f0032f5ecc749e03f67423dba5450bb" gracePeriod=30 Feb 19 18:50:11 crc kubenswrapper[4813]: I0219 18:50:11.849069 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" podUID="9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" containerName="barbican-api" containerID="cri-o://b5d1848c5c948f5ecdfed0aff2fe8bdb00c86151a565fd0ef767a3e18ea26c6c" gracePeriod=30 Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.104825 4813 generic.go:334] "Generic (PLEG): container finished" podID="9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" containerID="787d274cf138fe60285ad37afb73be3c4f0032f5ecc749e03f67423dba5450bb" exitCode=143 Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.104904 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" event={"ID":"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1","Type":"ContainerDied","Data":"787d274cf138fe60285ad37afb73be3c4f0032f5ecc749e03f67423dba5450bb"} Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.110754 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/placement-d474bcd44-n9tsd" event={"ID":"6f4b651a-00cc-4ca7-b49e-713eed4968b9","Type":"ContainerStarted","Data":"61fdb19e85db0c41232232581262b2f03bee939d644f27002a6fbcc6eee839c7"} Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.110793 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d474bcd44-n9tsd" event={"ID":"6f4b651a-00cc-4ca7-b49e-713eed4968b9","Type":"ContainerStarted","Data":"bf8c06468da19346eec65138cd7874af7a212840ee1fcdf4cb2cd44182970cc3"} Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.110804 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d474bcd44-n9tsd" event={"ID":"6f4b651a-00cc-4ca7-b49e-713eed4968b9","Type":"ContainerStarted","Data":"03b6461108312cfdc52b9cab3920f5a21a37b85e6667057ffdb75a0d01c0018a"} Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.111283 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.111340 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.140772 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d474bcd44-n9tsd" podStartSLOduration=2.140752292 podStartE2EDuration="2.140752292s" podCreationTimestamp="2026-02-19 18:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:50:12.136344207 +0000 UTC m=+1231.361784748" watchObservedRunningTime="2026-02-19 18:50:12.140752292 +0000 UTC m=+1231.366192833" Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.166196 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.447532 4813 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.868920 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.937412 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-scripts\") pod \"ae8a0c27-107e-49cf-a6c9-46429689f3de\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.937464 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq4h5\" (UniqueName: \"kubernetes.io/projected/ae8a0c27-107e-49cf-a6c9-46429689f3de-kube-api-access-wq4h5\") pod \"ae8a0c27-107e-49cf-a6c9-46429689f3de\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.937548 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-combined-ca-bundle\") pod \"ae8a0c27-107e-49cf-a6c9-46429689f3de\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.937576 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-config-data\") pod \"ae8a0c27-107e-49cf-a6c9-46429689f3de\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.937621 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-config-data-custom\") pod 
\"ae8a0c27-107e-49cf-a6c9-46429689f3de\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.937669 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae8a0c27-107e-49cf-a6c9-46429689f3de-etc-machine-id\") pod \"ae8a0c27-107e-49cf-a6c9-46429689f3de\" (UID: \"ae8a0c27-107e-49cf-a6c9-46429689f3de\") " Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.938132 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae8a0c27-107e-49cf-a6c9-46429689f3de-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ae8a0c27-107e-49cf-a6c9-46429689f3de" (UID: "ae8a0c27-107e-49cf-a6c9-46429689f3de"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.951647 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-scripts" (OuterVolumeSpecName: "scripts") pod "ae8a0c27-107e-49cf-a6c9-46429689f3de" (UID: "ae8a0c27-107e-49cf-a6c9-46429689f3de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.951694 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ae8a0c27-107e-49cf-a6c9-46429689f3de" (UID: "ae8a0c27-107e-49cf-a6c9-46429689f3de"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.951719 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae8a0c27-107e-49cf-a6c9-46429689f3de-kube-api-access-wq4h5" (OuterVolumeSpecName: "kube-api-access-wq4h5") pod "ae8a0c27-107e-49cf-a6c9-46429689f3de" (UID: "ae8a0c27-107e-49cf-a6c9-46429689f3de"). InnerVolumeSpecName "kube-api-access-wq4h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:12 crc kubenswrapper[4813]: I0219 18:50:12.991427 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae8a0c27-107e-49cf-a6c9-46429689f3de" (UID: "ae8a0c27-107e-49cf-a6c9-46429689f3de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.028800 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-config-data" (OuterVolumeSpecName: "config-data") pod "ae8a0c27-107e-49cf-a6c9-46429689f3de" (UID: "ae8a0c27-107e-49cf-a6c9-46429689f3de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.039598 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.039705 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae8a0c27-107e-49cf-a6c9-46429689f3de-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.039759 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.039830 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq4h5\" (UniqueName: \"kubernetes.io/projected/ae8a0c27-107e-49cf-a6c9-46429689f3de-kube-api-access-wq4h5\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.039886 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.040144 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8a0c27-107e-49cf-a6c9-46429689f3de-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.133987 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.134118 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae8a0c27-107e-49cf-a6c9-46429689f3de","Type":"ContainerDied","Data":"b73bffbdb78c13ffb17d767c8bc8acb0aa526ef0a8f4d6ec4cb4bcaa1d7ac1b6"} Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.134178 4813 scope.go:117] "RemoveContainer" containerID="1bdb72dfdf2d50015ed4fd85350d102b128cffc2a4785c9d6b3237c3ab260a54" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.133924 4813 generic.go:334] "Generic (PLEG): container finished" podID="ae8a0c27-107e-49cf-a6c9-46429689f3de" containerID="b73bffbdb78c13ffb17d767c8bc8acb0aa526ef0a8f4d6ec4cb4bcaa1d7ac1b6" exitCode=0 Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.136068 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae8a0c27-107e-49cf-a6c9-46429689f3de","Type":"ContainerDied","Data":"46f42a9e8556fe582a24a2a3e76232a69021a13577f392de23466572b2ad8cef"} Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.164212 4813 scope.go:117] "RemoveContainer" containerID="b73bffbdb78c13ffb17d767c8bc8acb0aa526ef0a8f4d6ec4cb4bcaa1d7ac1b6" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.173123 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.184695 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.190443 4813 scope.go:117] "RemoveContainer" containerID="1bdb72dfdf2d50015ed4fd85350d102b128cffc2a4785c9d6b3237c3ab260a54" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.194235 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 18:50:13 crc kubenswrapper[4813]: E0219 18:50:13.194422 4813 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bdb72dfdf2d50015ed4fd85350d102b128cffc2a4785c9d6b3237c3ab260a54\": container with ID starting with 1bdb72dfdf2d50015ed4fd85350d102b128cffc2a4785c9d6b3237c3ab260a54 not found: ID does not exist" containerID="1bdb72dfdf2d50015ed4fd85350d102b128cffc2a4785c9d6b3237c3ab260a54" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.194471 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bdb72dfdf2d50015ed4fd85350d102b128cffc2a4785c9d6b3237c3ab260a54"} err="failed to get container status \"1bdb72dfdf2d50015ed4fd85350d102b128cffc2a4785c9d6b3237c3ab260a54\": rpc error: code = NotFound desc = could not find container \"1bdb72dfdf2d50015ed4fd85350d102b128cffc2a4785c9d6b3237c3ab260a54\": container with ID starting with 1bdb72dfdf2d50015ed4fd85350d102b128cffc2a4785c9d6b3237c3ab260a54 not found: ID does not exist" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.194505 4813 scope.go:117] "RemoveContainer" containerID="b73bffbdb78c13ffb17d767c8bc8acb0aa526ef0a8f4d6ec4cb4bcaa1d7ac1b6" Feb 19 18:50:13 crc kubenswrapper[4813]: E0219 18:50:13.194667 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8a0c27-107e-49cf-a6c9-46429689f3de" containerName="probe" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.194688 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8a0c27-107e-49cf-a6c9-46429689f3de" containerName="probe" Feb 19 18:50:13 crc kubenswrapper[4813]: E0219 18:50:13.194730 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8a0c27-107e-49cf-a6c9-46429689f3de" containerName="cinder-scheduler" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.194737 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8a0c27-107e-49cf-a6c9-46429689f3de" containerName="cinder-scheduler" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.194903 
4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae8a0c27-107e-49cf-a6c9-46429689f3de" containerName="probe" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.194921 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae8a0c27-107e-49cf-a6c9-46429689f3de" containerName="cinder-scheduler" Feb 19 18:50:13 crc kubenswrapper[4813]: E0219 18:50:13.195302 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b73bffbdb78c13ffb17d767c8bc8acb0aa526ef0a8f4d6ec4cb4bcaa1d7ac1b6\": container with ID starting with b73bffbdb78c13ffb17d767c8bc8acb0aa526ef0a8f4d6ec4cb4bcaa1d7ac1b6 not found: ID does not exist" containerID="b73bffbdb78c13ffb17d767c8bc8acb0aa526ef0a8f4d6ec4cb4bcaa1d7ac1b6" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.195332 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73bffbdb78c13ffb17d767c8bc8acb0aa526ef0a8f4d6ec4cb4bcaa1d7ac1b6"} err="failed to get container status \"b73bffbdb78c13ffb17d767c8bc8acb0aa526ef0a8f4d6ec4cb4bcaa1d7ac1b6\": rpc error: code = NotFound desc = could not find container \"b73bffbdb78c13ffb17d767c8bc8acb0aa526ef0a8f4d6ec4cb4bcaa1d7ac1b6\": container with ID starting with b73bffbdb78c13ffb17d767c8bc8acb0aa526ef0a8f4d6ec4cb4bcaa1d7ac1b6 not found: ID does not exist" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.195811 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.199331 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.206247 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.246555 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.246918 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxwt9\" (UniqueName: \"kubernetes.io/projected/04885075-4d84-445b-b7c8-b6afaeb71600-kube-api-access-vxwt9\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.247026 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-config-data\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.247117 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-scripts\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.247205 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.247287 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04885075-4d84-445b-b7c8-b6afaeb71600-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.349455 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxwt9\" (UniqueName: \"kubernetes.io/projected/04885075-4d84-445b-b7c8-b6afaeb71600-kube-api-access-vxwt9\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.350252 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-config-data\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.351027 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-scripts\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.351400 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.351526 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04885075-4d84-445b-b7c8-b6afaeb71600-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.351682 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04885075-4d84-445b-b7c8-b6afaeb71600-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.351820 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.355971 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.356596 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-config-data\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " 
pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.361395 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.362911 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-scripts\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.373552 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxwt9\" (UniqueName: \"kubernetes.io/projected/04885075-4d84-445b-b7c8-b6afaeb71600-kube-api-access-vxwt9\") pod \"cinder-scheduler-0\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " pod="openstack/cinder-scheduler-0" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.483386 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae8a0c27-107e-49cf-a6c9-46429689f3de" path="/var/lib/kubelet/pods/ae8a0c27-107e-49cf-a6c9-46429689f3de/volumes" Feb 19 18:50:13 crc kubenswrapper[4813]: I0219 18:50:13.530484 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 18:50:14 crc kubenswrapper[4813]: I0219 18:50:14.006269 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 18:50:14 crc kubenswrapper[4813]: I0219 18:50:14.041374 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:50:14 crc kubenswrapper[4813]: I0219 18:50:14.120392 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66787bd68b-jd5l8"] Feb 19 18:50:14 crc kubenswrapper[4813]: I0219 18:50:14.120583 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66787bd68b-jd5l8" podUID="71bd7206-d9dd-40e7-a991-c5cf107989f4" containerName="neutron-api" containerID="cri-o://af026d010d31eb078c48b991d55c4e3b17bec908c6df634652a26ae1154003aa" gracePeriod=30 Feb 19 18:50:14 crc kubenswrapper[4813]: I0219 18:50:14.120983 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66787bd68b-jd5l8" podUID="71bd7206-d9dd-40e7-a991-c5cf107989f4" containerName="neutron-httpd" containerID="cri-o://be42f7d08861220021c70e71acbf687aef181911a7cf12a8a39a0b4bb04847e0" gracePeriod=30 Feb 19 18:50:14 crc kubenswrapper[4813]: I0219 18:50:14.152235 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04885075-4d84-445b-b7c8-b6afaeb71600","Type":"ContainerStarted","Data":"bf20266e4cfe55b7c0fb276116a695f1394fff8979737a421fee8025a90a67f5"} Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.183312 4813 generic.go:334] "Generic (PLEG): container finished" podID="71bd7206-d9dd-40e7-a991-c5cf107989f4" containerID="be42f7d08861220021c70e71acbf687aef181911a7cf12a8a39a0b4bb04847e0" exitCode=0 Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.183386 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66787bd68b-jd5l8" 
event={"ID":"71bd7206-d9dd-40e7-a991-c5cf107989f4","Type":"ContainerDied","Data":"be42f7d08861220021c70e71acbf687aef181911a7cf12a8a39a0b4bb04847e0"} Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.189126 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04885075-4d84-445b-b7c8-b6afaeb71600","Type":"ContainerStarted","Data":"13cc33bfb33bb924ac9f9d0035a948ba6c69e8d3c3ca76d6f358973425452794"} Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.519160 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.614998 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 18:50:15 crc kubenswrapper[4813]: E0219 18:50:15.615372 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" containerName="barbican-api" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.615389 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" containerName="barbican-api" Feb 19 18:50:15 crc kubenswrapper[4813]: E0219 18:50:15.615407 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" containerName="barbican-api-log" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.615413 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" containerName="barbican-api-log" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.615572 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" containerName="barbican-api-log" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.615588 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" containerName="barbican-api" Feb 19 
18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.616171 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.618824 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.619014 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.619214 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9lhxn" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.626564 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.691477 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-config-data\") pod \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.691545 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-config-data-custom\") pod \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.691655 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-logs\") pod \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.691695 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-c9wrs\" (UniqueName: \"kubernetes.io/projected/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-kube-api-access-c9wrs\") pod \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.691752 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-combined-ca-bundle\") pod \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\" (UID: \"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1\") " Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.695690 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-logs" (OuterVolumeSpecName: "logs") pod "9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" (UID: "9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.698597 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" (UID: "9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.699708 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-kube-api-access-c9wrs" (OuterVolumeSpecName: "kube-api-access-c9wrs") pod "9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" (UID: "9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1"). InnerVolumeSpecName "kube-api-access-c9wrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.723206 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" (UID: "9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.747835 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-config-data" (OuterVolumeSpecName: "config-data") pod "9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" (UID: "9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.785778 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 19 18:50:15 crc kubenswrapper[4813]: E0219 18:50:15.786422 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-k5kk7 openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="8fa613c1-f6b2-4e0d-8837-a474117fa68f" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.793287 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa613c1-f6b2-4e0d-8837-a474117fa68f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\") " pod="openstack/openstackclient" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.793355 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8fa613c1-f6b2-4e0d-8837-a474117fa68f-openstack-config-secret\") pod \"openstackclient\" (UID: \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\") " pod="openstack/openstackclient" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.793436 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5kk7\" (UniqueName: \"kubernetes.io/projected/8fa613c1-f6b2-4e0d-8837-a474117fa68f-kube-api-access-k5kk7\") pod \"openstackclient\" (UID: \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\") " pod="openstack/openstackclient" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.793720 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8fa613c1-f6b2-4e0d-8837-a474117fa68f-openstack-config\") pod \"openstackclient\" (UID: \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\") " pod="openstack/openstackclient" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.793880 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.793900 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9wrs\" (UniqueName: \"kubernetes.io/projected/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-kube-api-access-c9wrs\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.793911 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.793930 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.793939 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.794468 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.813326 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.814723 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.873009 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.899907 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa613c1-f6b2-4e0d-8837-a474117fa68f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\") " pod="openstack/openstackclient" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.899999 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8fa613c1-f6b2-4e0d-8837-a474117fa68f-openstack-config-secret\") pod \"openstackclient\" (UID: \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\") " pod="openstack/openstackclient" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.900024 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5kk7\" (UniqueName: 
\"kubernetes.io/projected/8fa613c1-f6b2-4e0d-8837-a474117fa68f-kube-api-access-k5kk7\") pod \"openstackclient\" (UID: \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\") " pod="openstack/openstackclient" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.900091 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8fa613c1-f6b2-4e0d-8837-a474117fa68f-openstack-config\") pod \"openstackclient\" (UID: \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\") " pod="openstack/openstackclient" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.901116 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8fa613c1-f6b2-4e0d-8837-a474117fa68f-openstack-config\") pod \"openstackclient\" (UID: \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\") " pod="openstack/openstackclient" Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.906503 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa613c1-f6b2-4e0d-8837-a474117fa68f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\") " pod="openstack/openstackclient" Feb 19 18:50:15 crc kubenswrapper[4813]: E0219 18:50:15.909206 4813 projected.go:194] Error preparing data for projected volume kube-api-access-k5kk7 for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (8fa613c1-f6b2-4e0d-8837-a474117fa68f) does not match the UID in record. The object might have been deleted and then recreated Feb 19 18:50:15 crc kubenswrapper[4813]: E0219 18:50:15.909550 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8fa613c1-f6b2-4e0d-8837-a474117fa68f-kube-api-access-k5kk7 podName:8fa613c1-f6b2-4e0d-8837-a474117fa68f nodeName:}" failed. 
No retries permitted until 2026-02-19 18:50:16.409531866 +0000 UTC m=+1235.634972407 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k5kk7" (UniqueName: "kubernetes.io/projected/8fa613c1-f6b2-4e0d-8837-a474117fa68f-kube-api-access-k5kk7") pod "openstackclient" (UID: "8fa613c1-f6b2-4e0d-8837-a474117fa68f") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (8fa613c1-f6b2-4e0d-8837-a474117fa68f) does not match the UID in record. The object might have been deleted and then recreated Feb 19 18:50:15 crc kubenswrapper[4813]: I0219 18:50:15.911846 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8fa613c1-f6b2-4e0d-8837-a474117fa68f-openstack-config-secret\") pod \"openstackclient\" (UID: \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\") " pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.001127 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/786c324f-42b0-4099-adf7-3926fae87308-openstack-config-secret\") pod \"openstackclient\" (UID: \"786c324f-42b0-4099-adf7-3926fae87308\") " pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.001177 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786c324f-42b0-4099-adf7-3926fae87308-combined-ca-bundle\") pod \"openstackclient\" (UID: \"786c324f-42b0-4099-adf7-3926fae87308\") " pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.001303 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/786c324f-42b0-4099-adf7-3926fae87308-openstack-config\") pod \"openstackclient\" (UID: \"786c324f-42b0-4099-adf7-3926fae87308\") " pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.001367 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnhrq\" (UniqueName: \"kubernetes.io/projected/786c324f-42b0-4099-adf7-3926fae87308-kube-api-access-qnhrq\") pod \"openstackclient\" (UID: \"786c324f-42b0-4099-adf7-3926fae87308\") " pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.102484 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/786c324f-42b0-4099-adf7-3926fae87308-openstack-config\") pod \"openstackclient\" (UID: \"786c324f-42b0-4099-adf7-3926fae87308\") " pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.102538 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnhrq\" (UniqueName: \"kubernetes.io/projected/786c324f-42b0-4099-adf7-3926fae87308-kube-api-access-qnhrq\") pod \"openstackclient\" (UID: \"786c324f-42b0-4099-adf7-3926fae87308\") " pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.102657 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/786c324f-42b0-4099-adf7-3926fae87308-openstack-config-secret\") pod \"openstackclient\" (UID: \"786c324f-42b0-4099-adf7-3926fae87308\") " pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.102686 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786c324f-42b0-4099-adf7-3926fae87308-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"786c324f-42b0-4099-adf7-3926fae87308\") " pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.103938 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/786c324f-42b0-4099-adf7-3926fae87308-openstack-config\") pod \"openstackclient\" (UID: \"786c324f-42b0-4099-adf7-3926fae87308\") " pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.107105 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786c324f-42b0-4099-adf7-3926fae87308-combined-ca-bundle\") pod \"openstackclient\" (UID: \"786c324f-42b0-4099-adf7-3926fae87308\") " pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.110099 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/786c324f-42b0-4099-adf7-3926fae87308-openstack-config-secret\") pod \"openstackclient\" (UID: \"786c324f-42b0-4099-adf7-3926fae87308\") " pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.126384 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnhrq\" (UniqueName: \"kubernetes.io/projected/786c324f-42b0-4099-adf7-3926fae87308-kube-api-access-qnhrq\") pod \"openstackclient\" (UID: \"786c324f-42b0-4099-adf7-3926fae87308\") " pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.159965 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.203194 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04885075-4d84-445b-b7c8-b6afaeb71600","Type":"ContainerStarted","Data":"a75fc405c7da94e09b4fc05ed5bb23e7a20d02a1e8bcaeab522029453f4a2393"} Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.205308 4813 generic.go:334] "Generic (PLEG): container finished" podID="9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" containerID="b5d1848c5c948f5ecdfed0aff2fe8bdb00c86151a565fd0ef767a3e18ea26c6c" exitCode=0 Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.205366 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.205371 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.205385 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" event={"ID":"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1","Type":"ContainerDied","Data":"b5d1848c5c948f5ecdfed0aff2fe8bdb00c86151a565fd0ef767a3e18ea26c6c"} Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.205435 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f96ddcd7d-r5nt7" event={"ID":"9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1","Type":"ContainerDied","Data":"2478d0bc8e710678cb7039ad2351f4e883728f08bab2242028a980e5182c7c64"} Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.205454 4813 scope.go:117] "RemoveContainer" containerID="b5d1848c5c948f5ecdfed0aff2fe8bdb00c86151a565fd0ef767a3e18ea26c6c" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.227673 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.236052 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8fa613c1-f6b2-4e0d-8837-a474117fa68f" podUID="786c324f-42b0-4099-adf7-3926fae87308" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.237381 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.237358555 podStartE2EDuration="3.237358555s" podCreationTimestamp="2026-02-19 18:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:50:16.230131662 +0000 UTC m=+1235.455572203" watchObservedRunningTime="2026-02-19 18:50:16.237358555 +0000 UTC m=+1235.462799096" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.256892 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6f96ddcd7d-r5nt7"] Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.257431 4813 scope.go:117] "RemoveContainer" containerID="787d274cf138fe60285ad37afb73be3c4f0032f5ecc749e03f67423dba5450bb" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.269984 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6f96ddcd7d-r5nt7"] Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.289235 4813 scope.go:117] "RemoveContainer" containerID="b5d1848c5c948f5ecdfed0aff2fe8bdb00c86151a565fd0ef767a3e18ea26c6c" Feb 19 18:50:16 crc kubenswrapper[4813]: E0219 18:50:16.290092 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d1848c5c948f5ecdfed0aff2fe8bdb00c86151a565fd0ef767a3e18ea26c6c\": container with ID starting with b5d1848c5c948f5ecdfed0aff2fe8bdb00c86151a565fd0ef767a3e18ea26c6c not found: ID does not exist" 
containerID="b5d1848c5c948f5ecdfed0aff2fe8bdb00c86151a565fd0ef767a3e18ea26c6c" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.290135 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d1848c5c948f5ecdfed0aff2fe8bdb00c86151a565fd0ef767a3e18ea26c6c"} err="failed to get container status \"b5d1848c5c948f5ecdfed0aff2fe8bdb00c86151a565fd0ef767a3e18ea26c6c\": rpc error: code = NotFound desc = could not find container \"b5d1848c5c948f5ecdfed0aff2fe8bdb00c86151a565fd0ef767a3e18ea26c6c\": container with ID starting with b5d1848c5c948f5ecdfed0aff2fe8bdb00c86151a565fd0ef767a3e18ea26c6c not found: ID does not exist" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.290162 4813 scope.go:117] "RemoveContainer" containerID="787d274cf138fe60285ad37afb73be3c4f0032f5ecc749e03f67423dba5450bb" Feb 19 18:50:16 crc kubenswrapper[4813]: E0219 18:50:16.290570 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"787d274cf138fe60285ad37afb73be3c4f0032f5ecc749e03f67423dba5450bb\": container with ID starting with 787d274cf138fe60285ad37afb73be3c4f0032f5ecc749e03f67423dba5450bb not found: ID does not exist" containerID="787d274cf138fe60285ad37afb73be3c4f0032f5ecc749e03f67423dba5450bb" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.290603 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"787d274cf138fe60285ad37afb73be3c4f0032f5ecc749e03f67423dba5450bb"} err="failed to get container status \"787d274cf138fe60285ad37afb73be3c4f0032f5ecc749e03f67423dba5450bb\": rpc error: code = NotFound desc = could not find container \"787d274cf138fe60285ad37afb73be3c4f0032f5ecc749e03f67423dba5450bb\": container with ID starting with 787d274cf138fe60285ad37afb73be3c4f0032f5ecc749e03f67423dba5450bb not found: ID does not exist" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.409807 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8fa613c1-f6b2-4e0d-8837-a474117fa68f-openstack-config\") pod \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\" (UID: \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\") " Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.410394 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8fa613c1-f6b2-4e0d-8837-a474117fa68f-openstack-config-secret\") pod \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\" (UID: \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\") " Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.410445 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa613c1-f6b2-4e0d-8837-a474117fa68f-combined-ca-bundle\") pod \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\" (UID: \"8fa613c1-f6b2-4e0d-8837-a474117fa68f\") " Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.410151 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fa613c1-f6b2-4e0d-8837-a474117fa68f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8fa613c1-f6b2-4e0d-8837-a474117fa68f" (UID: "8fa613c1-f6b2-4e0d-8837-a474117fa68f"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.411367 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5kk7\" (UniqueName: \"kubernetes.io/projected/8fa613c1-f6b2-4e0d-8837-a474117fa68f-kube-api-access-k5kk7\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.411386 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8fa613c1-f6b2-4e0d-8837-a474117fa68f-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.417041 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa613c1-f6b2-4e0d-8837-a474117fa68f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fa613c1-f6b2-4e0d-8837-a474117fa68f" (UID: "8fa613c1-f6b2-4e0d-8837-a474117fa68f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.418384 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa613c1-f6b2-4e0d-8837-a474117fa68f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8fa613c1-f6b2-4e0d-8837-a474117fa68f" (UID: "8fa613c1-f6b2-4e0d-8837-a474117fa68f"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.513493 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8fa613c1-f6b2-4e0d-8837-a474117fa68f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.513524 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa613c1-f6b2-4e0d-8837-a474117fa68f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:16 crc kubenswrapper[4813]: I0219 18:50:16.696829 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 18:50:16 crc kubenswrapper[4813]: W0219 18:50:16.697818 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod786c324f_42b0_4099_adf7_3926fae87308.slice/crio-8457e16e8abbae1ff90361eb854cfce7b644cdcec47c651663b6a41bcf35641e WatchSource:0}: Error finding container 8457e16e8abbae1ff90361eb854cfce7b644cdcec47c651663b6a41bcf35641e: Status 404 returned error can't find the container with id 8457e16e8abbae1ff90361eb854cfce7b644cdcec47c651663b6a41bcf35641e Feb 19 18:50:17 crc kubenswrapper[4813]: I0219 18:50:17.217279 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 18:50:17 crc kubenswrapper[4813]: I0219 18:50:17.220388 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"786c324f-42b0-4099-adf7-3926fae87308","Type":"ContainerStarted","Data":"8457e16e8abbae1ff90361eb854cfce7b644cdcec47c651663b6a41bcf35641e"} Feb 19 18:50:17 crc kubenswrapper[4813]: I0219 18:50:17.234111 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8fa613c1-f6b2-4e0d-8837-a474117fa68f" podUID="786c324f-42b0-4099-adf7-3926fae87308" Feb 19 18:50:17 crc kubenswrapper[4813]: I0219 18:50:17.486084 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa613c1-f6b2-4e0d-8837-a474117fa68f" path="/var/lib/kubelet/pods/8fa613c1-f6b2-4e0d-8837-a474117fa68f/volumes" Feb 19 18:50:17 crc kubenswrapper[4813]: I0219 18:50:17.486590 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1" path="/var/lib/kubelet/pods/9a9bf9a0-ffbe-47df-9e7a-7d33afc001e1/volumes" Feb 19 18:50:17 crc kubenswrapper[4813]: I0219 18:50:17.620656 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 18:50:18 crc kubenswrapper[4813]: I0219 18:50:18.531278 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 18:50:19 crc kubenswrapper[4813]: I0219 18:50:19.971238 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.074526 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-combined-ca-bundle\") pod \"71bd7206-d9dd-40e7-a991-c5cf107989f4\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.074595 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-ovndb-tls-certs\") pod \"71bd7206-d9dd-40e7-a991-c5cf107989f4\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.074615 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-config\") pod \"71bd7206-d9dd-40e7-a991-c5cf107989f4\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.074671 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-httpd-config\") pod \"71bd7206-d9dd-40e7-a991-c5cf107989f4\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.075035 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg6b4\" (UniqueName: \"kubernetes.io/projected/71bd7206-d9dd-40e7-a991-c5cf107989f4-kube-api-access-xg6b4\") pod \"71bd7206-d9dd-40e7-a991-c5cf107989f4\" (UID: \"71bd7206-d9dd-40e7-a991-c5cf107989f4\") " Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.080928 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/71bd7206-d9dd-40e7-a991-c5cf107989f4-kube-api-access-xg6b4" (OuterVolumeSpecName: "kube-api-access-xg6b4") pod "71bd7206-d9dd-40e7-a991-c5cf107989f4" (UID: "71bd7206-d9dd-40e7-a991-c5cf107989f4"). InnerVolumeSpecName "kube-api-access-xg6b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.082110 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "71bd7206-d9dd-40e7-a991-c5cf107989f4" (UID: "71bd7206-d9dd-40e7-a991-c5cf107989f4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.136068 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-config" (OuterVolumeSpecName: "config") pod "71bd7206-d9dd-40e7-a991-c5cf107989f4" (UID: "71bd7206-d9dd-40e7-a991-c5cf107989f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.156112 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71bd7206-d9dd-40e7-a991-c5cf107989f4" (UID: "71bd7206-d9dd-40e7-a991-c5cf107989f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.162153 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "71bd7206-d9dd-40e7-a991-c5cf107989f4" (UID: "71bd7206-d9dd-40e7-a991-c5cf107989f4"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.177048 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg6b4\" (UniqueName: \"kubernetes.io/projected/71bd7206-d9dd-40e7-a991-c5cf107989f4-kube-api-access-xg6b4\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.177076 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.177086 4813 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.177095 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.177124 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/71bd7206-d9dd-40e7-a991-c5cf107989f4-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.241652 4813 generic.go:334] "Generic (PLEG): container finished" podID="71bd7206-d9dd-40e7-a991-c5cf107989f4" containerID="af026d010d31eb078c48b991d55c4e3b17bec908c6df634652a26ae1154003aa" exitCode=0 Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.241694 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66787bd68b-jd5l8" event={"ID":"71bd7206-d9dd-40e7-a991-c5cf107989f4","Type":"ContainerDied","Data":"af026d010d31eb078c48b991d55c4e3b17bec908c6df634652a26ae1154003aa"} Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 
18:50:20.241719 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66787bd68b-jd5l8" event={"ID":"71bd7206-d9dd-40e7-a991-c5cf107989f4","Type":"ContainerDied","Data":"fe6454e419b957e6a5f61830c6918674df7165b2e4a0491c5c30d46e09512c25"} Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.241736 4813 scope.go:117] "RemoveContainer" containerID="be42f7d08861220021c70e71acbf687aef181911a7cf12a8a39a0b4bb04847e0" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.241845 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66787bd68b-jd5l8" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.268316 4813 scope.go:117] "RemoveContainer" containerID="af026d010d31eb078c48b991d55c4e3b17bec908c6df634652a26ae1154003aa" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.278856 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66787bd68b-jd5l8"] Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.289801 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-66787bd68b-jd5l8"] Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.294865 4813 scope.go:117] "RemoveContainer" containerID="be42f7d08861220021c70e71acbf687aef181911a7cf12a8a39a0b4bb04847e0" Feb 19 18:50:20 crc kubenswrapper[4813]: E0219 18:50:20.295512 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be42f7d08861220021c70e71acbf687aef181911a7cf12a8a39a0b4bb04847e0\": container with ID starting with be42f7d08861220021c70e71acbf687aef181911a7cf12a8a39a0b4bb04847e0 not found: ID does not exist" containerID="be42f7d08861220021c70e71acbf687aef181911a7cf12a8a39a0b4bb04847e0" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.295555 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be42f7d08861220021c70e71acbf687aef181911a7cf12a8a39a0b4bb04847e0"} 
err="failed to get container status \"be42f7d08861220021c70e71acbf687aef181911a7cf12a8a39a0b4bb04847e0\": rpc error: code = NotFound desc = could not find container \"be42f7d08861220021c70e71acbf687aef181911a7cf12a8a39a0b4bb04847e0\": container with ID starting with be42f7d08861220021c70e71acbf687aef181911a7cf12a8a39a0b4bb04847e0 not found: ID does not exist" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.295583 4813 scope.go:117] "RemoveContainer" containerID="af026d010d31eb078c48b991d55c4e3b17bec908c6df634652a26ae1154003aa" Feb 19 18:50:20 crc kubenswrapper[4813]: E0219 18:50:20.296312 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af026d010d31eb078c48b991d55c4e3b17bec908c6df634652a26ae1154003aa\": container with ID starting with af026d010d31eb078c48b991d55c4e3b17bec908c6df634652a26ae1154003aa not found: ID does not exist" containerID="af026d010d31eb078c48b991d55c4e3b17bec908c6df634652a26ae1154003aa" Feb 19 18:50:20 crc kubenswrapper[4813]: I0219 18:50:20.296338 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af026d010d31eb078c48b991d55c4e3b17bec908c6df634652a26ae1154003aa"} err="failed to get container status \"af026d010d31eb078c48b991d55c4e3b17bec908c6df634652a26ae1154003aa\": rpc error: code = NotFound desc = could not find container \"af026d010d31eb078c48b991d55c4e3b17bec908c6df634652a26ae1154003aa\": container with ID starting with af026d010d31eb078c48b991d55c4e3b17bec908c6df634652a26ae1154003aa not found: ID does not exist" Feb 19 18:50:21 crc kubenswrapper[4813]: I0219 18:50:21.533051 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71bd7206-d9dd-40e7-a991-c5cf107989f4" path="/var/lib/kubelet/pods/71bd7206-d9dd-40e7-a991-c5cf107989f4/volumes" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.242565 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] 
Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.243051 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="ceilometer-central-agent" containerID="cri-o://3981e8ba101dc0e733451758abeb564cdd34a8ee4928fa93df480f496f607bfe" gracePeriod=30 Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.243313 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="sg-core" containerID="cri-o://089cb6cd95b4c514d410b3a5358746ddec405997e5cbf7f302c0b8c2303104d3" gracePeriod=30 Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.243330 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="ceilometer-notification-agent" containerID="cri-o://43d565c541a41da70248ed67715843bd5843ce149b092182d58128cdc186d943" gracePeriod=30 Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.243662 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="proxy-httpd" containerID="cri-o://35c718accd8ada1711268971319cc2a6093a2def08dcaa7dbd09b1262dd23d6e" gracePeriod=30 Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.261490 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.155:3000/\": EOF" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.306821 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5bfc47d69f-qrwdk"] Feb 19 18:50:22 crc kubenswrapper[4813]: E0219 18:50:22.307218 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="71bd7206-d9dd-40e7-a991-c5cf107989f4" containerName="neutron-api" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.307233 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="71bd7206-d9dd-40e7-a991-c5cf107989f4" containerName="neutron-api" Feb 19 18:50:22 crc kubenswrapper[4813]: E0219 18:50:22.307251 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71bd7206-d9dd-40e7-a991-c5cf107989f4" containerName="neutron-httpd" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.307257 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="71bd7206-d9dd-40e7-a991-c5cf107989f4" containerName="neutron-httpd" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.307430 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="71bd7206-d9dd-40e7-a991-c5cf107989f4" containerName="neutron-httpd" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.307440 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="71bd7206-d9dd-40e7-a991-c5cf107989f4" containerName="neutron-api" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.308309 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.311254 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.311541 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.311549 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.335013 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5bfc47d69f-qrwdk"] Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.428436 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-config-data\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.428549 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-combined-ca-bundle\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.428697 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9lnq\" (UniqueName: \"kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-kube-api-access-n9lnq\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc 
kubenswrapper[4813]: I0219 18:50:22.428787 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c239fd72-88d6-4394-bf24-be4fb0b3e579-log-httpd\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.428856 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-public-tls-certs\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.428933 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-internal-tls-certs\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.428997 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c239fd72-88d6-4394-bf24-be4fb0b3e579-run-httpd\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.429060 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-etc-swift\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 
19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.530230 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9lnq\" (UniqueName: \"kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-kube-api-access-n9lnq\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.530283 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c239fd72-88d6-4394-bf24-be4fb0b3e579-log-httpd\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.530304 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-public-tls-certs\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.530336 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-internal-tls-certs\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.530352 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c239fd72-88d6-4394-bf24-be4fb0b3e579-run-httpd\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 
18:50:22.530372 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-etc-swift\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.530401 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-config-data\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.530443 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-combined-ca-bundle\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.531324 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c239fd72-88d6-4394-bf24-be4fb0b3e579-run-httpd\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.531366 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c239fd72-88d6-4394-bf24-be4fb0b3e579-log-httpd\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.536070 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-config-data\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.536155 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-internal-tls-certs\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.544505 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-combined-ca-bundle\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.544969 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-public-tls-certs\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.554226 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-etc-swift\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.554622 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9lnq\" (UniqueName: 
\"kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-kube-api-access-n9lnq\") pod \"swift-proxy-5bfc47d69f-qrwdk\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:22 crc kubenswrapper[4813]: I0219 18:50:22.625900 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:23 crc kubenswrapper[4813]: I0219 18:50:23.275508 4813 generic.go:334] "Generic (PLEG): container finished" podID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerID="35c718accd8ada1711268971319cc2a6093a2def08dcaa7dbd09b1262dd23d6e" exitCode=0 Feb 19 18:50:23 crc kubenswrapper[4813]: I0219 18:50:23.275790 4813 generic.go:334] "Generic (PLEG): container finished" podID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerID="089cb6cd95b4c514d410b3a5358746ddec405997e5cbf7f302c0b8c2303104d3" exitCode=2 Feb 19 18:50:23 crc kubenswrapper[4813]: I0219 18:50:23.275799 4813 generic.go:334] "Generic (PLEG): container finished" podID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerID="3981e8ba101dc0e733451758abeb564cdd34a8ee4928fa93df480f496f607bfe" exitCode=0 Feb 19 18:50:23 crc kubenswrapper[4813]: I0219 18:50:23.275595 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"005d604f-ced9-4b2e-aae7-1cac5398b880","Type":"ContainerDied","Data":"35c718accd8ada1711268971319cc2a6093a2def08dcaa7dbd09b1262dd23d6e"} Feb 19 18:50:23 crc kubenswrapper[4813]: I0219 18:50:23.275834 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"005d604f-ced9-4b2e-aae7-1cac5398b880","Type":"ContainerDied","Data":"089cb6cd95b4c514d410b3a5358746ddec405997e5cbf7f302c0b8c2303104d3"} Feb 19 18:50:23 crc kubenswrapper[4813]: I0219 18:50:23.275851 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"005d604f-ced9-4b2e-aae7-1cac5398b880","Type":"ContainerDied","Data":"3981e8ba101dc0e733451758abeb564cdd34a8ee4928fa93df480f496f607bfe"} Feb 19 18:50:23 crc kubenswrapper[4813]: I0219 18:50:23.767075 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 18:50:24 crc kubenswrapper[4813]: I0219 18:50:24.290599 4813 generic.go:334] "Generic (PLEG): container finished" podID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerID="43d565c541a41da70248ed67715843bd5843ce149b092182d58128cdc186d943" exitCode=0 Feb 19 18:50:24 crc kubenswrapper[4813]: I0219 18:50:24.290674 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"005d604f-ced9-4b2e-aae7-1cac5398b880","Type":"ContainerDied","Data":"43d565c541a41da70248ed67715843bd5843ce149b092182d58128cdc186d943"} Feb 19 18:50:25 crc kubenswrapper[4813]: I0219 18:50:25.103142 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.155:3000/\": dial tcp 10.217.0.155:3000: connect: connection refused" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.422997 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-d44kt"] Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.424868 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-d44kt" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.431319 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-d44kt"] Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.518260 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc7g4\" (UniqueName: \"kubernetes.io/projected/487f7774-103e-44b9-a773-e34f77657d2b-kube-api-access-cc7g4\") pod \"nova-api-db-create-d44kt\" (UID: \"487f7774-103e-44b9-a773-e34f77657d2b\") " pod="openstack/nova-api-db-create-d44kt" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.518338 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/487f7774-103e-44b9-a773-e34f77657d2b-operator-scripts\") pod \"nova-api-db-create-d44kt\" (UID: \"487f7774-103e-44b9-a773-e34f77657d2b\") " pod="openstack/nova-api-db-create-d44kt" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.536843 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-rnnlc"] Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.538679 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rnnlc" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.561789 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-345c-account-create-update-49nsj"] Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.562896 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-345c-account-create-update-49nsj" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.565182 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.570552 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rnnlc"] Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.579656 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-345c-account-create-update-49nsj"] Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.619614 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/195ab5e4-12b0-4c82-bc80-b109afd5898f-operator-scripts\") pod \"nova-cell0-db-create-rnnlc\" (UID: \"195ab5e4-12b0-4c82-bc80-b109afd5898f\") " pod="openstack/nova-cell0-db-create-rnnlc" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.619711 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03736951-c024-46d1-90b1-dea0d3f528aa-operator-scripts\") pod \"nova-api-345c-account-create-update-49nsj\" (UID: \"03736951-c024-46d1-90b1-dea0d3f528aa\") " pod="openstack/nova-api-345c-account-create-update-49nsj" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.619731 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bntql\" (UniqueName: \"kubernetes.io/projected/195ab5e4-12b0-4c82-bc80-b109afd5898f-kube-api-access-bntql\") pod \"nova-cell0-db-create-rnnlc\" (UID: \"195ab5e4-12b0-4c82-bc80-b109afd5898f\") " pod="openstack/nova-cell0-db-create-rnnlc" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.619768 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cc7g4\" (UniqueName: \"kubernetes.io/projected/487f7774-103e-44b9-a773-e34f77657d2b-kube-api-access-cc7g4\") pod \"nova-api-db-create-d44kt\" (UID: \"487f7774-103e-44b9-a773-e34f77657d2b\") " pod="openstack/nova-api-db-create-d44kt" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.619808 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/487f7774-103e-44b9-a773-e34f77657d2b-operator-scripts\") pod \"nova-api-db-create-d44kt\" (UID: \"487f7774-103e-44b9-a773-e34f77657d2b\") " pod="openstack/nova-api-db-create-d44kt" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.619873 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzhv9\" (UniqueName: \"kubernetes.io/projected/03736951-c024-46d1-90b1-dea0d3f528aa-kube-api-access-bzhv9\") pod \"nova-api-345c-account-create-update-49nsj\" (UID: \"03736951-c024-46d1-90b1-dea0d3f528aa\") " pod="openstack/nova-api-345c-account-create-update-49nsj" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.620527 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/487f7774-103e-44b9-a773-e34f77657d2b-operator-scripts\") pod \"nova-api-db-create-d44kt\" (UID: \"487f7774-103e-44b9-a773-e34f77657d2b\") " pod="openstack/nova-api-db-create-d44kt" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.641506 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc7g4\" (UniqueName: \"kubernetes.io/projected/487f7774-103e-44b9-a773-e34f77657d2b-kube-api-access-cc7g4\") pod \"nova-api-db-create-d44kt\" (UID: \"487f7774-103e-44b9-a773-e34f77657d2b\") " pod="openstack/nova-api-db-create-d44kt" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.721644 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03736951-c024-46d1-90b1-dea0d3f528aa-operator-scripts\") pod \"nova-api-345c-account-create-update-49nsj\" (UID: \"03736951-c024-46d1-90b1-dea0d3f528aa\") " pod="openstack/nova-api-345c-account-create-update-49nsj" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.721691 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bntql\" (UniqueName: \"kubernetes.io/projected/195ab5e4-12b0-4c82-bc80-b109afd5898f-kube-api-access-bntql\") pod \"nova-cell0-db-create-rnnlc\" (UID: \"195ab5e4-12b0-4c82-bc80-b109afd5898f\") " pod="openstack/nova-cell0-db-create-rnnlc" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.721877 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzhv9\" (UniqueName: \"kubernetes.io/projected/03736951-c024-46d1-90b1-dea0d3f528aa-kube-api-access-bzhv9\") pod \"nova-api-345c-account-create-update-49nsj\" (UID: \"03736951-c024-46d1-90b1-dea0d3f528aa\") " pod="openstack/nova-api-345c-account-create-update-49nsj" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.722057 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/195ab5e4-12b0-4c82-bc80-b109afd5898f-operator-scripts\") pod \"nova-cell0-db-create-rnnlc\" (UID: \"195ab5e4-12b0-4c82-bc80-b109afd5898f\") " pod="openstack/nova-cell0-db-create-rnnlc" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.722863 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/195ab5e4-12b0-4c82-bc80-b109afd5898f-operator-scripts\") pod \"nova-cell0-db-create-rnnlc\" (UID: \"195ab5e4-12b0-4c82-bc80-b109afd5898f\") " pod="openstack/nova-cell0-db-create-rnnlc" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.723327 4813 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03736951-c024-46d1-90b1-dea0d3f528aa-operator-scripts\") pod \"nova-api-345c-account-create-update-49nsj\" (UID: \"03736951-c024-46d1-90b1-dea0d3f528aa\") " pod="openstack/nova-api-345c-account-create-update-49nsj" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.733338 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-dxwdc"] Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.734498 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dxwdc" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.741259 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-c594-account-create-update-5gz96"] Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.742397 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c594-account-create-update-5gz96" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.743925 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-d44kt" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.744330 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.746286 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzhv9\" (UniqueName: \"kubernetes.io/projected/03736951-c024-46d1-90b1-dea0d3f528aa-kube-api-access-bzhv9\") pod \"nova-api-345c-account-create-update-49nsj\" (UID: \"03736951-c024-46d1-90b1-dea0d3f528aa\") " pod="openstack/nova-api-345c-account-create-update-49nsj" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.751312 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bntql\" (UniqueName: \"kubernetes.io/projected/195ab5e4-12b0-4c82-bc80-b109afd5898f-kube-api-access-bntql\") pod \"nova-cell0-db-create-rnnlc\" (UID: \"195ab5e4-12b0-4c82-bc80-b109afd5898f\") " pod="openstack/nova-cell0-db-create-rnnlc" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.753770 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dxwdc"] Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.767807 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c594-account-create-update-5gz96"] Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.824800 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e9d07f-0344-41f5-817b-c03c6516ae85-operator-scripts\") pod \"nova-cell1-db-create-dxwdc\" (UID: \"24e9d07f-0344-41f5-817b-c03c6516ae85\") " pod="openstack/nova-cell1-db-create-dxwdc" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.824913 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6gb4\" (UniqueName: 
\"kubernetes.io/projected/b43b69b2-f014-47e0-a8a7-acb5445dff51-kube-api-access-d6gb4\") pod \"nova-cell0-c594-account-create-update-5gz96\" (UID: \"b43b69b2-f014-47e0-a8a7-acb5445dff51\") " pod="openstack/nova-cell0-c594-account-create-update-5gz96" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.825034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b43b69b2-f014-47e0-a8a7-acb5445dff51-operator-scripts\") pod \"nova-cell0-c594-account-create-update-5gz96\" (UID: \"b43b69b2-f014-47e0-a8a7-acb5445dff51\") " pod="openstack/nova-cell0-c594-account-create-update-5gz96" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.825067 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9smrs\" (UniqueName: \"kubernetes.io/projected/24e9d07f-0344-41f5-817b-c03c6516ae85-kube-api-access-9smrs\") pod \"nova-cell1-db-create-dxwdc\" (UID: \"24e9d07f-0344-41f5-817b-c03c6516ae85\") " pod="openstack/nova-cell1-db-create-dxwdc" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.863532 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rnnlc" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.881174 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-345c-account-create-update-49nsj" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.928056 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e9d07f-0344-41f5-817b-c03c6516ae85-operator-scripts\") pod \"nova-cell1-db-create-dxwdc\" (UID: \"24e9d07f-0344-41f5-817b-c03c6516ae85\") " pod="openstack/nova-cell1-db-create-dxwdc" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.928152 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6gb4\" (UniqueName: \"kubernetes.io/projected/b43b69b2-f014-47e0-a8a7-acb5445dff51-kube-api-access-d6gb4\") pod \"nova-cell0-c594-account-create-update-5gz96\" (UID: \"b43b69b2-f014-47e0-a8a7-acb5445dff51\") " pod="openstack/nova-cell0-c594-account-create-update-5gz96" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.928205 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b43b69b2-f014-47e0-a8a7-acb5445dff51-operator-scripts\") pod \"nova-cell0-c594-account-create-update-5gz96\" (UID: \"b43b69b2-f014-47e0-a8a7-acb5445dff51\") " pod="openstack/nova-cell0-c594-account-create-update-5gz96" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.928230 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9smrs\" (UniqueName: \"kubernetes.io/projected/24e9d07f-0344-41f5-817b-c03c6516ae85-kube-api-access-9smrs\") pod \"nova-cell1-db-create-dxwdc\" (UID: \"24e9d07f-0344-41f5-817b-c03c6516ae85\") " pod="openstack/nova-cell1-db-create-dxwdc" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.929638 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e9d07f-0344-41f5-817b-c03c6516ae85-operator-scripts\") pod 
\"nova-cell1-db-create-dxwdc\" (UID: \"24e9d07f-0344-41f5-817b-c03c6516ae85\") " pod="openstack/nova-cell1-db-create-dxwdc" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.930247 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b43b69b2-f014-47e0-a8a7-acb5445dff51-operator-scripts\") pod \"nova-cell0-c594-account-create-update-5gz96\" (UID: \"b43b69b2-f014-47e0-a8a7-acb5445dff51\") " pod="openstack/nova-cell0-c594-account-create-update-5gz96" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.944108 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4d36-account-create-update-ksf9n"] Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.945109 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4d36-account-create-update-ksf9n" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.946825 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.961645 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4d36-account-create-update-ksf9n"] Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.970042 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6gb4\" (UniqueName: \"kubernetes.io/projected/b43b69b2-f014-47e0-a8a7-acb5445dff51-kube-api-access-d6gb4\") pod \"nova-cell0-c594-account-create-update-5gz96\" (UID: \"b43b69b2-f014-47e0-a8a7-acb5445dff51\") " pod="openstack/nova-cell0-c594-account-create-update-5gz96" Feb 19 18:50:27 crc kubenswrapper[4813]: I0219 18:50:27.975422 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9smrs\" (UniqueName: \"kubernetes.io/projected/24e9d07f-0344-41f5-817b-c03c6516ae85-kube-api-access-9smrs\") pod \"nova-cell1-db-create-dxwdc\" (UID: 
\"24e9d07f-0344-41f5-817b-c03c6516ae85\") " pod="openstack/nova-cell1-db-create-dxwdc" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.030120 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be7c6283-3a04-4b6b-9419-82b4e91909bb-operator-scripts\") pod \"nova-cell1-4d36-account-create-update-ksf9n\" (UID: \"be7c6283-3a04-4b6b-9419-82b4e91909bb\") " pod="openstack/nova-cell1-4d36-account-create-update-ksf9n" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.030444 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zb5z\" (UniqueName: \"kubernetes.io/projected/be7c6283-3a04-4b6b-9419-82b4e91909bb-kube-api-access-5zb5z\") pod \"nova-cell1-4d36-account-create-update-ksf9n\" (UID: \"be7c6283-3a04-4b6b-9419-82b4e91909bb\") " pod="openstack/nova-cell1-4d36-account-create-update-ksf9n" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.132637 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zb5z\" (UniqueName: \"kubernetes.io/projected/be7c6283-3a04-4b6b-9419-82b4e91909bb-kube-api-access-5zb5z\") pod \"nova-cell1-4d36-account-create-update-ksf9n\" (UID: \"be7c6283-3a04-4b6b-9419-82b4e91909bb\") " pod="openstack/nova-cell1-4d36-account-create-update-ksf9n" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.132693 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be7c6283-3a04-4b6b-9419-82b4e91909bb-operator-scripts\") pod \"nova-cell1-4d36-account-create-update-ksf9n\" (UID: \"be7c6283-3a04-4b6b-9419-82b4e91909bb\") " pod="openstack/nova-cell1-4d36-account-create-update-ksf9n" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.141263 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/be7c6283-3a04-4b6b-9419-82b4e91909bb-operator-scripts\") pod \"nova-cell1-4d36-account-create-update-ksf9n\" (UID: \"be7c6283-3a04-4b6b-9419-82b4e91909bb\") " pod="openstack/nova-cell1-4d36-account-create-update-ksf9n" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.157336 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dxwdc" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.172855 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zb5z\" (UniqueName: \"kubernetes.io/projected/be7c6283-3a04-4b6b-9419-82b4e91909bb-kube-api-access-5zb5z\") pod \"nova-cell1-4d36-account-create-update-ksf9n\" (UID: \"be7c6283-3a04-4b6b-9419-82b4e91909bb\") " pod="openstack/nova-cell1-4d36-account-create-update-ksf9n" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.181335 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c594-account-create-update-5gz96" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.326803 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4d36-account-create-update-ksf9n" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.339658 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"786c324f-42b0-4099-adf7-3926fae87308","Type":"ContainerStarted","Data":"dcb3929d2e5414c8c6857fa34323fe5cbaa913507bb9f15d4791c4c3a55cd8b7"} Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.374193 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.275339843 podStartE2EDuration="13.374169056s" podCreationTimestamp="2026-02-19 18:50:15 +0000 UTC" firstStartedPulling="2026-02-19 18:50:16.69964572 +0000 UTC m=+1235.925086261" lastFinishedPulling="2026-02-19 18:50:27.798474933 +0000 UTC m=+1247.023915474" observedRunningTime="2026-02-19 18:50:28.372652679 +0000 UTC m=+1247.598093220" watchObservedRunningTime="2026-02-19 18:50:28.374169056 +0000 UTC m=+1247.599609617" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.420163 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.440553 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz6dr\" (UniqueName: \"kubernetes.io/projected/005d604f-ced9-4b2e-aae7-1cac5398b880-kube-api-access-pz6dr\") pod \"005d604f-ced9-4b2e-aae7-1cac5398b880\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.440646 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-scripts\") pod \"005d604f-ced9-4b2e-aae7-1cac5398b880\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.440696 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/005d604f-ced9-4b2e-aae7-1cac5398b880-log-httpd\") pod \"005d604f-ced9-4b2e-aae7-1cac5398b880\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.440711 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-sg-core-conf-yaml\") pod \"005d604f-ced9-4b2e-aae7-1cac5398b880\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.440737 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/005d604f-ced9-4b2e-aae7-1cac5398b880-run-httpd\") pod \"005d604f-ced9-4b2e-aae7-1cac5398b880\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.440761 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-config-data\") pod \"005d604f-ced9-4b2e-aae7-1cac5398b880\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.440868 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-combined-ca-bundle\") pod \"005d604f-ced9-4b2e-aae7-1cac5398b880\" (UID: \"005d604f-ced9-4b2e-aae7-1cac5398b880\") " Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.443878 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/005d604f-ced9-4b2e-aae7-1cac5398b880-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "005d604f-ced9-4b2e-aae7-1cac5398b880" (UID: "005d604f-ced9-4b2e-aae7-1cac5398b880"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.451448 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-scripts" (OuterVolumeSpecName: "scripts") pod "005d604f-ced9-4b2e-aae7-1cac5398b880" (UID: "005d604f-ced9-4b2e-aae7-1cac5398b880"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.451721 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/005d604f-ced9-4b2e-aae7-1cac5398b880-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "005d604f-ced9-4b2e-aae7-1cac5398b880" (UID: "005d604f-ced9-4b2e-aae7-1cac5398b880"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.458047 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/005d604f-ced9-4b2e-aae7-1cac5398b880-kube-api-access-pz6dr" (OuterVolumeSpecName: "kube-api-access-pz6dr") pod "005d604f-ced9-4b2e-aae7-1cac5398b880" (UID: "005d604f-ced9-4b2e-aae7-1cac5398b880"). InnerVolumeSpecName "kube-api-access-pz6dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.477079 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "005d604f-ced9-4b2e-aae7-1cac5398b880" (UID: "005d604f-ced9-4b2e-aae7-1cac5398b880"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.545684 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz6dr\" (UniqueName: \"kubernetes.io/projected/005d604f-ced9-4b2e-aae7-1cac5398b880-kube-api-access-pz6dr\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.545736 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.545748 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.545760 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/005d604f-ced9-4b2e-aae7-1cac5398b880-log-httpd\") on 
node \"crc\" DevicePath \"\"" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.545777 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/005d604f-ced9-4b2e-aae7-1cac5398b880-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.571896 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-config-data" (OuterVolumeSpecName: "config-data") pod "005d604f-ced9-4b2e-aae7-1cac5398b880" (UID: "005d604f-ced9-4b2e-aae7-1cac5398b880"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.574107 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "005d604f-ced9-4b2e-aae7-1cac5398b880" (UID: "005d604f-ced9-4b2e-aae7-1cac5398b880"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.609477 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5bfc47d69f-qrwdk"] Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.647045 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.647083 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/005d604f-ced9-4b2e-aae7-1cac5398b880-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.699820 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-345c-account-create-update-49nsj"] Feb 19 18:50:28 crc kubenswrapper[4813]: W0219 18:50:28.710552 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03736951_c024_46d1_90b1_dea0d3f528aa.slice/crio-4aacce4822697354edb5ad9b7286c058f943cad7f2948b837e49c0e19eb30292 WatchSource:0}: Error finding container 4aacce4822697354edb5ad9b7286c058f943cad7f2948b837e49c0e19eb30292: Status 404 returned error can't find the container with id 4aacce4822697354edb5ad9b7286c058f943cad7f2948b837e49c0e19eb30292 Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.749417 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-d44kt"] Feb 19 18:50:28 crc kubenswrapper[4813]: W0219 18:50:28.763927 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod487f7774_103e_44b9_a773_e34f77657d2b.slice/crio-9f7389d968e41fa2f4ddfb29c1b38ef0ee01e52ecfe0f97a3914618c0bd0ec2d WatchSource:0}: Error finding container 
9f7389d968e41fa2f4ddfb29c1b38ef0ee01e52ecfe0f97a3914618c0bd0ec2d: Status 404 returned error can't find the container with id 9f7389d968e41fa2f4ddfb29c1b38ef0ee01e52ecfe0f97a3914618c0bd0ec2d Feb 19 18:50:28 crc kubenswrapper[4813]: I0219 18:50:28.835567 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rnnlc"] Feb 19 18:50:28 crc kubenswrapper[4813]: W0219 18:50:28.859311 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod195ab5e4_12b0_4c82_bc80_b109afd5898f.slice/crio-71432fb636c5bf860607effbe5943fbf000c9dac2f8ee2f03fbef6505156b75a WatchSource:0}: Error finding container 71432fb636c5bf860607effbe5943fbf000c9dac2f8ee2f03fbef6505156b75a: Status 404 returned error can't find the container with id 71432fb636c5bf860607effbe5943fbf000c9dac2f8ee2f03fbef6505156b75a Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.013930 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c594-account-create-update-5gz96"] Feb 19 18:50:29 crc kubenswrapper[4813]: W0219 18:50:29.018892 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24e9d07f_0344_41f5_817b_c03c6516ae85.slice/crio-cec968e5961d02f4734b12214b1879f5c8084114ae4f8c57d307b330e35b7f6e WatchSource:0}: Error finding container cec968e5961d02f4734b12214b1879f5c8084114ae4f8c57d307b330e35b7f6e: Status 404 returned error can't find the container with id cec968e5961d02f4734b12214b1879f5c8084114ae4f8c57d307b330e35b7f6e Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.022830 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dxwdc"] Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.042556 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4d36-account-create-update-ksf9n"] Feb 19 18:50:29 crc kubenswrapper[4813]: 
W0219 18:50:29.062403 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe7c6283_3a04_4b6b_9419_82b4e91909bb.slice/crio-fa13f3801279170ab42dd8b5111bea28d82bb53ecbbfdd93f34d01da39d96861 WatchSource:0}: Error finding container fa13f3801279170ab42dd8b5111bea28d82bb53ecbbfdd93f34d01da39d96861: Status 404 returned error can't find the container with id fa13f3801279170ab42dd8b5111bea28d82bb53ecbbfdd93f34d01da39d96861 Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.350193 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dxwdc" event={"ID":"24e9d07f-0344-41f5-817b-c03c6516ae85","Type":"ContainerStarted","Data":"cec968e5961d02f4734b12214b1879f5c8084114ae4f8c57d307b330e35b7f6e"} Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.354484 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-345c-account-create-update-49nsj" event={"ID":"03736951-c024-46d1-90b1-dea0d3f528aa","Type":"ContainerStarted","Data":"9e12afa0fedf488650c3cf3b2d1dd92ace9aebb22007478a22a377a14b17cb09"} Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.354538 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-345c-account-create-update-49nsj" event={"ID":"03736951-c024-46d1-90b1-dea0d3f528aa","Type":"ContainerStarted","Data":"4aacce4822697354edb5ad9b7286c058f943cad7f2948b837e49c0e19eb30292"} Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.356884 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rnnlc" event={"ID":"195ab5e4-12b0-4c82-bc80-b109afd5898f","Type":"ContainerStarted","Data":"1f96f95505c7053c8d8ab41901548ec1360179712db919575860107692fc7c07"} Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.356943 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rnnlc" 
event={"ID":"195ab5e4-12b0-4c82-bc80-b109afd5898f","Type":"ContainerStarted","Data":"71432fb636c5bf860607effbe5943fbf000c9dac2f8ee2f03fbef6505156b75a"} Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.360696 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" event={"ID":"c239fd72-88d6-4394-bf24-be4fb0b3e579","Type":"ContainerStarted","Data":"452e2a9d879b10f739d3b19cd7e40ef58643848e29fa2a75758bdac304dc3d57"} Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.360746 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" event={"ID":"c239fd72-88d6-4394-bf24-be4fb0b3e579","Type":"ContainerStarted","Data":"0501528d4dbfd8f2a31374824be67e9775b6b6e6c1fc0f15dcb3c1621312e121"} Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.362915 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4d36-account-create-update-ksf9n" event={"ID":"be7c6283-3a04-4b6b-9419-82b4e91909bb","Type":"ContainerStarted","Data":"fa13f3801279170ab42dd8b5111bea28d82bb53ecbbfdd93f34d01da39d96861"} Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.365025 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d44kt" event={"ID":"487f7774-103e-44b9-a773-e34f77657d2b","Type":"ContainerStarted","Data":"87b73f80a6ec835366f4bf5b64ba364e2068a899ccab4e89a99247770dab0426"} Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.365099 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d44kt" event={"ID":"487f7774-103e-44b9-a773-e34f77657d2b","Type":"ContainerStarted","Data":"9f7389d968e41fa2f4ddfb29c1b38ef0ee01e52ecfe0f97a3914618c0bd0ec2d"} Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.367770 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"005d604f-ced9-4b2e-aae7-1cac5398b880","Type":"ContainerDied","Data":"f8edc69fc08d0886f9fa7b1087afe30a20be05307f31b206aae33e1f88977648"} Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.367807 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.367828 4813 scope.go:117] "RemoveContainer" containerID="35c718accd8ada1711268971319cc2a6093a2def08dcaa7dbd09b1262dd23d6e" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.375312 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c594-account-create-update-5gz96" event={"ID":"b43b69b2-f014-47e0-a8a7-acb5445dff51","Type":"ContainerStarted","Data":"b86014540681be4c4fa4c1327a58642cf8d5baceb6108eea1dce480c400a6bc6"} Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.375378 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c594-account-create-update-5gz96" event={"ID":"b43b69b2-f014-47e0-a8a7-acb5445dff51","Type":"ContainerStarted","Data":"5d7d7f85bb7f9cc7f218531587b960baa8112c31c7062f410e939b9de0a86cf1"} Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.378200 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-345c-account-create-update-49nsj" podStartSLOduration=2.3781827460000002 podStartE2EDuration="2.378182746s" podCreationTimestamp="2026-02-19 18:50:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:50:29.369120686 +0000 UTC m=+1248.594561247" watchObservedRunningTime="2026-02-19 18:50:29.378182746 +0000 UTC m=+1248.603623287" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.406074 4813 scope.go:117] "RemoveContainer" containerID="089cb6cd95b4c514d410b3a5358746ddec405997e5cbf7f302c0b8c2303104d3" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.406245 4813 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-rnnlc" podStartSLOduration=2.40622629 podStartE2EDuration="2.40622629s" podCreationTimestamp="2026-02-19 18:50:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:50:29.400354419 +0000 UTC m=+1248.625794970" watchObservedRunningTime="2026-02-19 18:50:29.40622629 +0000 UTC m=+1248.631666831" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.425175 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-d44kt" podStartSLOduration=2.425155754 podStartE2EDuration="2.425155754s" podCreationTimestamp="2026-02-19 18:50:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:50:29.419905382 +0000 UTC m=+1248.645345923" watchObservedRunningTime="2026-02-19 18:50:29.425155754 +0000 UTC m=+1248.650596295" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.435640 4813 scope.go:117] "RemoveContainer" containerID="43d565c541a41da70248ed67715843bd5843ce149b092182d58128cdc186d943" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.448379 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.457478 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.464322 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:50:29 crc kubenswrapper[4813]: E0219 18:50:29.464682 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="ceilometer-central-agent" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.464701 4813 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="ceilometer-central-agent" Feb 19 18:50:29 crc kubenswrapper[4813]: E0219 18:50:29.464716 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="sg-core" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.464723 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="sg-core" Feb 19 18:50:29 crc kubenswrapper[4813]: E0219 18:50:29.464751 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="ceilometer-notification-agent" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.464758 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="ceilometer-notification-agent" Feb 19 18:50:29 crc kubenswrapper[4813]: E0219 18:50:29.464769 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="proxy-httpd" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.464776 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="proxy-httpd" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.464930 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="proxy-httpd" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.464941 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="ceilometer-notification-agent" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.464976 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="ceilometer-central-agent" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.464990 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" containerName="sg-core" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.466423 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.642163 4813 scope.go:117] "RemoveContainer" containerID="3981e8ba101dc0e733451758abeb564cdd34a8ee4928fa93df480f496f607bfe" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.643620 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.643792 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.679371 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="005d604f-ced9-4b2e-aae7-1cac5398b880" path="/var/lib/kubelet/pods/005d604f-ced9-4b2e-aae7-1cac5398b880/volumes" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.680096 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.743454 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afe8635-761b-4dcc-b2b4-ff6963f474d8-run-httpd\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.743635 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-scripts\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.743686 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afe8635-761b-4dcc-b2b4-ff6963f474d8-log-httpd\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.743762 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6jh6\" (UniqueName: \"kubernetes.io/projected/6afe8635-761b-4dcc-b2b4-ff6963f474d8-kube-api-access-j6jh6\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.743817 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.743886 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.743928 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-config-data\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.846213 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6jh6\" (UniqueName: 
\"kubernetes.io/projected/6afe8635-761b-4dcc-b2b4-ff6963f474d8-kube-api-access-j6jh6\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.846303 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.846363 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.846410 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-config-data\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.846456 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afe8635-761b-4dcc-b2b4-ff6963f474d8-run-httpd\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.846637 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-scripts\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 
18:50:29.846739 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afe8635-761b-4dcc-b2b4-ff6963f474d8-log-httpd\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.848487 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afe8635-761b-4dcc-b2b4-ff6963f474d8-run-httpd\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.848846 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afe8635-761b-4dcc-b2b4-ff6963f474d8-log-httpd\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.857422 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.872293 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.880129 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-config-data\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " 
pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.880482 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6jh6\" (UniqueName: \"kubernetes.io/projected/6afe8635-761b-4dcc-b2b4-ff6963f474d8-kube-api-access-j6jh6\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:29 crc kubenswrapper[4813]: I0219 18:50:29.882027 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-scripts\") pod \"ceilometer-0\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " pod="openstack/ceilometer-0" Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.004489 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.387345 4813 generic.go:334] "Generic (PLEG): container finished" podID="be7c6283-3a04-4b6b-9419-82b4e91909bb" containerID="81b54b28e409adeced7c4ba8c4d1018f2f0fe17fb4d5cc0cb31579530c721fd8" exitCode=0 Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.387754 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4d36-account-create-update-ksf9n" event={"ID":"be7c6283-3a04-4b6b-9419-82b4e91909bb","Type":"ContainerDied","Data":"81b54b28e409adeced7c4ba8c4d1018f2f0fe17fb4d5cc0cb31579530c721fd8"} Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.389697 4813 generic.go:334] "Generic (PLEG): container finished" podID="487f7774-103e-44b9-a773-e34f77657d2b" containerID="87b73f80a6ec835366f4bf5b64ba364e2068a899ccab4e89a99247770dab0426" exitCode=0 Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.389735 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d44kt" 
event={"ID":"487f7774-103e-44b9-a773-e34f77657d2b","Type":"ContainerDied","Data":"87b73f80a6ec835366f4bf5b64ba364e2068a899ccab4e89a99247770dab0426"} Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.393087 4813 generic.go:334] "Generic (PLEG): container finished" podID="b43b69b2-f014-47e0-a8a7-acb5445dff51" containerID="b86014540681be4c4fa4c1327a58642cf8d5baceb6108eea1dce480c400a6bc6" exitCode=0 Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.393273 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c594-account-create-update-5gz96" event={"ID":"b43b69b2-f014-47e0-a8a7-acb5445dff51","Type":"ContainerDied","Data":"b86014540681be4c4fa4c1327a58642cf8d5baceb6108eea1dce480c400a6bc6"} Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.395119 4813 generic.go:334] "Generic (PLEG): container finished" podID="24e9d07f-0344-41f5-817b-c03c6516ae85" containerID="d6492c712dfe40b99d0b2b14e6f1346b42c1c9bffe85732430bdd537d8749da8" exitCode=0 Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.395163 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dxwdc" event={"ID":"24e9d07f-0344-41f5-817b-c03c6516ae85","Type":"ContainerDied","Data":"d6492c712dfe40b99d0b2b14e6f1346b42c1c9bffe85732430bdd537d8749da8"} Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.397538 4813 generic.go:334] "Generic (PLEG): container finished" podID="03736951-c024-46d1-90b1-dea0d3f528aa" containerID="9e12afa0fedf488650c3cf3b2d1dd92ace9aebb22007478a22a377a14b17cb09" exitCode=0 Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.397577 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-345c-account-create-update-49nsj" event={"ID":"03736951-c024-46d1-90b1-dea0d3f528aa","Type":"ContainerDied","Data":"9e12afa0fedf488650c3cf3b2d1dd92ace9aebb22007478a22a377a14b17cb09"} Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.399767 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="195ab5e4-12b0-4c82-bc80-b109afd5898f" containerID="1f96f95505c7053c8d8ab41901548ec1360179712db919575860107692fc7c07" exitCode=0 Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.399805 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rnnlc" event={"ID":"195ab5e4-12b0-4c82-bc80-b109afd5898f","Type":"ContainerDied","Data":"1f96f95505c7053c8d8ab41901548ec1360179712db919575860107692fc7c07"} Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.401711 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" event={"ID":"c239fd72-88d6-4394-bf24-be4fb0b3e579","Type":"ContainerStarted","Data":"fa176bf7b54c8db7a012a6c8fd130237f020b6ebfd7bd99836698ae1cab7d252"} Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.402503 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.402567 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.433841 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" podStartSLOduration=8.433826767 podStartE2EDuration="8.433826767s" podCreationTimestamp="2026-02-19 18:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:50:30.433163327 +0000 UTC m=+1249.658603868" watchObservedRunningTime="2026-02-19 18:50:30.433826767 +0000 UTC m=+1249.659267308" Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.539038 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:50:30 crc kubenswrapper[4813]: I0219 18:50:30.993001 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:50:31 crc 
kubenswrapper[4813]: I0219 18:50:31.411440 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6afe8635-761b-4dcc-b2b4-ff6963f474d8","Type":"ContainerStarted","Data":"9e029733d85825b371e364520952b48a1cc2ae4adde3e66c05e62cc539ccb854"} Feb 19 18:50:31 crc kubenswrapper[4813]: I0219 18:50:31.411483 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6afe8635-761b-4dcc-b2b4-ff6963f474d8","Type":"ContainerStarted","Data":"5c7cf94160901f5a3718ce718325e9e256567406ec0e5b0b5bedba3cd5e3747b"} Feb 19 18:50:31 crc kubenswrapper[4813]: I0219 18:50:31.859586 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c594-account-create-update-5gz96" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.006767 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b43b69b2-f014-47e0-a8a7-acb5445dff51-operator-scripts\") pod \"b43b69b2-f014-47e0-a8a7-acb5445dff51\" (UID: \"b43b69b2-f014-47e0-a8a7-acb5445dff51\") " Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.006836 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6gb4\" (UniqueName: \"kubernetes.io/projected/b43b69b2-f014-47e0-a8a7-acb5445dff51-kube-api-access-d6gb4\") pod \"b43b69b2-f014-47e0-a8a7-acb5445dff51\" (UID: \"b43b69b2-f014-47e0-a8a7-acb5445dff51\") " Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.007464 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b43b69b2-f014-47e0-a8a7-acb5445dff51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b43b69b2-f014-47e0-a8a7-acb5445dff51" (UID: "b43b69b2-f014-47e0-a8a7-acb5445dff51"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.051927 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43b69b2-f014-47e0-a8a7-acb5445dff51-kube-api-access-d6gb4" (OuterVolumeSpecName: "kube-api-access-d6gb4") pod "b43b69b2-f014-47e0-a8a7-acb5445dff51" (UID: "b43b69b2-f014-47e0-a8a7-acb5445dff51"). InnerVolumeSpecName "kube-api-access-d6gb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.086596 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rnnlc" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.107394 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/195ab5e4-12b0-4c82-bc80-b109afd5898f-operator-scripts\") pod \"195ab5e4-12b0-4c82-bc80-b109afd5898f\" (UID: \"195ab5e4-12b0-4c82-bc80-b109afd5898f\") " Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.107458 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bntql\" (UniqueName: \"kubernetes.io/projected/195ab5e4-12b0-4c82-bc80-b109afd5898f-kube-api-access-bntql\") pod \"195ab5e4-12b0-4c82-bc80-b109afd5898f\" (UID: \"195ab5e4-12b0-4c82-bc80-b109afd5898f\") " Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.107726 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6gb4\" (UniqueName: \"kubernetes.io/projected/b43b69b2-f014-47e0-a8a7-acb5445dff51-kube-api-access-d6gb4\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.107737 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b43b69b2-f014-47e0-a8a7-acb5445dff51-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 
18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.108167 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-345c-account-create-update-49nsj" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.108585 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/195ab5e4-12b0-4c82-bc80-b109afd5898f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "195ab5e4-12b0-4c82-bc80-b109afd5898f" (UID: "195ab5e4-12b0-4c82-bc80-b109afd5898f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.115481 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195ab5e4-12b0-4c82-bc80-b109afd5898f-kube-api-access-bntql" (OuterVolumeSpecName: "kube-api-access-bntql") pod "195ab5e4-12b0-4c82-bc80-b109afd5898f" (UID: "195ab5e4-12b0-4c82-bc80-b109afd5898f"). InnerVolumeSpecName "kube-api-access-bntql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.116108 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-d44kt" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.213833 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/195ab5e4-12b0-4c82-bc80-b109afd5898f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.213877 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bntql\" (UniqueName: \"kubernetes.io/projected/195ab5e4-12b0-4c82-bc80-b109afd5898f-kube-api-access-bntql\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.284892 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-dxwdc" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.316940 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/487f7774-103e-44b9-a773-e34f77657d2b-operator-scripts\") pod \"487f7774-103e-44b9-a773-e34f77657d2b\" (UID: \"487f7774-103e-44b9-a773-e34f77657d2b\") " Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.317101 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzhv9\" (UniqueName: \"kubernetes.io/projected/03736951-c024-46d1-90b1-dea0d3f528aa-kube-api-access-bzhv9\") pod \"03736951-c024-46d1-90b1-dea0d3f528aa\" (UID: \"03736951-c024-46d1-90b1-dea0d3f528aa\") " Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.317197 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc7g4\" (UniqueName: \"kubernetes.io/projected/487f7774-103e-44b9-a773-e34f77657d2b-kube-api-access-cc7g4\") pod \"487f7774-103e-44b9-a773-e34f77657d2b\" (UID: \"487f7774-103e-44b9-a773-e34f77657d2b\") " Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.317293 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03736951-c024-46d1-90b1-dea0d3f528aa-operator-scripts\") pod \"03736951-c024-46d1-90b1-dea0d3f528aa\" (UID: \"03736951-c024-46d1-90b1-dea0d3f528aa\") " Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.317401 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9smrs\" (UniqueName: \"kubernetes.io/projected/24e9d07f-0344-41f5-817b-c03c6516ae85-kube-api-access-9smrs\") pod \"24e9d07f-0344-41f5-817b-c03c6516ae85\" (UID: \"24e9d07f-0344-41f5-817b-c03c6516ae85\") " Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.318716 4813 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/487f7774-103e-44b9-a773-e34f77657d2b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "487f7774-103e-44b9-a773-e34f77657d2b" (UID: "487f7774-103e-44b9-a773-e34f77657d2b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.320880 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03736951-c024-46d1-90b1-dea0d3f528aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "03736951-c024-46d1-90b1-dea0d3f528aa" (UID: "03736951-c024-46d1-90b1-dea0d3f528aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.325369 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03736951-c024-46d1-90b1-dea0d3f528aa-kube-api-access-bzhv9" (OuterVolumeSpecName: "kube-api-access-bzhv9") pod "03736951-c024-46d1-90b1-dea0d3f528aa" (UID: "03736951-c024-46d1-90b1-dea0d3f528aa"). InnerVolumeSpecName "kube-api-access-bzhv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.330165 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/487f7774-103e-44b9-a773-e34f77657d2b-kube-api-access-cc7g4" (OuterVolumeSpecName: "kube-api-access-cc7g4") pod "487f7774-103e-44b9-a773-e34f77657d2b" (UID: "487f7774-103e-44b9-a773-e34f77657d2b"). InnerVolumeSpecName "kube-api-access-cc7g4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.338968 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e9d07f-0344-41f5-817b-c03c6516ae85-kube-api-access-9smrs" (OuterVolumeSpecName: "kube-api-access-9smrs") pod "24e9d07f-0344-41f5-817b-c03c6516ae85" (UID: "24e9d07f-0344-41f5-817b-c03c6516ae85"). InnerVolumeSpecName "kube-api-access-9smrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.406388 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4d36-account-create-update-ksf9n" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.418633 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e9d07f-0344-41f5-817b-c03c6516ae85-operator-scripts\") pod \"24e9d07f-0344-41f5-817b-c03c6516ae85\" (UID: \"24e9d07f-0344-41f5-817b-c03c6516ae85\") " Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.419071 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24e9d07f-0344-41f5-817b-c03c6516ae85-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24e9d07f-0344-41f5-817b-c03c6516ae85" (UID: "24e9d07f-0344-41f5-817b-c03c6516ae85"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.419368 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzhv9\" (UniqueName: \"kubernetes.io/projected/03736951-c024-46d1-90b1-dea0d3f528aa-kube-api-access-bzhv9\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.419391 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc7g4\" (UniqueName: \"kubernetes.io/projected/487f7774-103e-44b9-a773-e34f77657d2b-kube-api-access-cc7g4\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.419404 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e9d07f-0344-41f5-817b-c03c6516ae85-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.419417 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03736951-c024-46d1-90b1-dea0d3f528aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.419430 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9smrs\" (UniqueName: \"kubernetes.io/projected/24e9d07f-0344-41f5-817b-c03c6516ae85-kube-api-access-9smrs\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.419442 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/487f7774-103e-44b9-a773-e34f77657d2b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.420019 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dxwdc" 
event={"ID":"24e9d07f-0344-41f5-817b-c03c6516ae85","Type":"ContainerDied","Data":"cec968e5961d02f4734b12214b1879f5c8084114ae4f8c57d307b330e35b7f6e"} Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.420061 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cec968e5961d02f4734b12214b1879f5c8084114ae4f8c57d307b330e35b7f6e" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.420042 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dxwdc" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.421690 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-345c-account-create-update-49nsj" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.422095 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-345c-account-create-update-49nsj" event={"ID":"03736951-c024-46d1-90b1-dea0d3f528aa","Type":"ContainerDied","Data":"4aacce4822697354edb5ad9b7286c058f943cad7f2948b837e49c0e19eb30292"} Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.422134 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aacce4822697354edb5ad9b7286c058f943cad7f2948b837e49c0e19eb30292" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.428100 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rnnlc" event={"ID":"195ab5e4-12b0-4c82-bc80-b109afd5898f","Type":"ContainerDied","Data":"71432fb636c5bf860607effbe5943fbf000c9dac2f8ee2f03fbef6505156b75a"} Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.428143 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71432fb636c5bf860607effbe5943fbf000c9dac2f8ee2f03fbef6505156b75a" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.428204 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-rnnlc" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.433345 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4d36-account-create-update-ksf9n" event={"ID":"be7c6283-3a04-4b6b-9419-82b4e91909bb","Type":"ContainerDied","Data":"fa13f3801279170ab42dd8b5111bea28d82bb53ecbbfdd93f34d01da39d96861"} Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.433401 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa13f3801279170ab42dd8b5111bea28d82bb53ecbbfdd93f34d01da39d96861" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.433464 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4d36-account-create-update-ksf9n" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.443103 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-d44kt" event={"ID":"487f7774-103e-44b9-a773-e34f77657d2b","Type":"ContainerDied","Data":"9f7389d968e41fa2f4ddfb29c1b38ef0ee01e52ecfe0f97a3914618c0bd0ec2d"} Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.443158 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f7389d968e41fa2f4ddfb29c1b38ef0ee01e52ecfe0f97a3914618c0bd0ec2d" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.443261 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-d44kt" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.454305 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c594-account-create-update-5gz96" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.459292 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c594-account-create-update-5gz96" event={"ID":"b43b69b2-f014-47e0-a8a7-acb5445dff51","Type":"ContainerDied","Data":"5d7d7f85bb7f9cc7f218531587b960baa8112c31c7062f410e939b9de0a86cf1"} Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.459345 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d7d7f85bb7f9cc7f218531587b960baa8112c31c7062f410e939b9de0a86cf1" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.520173 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zb5z\" (UniqueName: \"kubernetes.io/projected/be7c6283-3a04-4b6b-9419-82b4e91909bb-kube-api-access-5zb5z\") pod \"be7c6283-3a04-4b6b-9419-82b4e91909bb\" (UID: \"be7c6283-3a04-4b6b-9419-82b4e91909bb\") " Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.520306 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be7c6283-3a04-4b6b-9419-82b4e91909bb-operator-scripts\") pod \"be7c6283-3a04-4b6b-9419-82b4e91909bb\" (UID: \"be7c6283-3a04-4b6b-9419-82b4e91909bb\") " Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.520587 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be7c6283-3a04-4b6b-9419-82b4e91909bb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be7c6283-3a04-4b6b-9419-82b4e91909bb" (UID: "be7c6283-3a04-4b6b-9419-82b4e91909bb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.521019 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be7c6283-3a04-4b6b-9419-82b4e91909bb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.525820 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be7c6283-3a04-4b6b-9419-82b4e91909bb-kube-api-access-5zb5z" (OuterVolumeSpecName: "kube-api-access-5zb5z") pod "be7c6283-3a04-4b6b-9419-82b4e91909bb" (UID: "be7c6283-3a04-4b6b-9419-82b4e91909bb"). InnerVolumeSpecName "kube-api-access-5zb5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:32 crc kubenswrapper[4813]: I0219 18:50:32.622695 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zb5z\" (UniqueName: \"kubernetes.io/projected/be7c6283-3a04-4b6b-9419-82b4e91909bb-kube-api-access-5zb5z\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:33 crc kubenswrapper[4813]: I0219 18:50:33.462731 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6afe8635-761b-4dcc-b2b4-ff6963f474d8","Type":"ContainerStarted","Data":"0206f662446a4640a2a2a4d84b8c38c156f42108b8dd8ee7e9f13be6d3df0aa5"} Feb 19 18:50:33 crc kubenswrapper[4813]: I0219 18:50:33.463076 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6afe8635-761b-4dcc-b2b4-ff6963f474d8","Type":"ContainerStarted","Data":"accd733dfb0a7c65272f9891db6ea8f0b431a14866fe28f386effabcecb8fc2b"} Feb 19 18:50:34 crc kubenswrapper[4813]: I0219 18:50:34.491010 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6afe8635-761b-4dcc-b2b4-ff6963f474d8","Type":"ContainerStarted","Data":"1d2afcb379a400372d4098a63f5de68b91f0b29270c3b44019b9649cd59da8b4"} Feb 19 18:50:34 crc 
kubenswrapper[4813]: I0219 18:50:34.491661 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="ceilometer-central-agent" containerID="cri-o://9e029733d85825b371e364520952b48a1cc2ae4adde3e66c05e62cc539ccb854" gracePeriod=30 Feb 19 18:50:34 crc kubenswrapper[4813]: I0219 18:50:34.491947 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 18:50:34 crc kubenswrapper[4813]: I0219 18:50:34.492352 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="proxy-httpd" containerID="cri-o://1d2afcb379a400372d4098a63f5de68b91f0b29270c3b44019b9649cd59da8b4" gracePeriod=30 Feb 19 18:50:34 crc kubenswrapper[4813]: I0219 18:50:34.492410 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="sg-core" containerID="cri-o://0206f662446a4640a2a2a4d84b8c38c156f42108b8dd8ee7e9f13be6d3df0aa5" gracePeriod=30 Feb 19 18:50:34 crc kubenswrapper[4813]: I0219 18:50:34.492450 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="ceilometer-notification-agent" containerID="cri-o://accd733dfb0a7c65272f9891db6ea8f0b431a14866fe28f386effabcecb8fc2b" gracePeriod=30 Feb 19 18:50:34 crc kubenswrapper[4813]: I0219 18:50:34.523078 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8205065409999999 podStartE2EDuration="5.523059673s" podCreationTimestamp="2026-02-19 18:50:29 +0000 UTC" firstStartedPulling="2026-02-19 18:50:30.52958465 +0000 UTC m=+1249.755025191" lastFinishedPulling="2026-02-19 18:50:34.232137792 +0000 UTC m=+1253.457578323" 
observedRunningTime="2026-02-19 18:50:34.518241685 +0000 UTC m=+1253.743682226" watchObservedRunningTime="2026-02-19 18:50:34.523059673 +0000 UTC m=+1253.748500214" Feb 19 18:50:35 crc kubenswrapper[4813]: I0219 18:50:35.506878 4813 generic.go:334] "Generic (PLEG): container finished" podID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerID="0206f662446a4640a2a2a4d84b8c38c156f42108b8dd8ee7e9f13be6d3df0aa5" exitCode=2 Feb 19 18:50:35 crc kubenswrapper[4813]: I0219 18:50:35.507334 4813 generic.go:334] "Generic (PLEG): container finished" podID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerID="accd733dfb0a7c65272f9891db6ea8f0b431a14866fe28f386effabcecb8fc2b" exitCode=0 Feb 19 18:50:35 crc kubenswrapper[4813]: I0219 18:50:35.506972 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6afe8635-761b-4dcc-b2b4-ff6963f474d8","Type":"ContainerDied","Data":"0206f662446a4640a2a2a4d84b8c38c156f42108b8dd8ee7e9f13be6d3df0aa5"} Feb 19 18:50:35 crc kubenswrapper[4813]: I0219 18:50:35.507373 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6afe8635-761b-4dcc-b2b4-ff6963f474d8","Type":"ContainerDied","Data":"accd733dfb0a7c65272f9891db6ea8f0b431a14866fe28f386effabcecb8fc2b"} Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.634207 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.635512 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.980799 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fvfzp"] Feb 19 18:50:37 crc kubenswrapper[4813]: E0219 18:50:37.981247 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b43b69b2-f014-47e0-a8a7-acb5445dff51" 
containerName="mariadb-account-create-update" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.981269 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43b69b2-f014-47e0-a8a7-acb5445dff51" containerName="mariadb-account-create-update" Feb 19 18:50:37 crc kubenswrapper[4813]: E0219 18:50:37.981292 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03736951-c024-46d1-90b1-dea0d3f528aa" containerName="mariadb-account-create-update" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.981300 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="03736951-c024-46d1-90b1-dea0d3f528aa" containerName="mariadb-account-create-update" Feb 19 18:50:37 crc kubenswrapper[4813]: E0219 18:50:37.981314 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195ab5e4-12b0-4c82-bc80-b109afd5898f" containerName="mariadb-database-create" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.981322 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="195ab5e4-12b0-4c82-bc80-b109afd5898f" containerName="mariadb-database-create" Feb 19 18:50:37 crc kubenswrapper[4813]: E0219 18:50:37.981334 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be7c6283-3a04-4b6b-9419-82b4e91909bb" containerName="mariadb-account-create-update" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.981344 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="be7c6283-3a04-4b6b-9419-82b4e91909bb" containerName="mariadb-account-create-update" Feb 19 18:50:37 crc kubenswrapper[4813]: E0219 18:50:37.981371 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487f7774-103e-44b9-a773-e34f77657d2b" containerName="mariadb-database-create" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.981379 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="487f7774-103e-44b9-a773-e34f77657d2b" containerName="mariadb-database-create" Feb 19 18:50:37 crc kubenswrapper[4813]: E0219 18:50:37.981404 4813 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e9d07f-0344-41f5-817b-c03c6516ae85" containerName="mariadb-database-create" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.981412 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e9d07f-0344-41f5-817b-c03c6516ae85" containerName="mariadb-database-create" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.981608 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b43b69b2-f014-47e0-a8a7-acb5445dff51" containerName="mariadb-account-create-update" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.981627 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="be7c6283-3a04-4b6b-9419-82b4e91909bb" containerName="mariadb-account-create-update" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.981645 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="03736951-c024-46d1-90b1-dea0d3f528aa" containerName="mariadb-account-create-update" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.981662 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="195ab5e4-12b0-4c82-bc80-b109afd5898f" containerName="mariadb-database-create" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.981675 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e9d07f-0344-41f5-817b-c03c6516ae85" containerName="mariadb-database-create" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.981692 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="487f7774-103e-44b9-a773-e34f77657d2b" containerName="mariadb-database-create" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.982450 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.991304 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.991403 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.991688 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-trm9l" Feb 19 18:50:37 crc kubenswrapper[4813]: I0219 18:50:37.993876 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fvfzp"] Feb 19 18:50:38 crc kubenswrapper[4813]: I0219 18:50:38.123574 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-config-data\") pod \"nova-cell0-conductor-db-sync-fvfzp\" (UID: \"6f306bed-c42d-4853-b2ce-929c6356929d\") " pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:50:38 crc kubenswrapper[4813]: I0219 18:50:38.123627 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-scripts\") pod \"nova-cell0-conductor-db-sync-fvfzp\" (UID: \"6f306bed-c42d-4853-b2ce-929c6356929d\") " pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:50:38 crc kubenswrapper[4813]: I0219 18:50:38.123705 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb2bt\" (UniqueName: \"kubernetes.io/projected/6f306bed-c42d-4853-b2ce-929c6356929d-kube-api-access-sb2bt\") pod \"nova-cell0-conductor-db-sync-fvfzp\" (UID: \"6f306bed-c42d-4853-b2ce-929c6356929d\") " 
pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:50:38 crc kubenswrapper[4813]: I0219 18:50:38.123784 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fvfzp\" (UID: \"6f306bed-c42d-4853-b2ce-929c6356929d\") " pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:50:38 crc kubenswrapper[4813]: I0219 18:50:38.225672 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb2bt\" (UniqueName: \"kubernetes.io/projected/6f306bed-c42d-4853-b2ce-929c6356929d-kube-api-access-sb2bt\") pod \"nova-cell0-conductor-db-sync-fvfzp\" (UID: \"6f306bed-c42d-4853-b2ce-929c6356929d\") " pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:50:38 crc kubenswrapper[4813]: I0219 18:50:38.226067 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fvfzp\" (UID: \"6f306bed-c42d-4853-b2ce-929c6356929d\") " pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:50:38 crc kubenswrapper[4813]: I0219 18:50:38.226224 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-config-data\") pod \"nova-cell0-conductor-db-sync-fvfzp\" (UID: \"6f306bed-c42d-4853-b2ce-929c6356929d\") " pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:50:38 crc kubenswrapper[4813]: I0219 18:50:38.226253 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-scripts\") pod \"nova-cell0-conductor-db-sync-fvfzp\" (UID: 
\"6f306bed-c42d-4853-b2ce-929c6356929d\") " pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:50:38 crc kubenswrapper[4813]: I0219 18:50:38.231429 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-scripts\") pod \"nova-cell0-conductor-db-sync-fvfzp\" (UID: \"6f306bed-c42d-4853-b2ce-929c6356929d\") " pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:50:38 crc kubenswrapper[4813]: I0219 18:50:38.231769 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fvfzp\" (UID: \"6f306bed-c42d-4853-b2ce-929c6356929d\") " pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:50:38 crc kubenswrapper[4813]: I0219 18:50:38.238636 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-config-data\") pod \"nova-cell0-conductor-db-sync-fvfzp\" (UID: \"6f306bed-c42d-4853-b2ce-929c6356929d\") " pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:50:38 crc kubenswrapper[4813]: I0219 18:50:38.246496 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb2bt\" (UniqueName: \"kubernetes.io/projected/6f306bed-c42d-4853-b2ce-929c6356929d-kube-api-access-sb2bt\") pod \"nova-cell0-conductor-db-sync-fvfzp\" (UID: \"6f306bed-c42d-4853-b2ce-929c6356929d\") " pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:50:38 crc kubenswrapper[4813]: I0219 18:50:38.301048 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:50:38 crc kubenswrapper[4813]: I0219 18:50:38.873369 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fvfzp"] Feb 19 18:50:38 crc kubenswrapper[4813]: W0219 18:50:38.882816 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f306bed_c42d_4853_b2ce_929c6356929d.slice/crio-926b6693aa0219065460a4a3833af8821f7dfe9ec117f7323749af011091acf9 WatchSource:0}: Error finding container 926b6693aa0219065460a4a3833af8821f7dfe9ec117f7323749af011091acf9: Status 404 returned error can't find the container with id 926b6693aa0219065460a4a3833af8821f7dfe9ec117f7323749af011091acf9 Feb 19 18:50:39 crc kubenswrapper[4813]: I0219 18:50:39.592642 4813 generic.go:334] "Generic (PLEG): container finished" podID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerID="9e029733d85825b371e364520952b48a1cc2ae4adde3e66c05e62cc539ccb854" exitCode=0 Feb 19 18:50:39 crc kubenswrapper[4813]: I0219 18:50:39.592724 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6afe8635-761b-4dcc-b2b4-ff6963f474d8","Type":"ContainerDied","Data":"9e029733d85825b371e364520952b48a1cc2ae4adde3e66c05e62cc539ccb854"} Feb 19 18:50:39 crc kubenswrapper[4813]: I0219 18:50:39.594720 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fvfzp" event={"ID":"6f306bed-c42d-4853-b2ce-929c6356929d","Type":"ContainerStarted","Data":"926b6693aa0219065460a4a3833af8821f7dfe9ec117f7323749af011091acf9"} Feb 19 18:50:42 crc kubenswrapper[4813]: I0219 18:50:42.018180 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:42 crc kubenswrapper[4813]: I0219 18:50:42.254019 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:50:42 crc kubenswrapper[4813]: I0219 18:50:42.315341 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5777547648-br5pd"] Feb 19 18:50:42 crc kubenswrapper[4813]: I0219 18:50:42.318160 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5777547648-br5pd" podUID="f91fa4c5-f63e-4a98-be87-3d154cdc6db0" containerName="placement-api" containerID="cri-o://691381488d7f78058688fb4f5e70199f772f40bff52786b7ec81259c77c0b817" gracePeriod=30 Feb 19 18:50:42 crc kubenswrapper[4813]: I0219 18:50:42.318279 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5777547648-br5pd" podUID="f91fa4c5-f63e-4a98-be87-3d154cdc6db0" containerName="placement-log" containerID="cri-o://6924f9ac4ffac182d1d22d2801b9ecfb7d5714952dd528d4dc0ed31a85a2d612" gracePeriod=30 Feb 19 18:50:42 crc kubenswrapper[4813]: I0219 18:50:42.628642 4813 generic.go:334] "Generic (PLEG): container finished" podID="f91fa4c5-f63e-4a98-be87-3d154cdc6db0" containerID="6924f9ac4ffac182d1d22d2801b9ecfb7d5714952dd528d4dc0ed31a85a2d612" exitCode=143 Feb 19 18:50:42 crc kubenswrapper[4813]: I0219 18:50:42.629091 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5777547648-br5pd" event={"ID":"f91fa4c5-f63e-4a98-be87-3d154cdc6db0","Type":"ContainerDied","Data":"6924f9ac4ffac182d1d22d2801b9ecfb7d5714952dd528d4dc0ed31a85a2d612"} Feb 19 18:50:45 crc kubenswrapper[4813]: I0219 18:50:45.656249 4813 generic.go:334] "Generic (PLEG): container finished" podID="f91fa4c5-f63e-4a98-be87-3d154cdc6db0" containerID="691381488d7f78058688fb4f5e70199f772f40bff52786b7ec81259c77c0b817" exitCode=0 Feb 19 18:50:45 crc kubenswrapper[4813]: I0219 18:50:45.656438 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5777547648-br5pd" 
event={"ID":"f91fa4c5-f63e-4a98-be87-3d154cdc6db0","Type":"ContainerDied","Data":"691381488d7f78058688fb4f5e70199f772f40bff52786b7ec81259c77c0b817"} Feb 19 18:50:46 crc kubenswrapper[4813]: I0219 18:50:46.995941 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:50:46 crc kubenswrapper[4813]: I0219 18:50:46.996667 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5d67b04b-756d-4c0d-93bf-ce2766c48cd9" containerName="glance-log" containerID="cri-o://5e62d4e283cff7634e214723697d9b2fcddf021bab89ee818599c42f7ab5f8c4" gracePeriod=30 Feb 19 18:50:46 crc kubenswrapper[4813]: I0219 18:50:46.997077 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5d67b04b-756d-4c0d-93bf-ce2766c48cd9" containerName="glance-httpd" containerID="cri-o://352bd4fd06c385c9f5e35f72e77478143c55653358a40bf88d2506b71b54a63d" gracePeriod=30 Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.251258 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5777547648-br5pd" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.360444 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-combined-ca-bundle\") pod \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.360769 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r4z5\" (UniqueName: \"kubernetes.io/projected/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-kube-api-access-8r4z5\") pod \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.360838 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-scripts\") pod \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.360920 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-config-data\") pod \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.360946 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-public-tls-certs\") pod \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.361022 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-logs\") pod \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.361085 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-internal-tls-certs\") pod \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\" (UID: \"f91fa4c5-f63e-4a98-be87-3d154cdc6db0\") " Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.361425 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-logs" (OuterVolumeSpecName: "logs") pod "f91fa4c5-f63e-4a98-be87-3d154cdc6db0" (UID: "f91fa4c5-f63e-4a98-be87-3d154cdc6db0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.361504 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.366315 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-scripts" (OuterVolumeSpecName: "scripts") pod "f91fa4c5-f63e-4a98-be87-3d154cdc6db0" (UID: "f91fa4c5-f63e-4a98-be87-3d154cdc6db0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.366333 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-kube-api-access-8r4z5" (OuterVolumeSpecName: "kube-api-access-8r4z5") pod "f91fa4c5-f63e-4a98-be87-3d154cdc6db0" (UID: "f91fa4c5-f63e-4a98-be87-3d154cdc6db0"). 
InnerVolumeSpecName "kube-api-access-8r4z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.413151 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-config-data" (OuterVolumeSpecName: "config-data") pod "f91fa4c5-f63e-4a98-be87-3d154cdc6db0" (UID: "f91fa4c5-f63e-4a98-be87-3d154cdc6db0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.413257 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f91fa4c5-f63e-4a98-be87-3d154cdc6db0" (UID: "f91fa4c5-f63e-4a98-be87-3d154cdc6db0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.463405 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.463439 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.463450 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.463462 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r4z5\" (UniqueName: 
\"kubernetes.io/projected/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-kube-api-access-8r4z5\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.468798 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f91fa4c5-f63e-4a98-be87-3d154cdc6db0" (UID: "f91fa4c5-f63e-4a98-be87-3d154cdc6db0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.478165 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f91fa4c5-f63e-4a98-be87-3d154cdc6db0" (UID: "f91fa4c5-f63e-4a98-be87-3d154cdc6db0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.564633 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.564662 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91fa4c5-f63e-4a98-be87-3d154cdc6db0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.674515 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fvfzp" event={"ID":"6f306bed-c42d-4853-b2ce-929c6356929d","Type":"ContainerStarted","Data":"c515b6612026ed474298fe3c18c2bf37548d0465b0492238ee919dbb763eac6c"} Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.676519 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="5d67b04b-756d-4c0d-93bf-ce2766c48cd9" containerID="5e62d4e283cff7634e214723697d9b2fcddf021bab89ee818599c42f7ab5f8c4" exitCode=143 Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.676595 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d67b04b-756d-4c0d-93bf-ce2766c48cd9","Type":"ContainerDied","Data":"5e62d4e283cff7634e214723697d9b2fcddf021bab89ee818599c42f7ab5f8c4"} Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.678114 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5777547648-br5pd" event={"ID":"f91fa4c5-f63e-4a98-be87-3d154cdc6db0","Type":"ContainerDied","Data":"f46f23d585165b549d6ec7aa1351bc2940ccc533bbf83b5ce1490e8c79c0863e"} Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.678161 4813 scope.go:117] "RemoveContainer" containerID="691381488d7f78058688fb4f5e70199f772f40bff52786b7ec81259c77c0b817" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.678173 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5777547648-br5pd" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.697297 4813 scope.go:117] "RemoveContainer" containerID="6924f9ac4ffac182d1d22d2801b9ecfb7d5714952dd528d4dc0ed31a85a2d612" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.700723 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-fvfzp" podStartSLOduration=2.690500616 podStartE2EDuration="10.700698278s" podCreationTimestamp="2026-02-19 18:50:37 +0000 UTC" firstStartedPulling="2026-02-19 18:50:38.885286337 +0000 UTC m=+1258.110726878" lastFinishedPulling="2026-02-19 18:50:46.895483999 +0000 UTC m=+1266.120924540" observedRunningTime="2026-02-19 18:50:47.689062749 +0000 UTC m=+1266.914503300" watchObservedRunningTime="2026-02-19 18:50:47.700698278 +0000 UTC m=+1266.926138809" Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.717909 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5777547648-br5pd"] Feb 19 18:50:47 crc kubenswrapper[4813]: I0219 18:50:47.727027 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5777547648-br5pd"] Feb 19 18:50:48 crc kubenswrapper[4813]: I0219 18:50:48.393976 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:50:48 crc kubenswrapper[4813]: I0219 18:50:48.394621 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6c68d02b-5a16-4663-b109-265ae29b311b" containerName="glance-log" containerID="cri-o://128dac3b620e663ac00e57ee6e601177e3f89b5b91cf0c39b2978ce209425fab" gracePeriod=30 Feb 19 18:50:48 crc kubenswrapper[4813]: I0219 18:50:48.394724 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6c68d02b-5a16-4663-b109-265ae29b311b" containerName="glance-httpd" 
containerID="cri-o://8e12b692f541ff3e34487b31b7479a4414bdcfbee0b1c3a4dfc4db8b7258f44e" gracePeriod=30 Feb 19 18:50:48 crc kubenswrapper[4813]: I0219 18:50:48.687838 4813 generic.go:334] "Generic (PLEG): container finished" podID="6c68d02b-5a16-4663-b109-265ae29b311b" containerID="128dac3b620e663ac00e57ee6e601177e3f89b5b91cf0c39b2978ce209425fab" exitCode=143 Feb 19 18:50:48 crc kubenswrapper[4813]: I0219 18:50:48.687907 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c68d02b-5a16-4663-b109-265ae29b311b","Type":"ContainerDied","Data":"128dac3b620e663ac00e57ee6e601177e3f89b5b91cf0c39b2978ce209425fab"} Feb 19 18:50:49 crc kubenswrapper[4813]: I0219 18:50:49.480744 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91fa4c5-f63e-4a98-be87-3d154cdc6db0" path="/var/lib/kubelet/pods/f91fa4c5-f63e-4a98-be87-3d154cdc6db0/volumes" Feb 19 18:50:50 crc kubenswrapper[4813]: I0219 18:50:50.709836 4813 generic.go:334] "Generic (PLEG): container finished" podID="5d67b04b-756d-4c0d-93bf-ce2766c48cd9" containerID="352bd4fd06c385c9f5e35f72e77478143c55653358a40bf88d2506b71b54a63d" exitCode=0 Feb 19 18:50:50 crc kubenswrapper[4813]: I0219 18:50:50.709881 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d67b04b-756d-4c0d-93bf-ce2766c48cd9","Type":"ContainerDied","Data":"352bd4fd06c385c9f5e35f72e77478143c55653358a40bf88d2506b71b54a63d"} Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.094021 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.231799 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-logs\") pod \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.231876 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-httpd-run\") pod \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.231926 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-public-tls-certs\") pod \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.232041 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.232285 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pwk7\" (UniqueName: \"kubernetes.io/projected/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-kube-api-access-9pwk7\") pod \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.232311 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-scripts\") pod \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.232353 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-config-data\") pod \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.232410 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-combined-ca-bundle\") pod \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\" (UID: \"5d67b04b-756d-4c0d-93bf-ce2766c48cd9\") " Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.232652 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-logs" (OuterVolumeSpecName: "logs") pod "5d67b04b-756d-4c0d-93bf-ce2766c48cd9" (UID: "5d67b04b-756d-4c0d-93bf-ce2766c48cd9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.232702 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5d67b04b-756d-4c0d-93bf-ce2766c48cd9" (UID: "5d67b04b-756d-4c0d-93bf-ce2766c48cd9"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.233107 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.233129 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.238296 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-kube-api-access-9pwk7" (OuterVolumeSpecName: "kube-api-access-9pwk7") pod "5d67b04b-756d-4c0d-93bf-ce2766c48cd9" (UID: "5d67b04b-756d-4c0d-93bf-ce2766c48cd9"). InnerVolumeSpecName "kube-api-access-9pwk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.244175 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-scripts" (OuterVolumeSpecName: "scripts") pod "5d67b04b-756d-4c0d-93bf-ce2766c48cd9" (UID: "5d67b04b-756d-4c0d-93bf-ce2766c48cd9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.244282 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "5d67b04b-756d-4c0d-93bf-ce2766c48cd9" (UID: "5d67b04b-756d-4c0d-93bf-ce2766c48cd9"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.262797 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d67b04b-756d-4c0d-93bf-ce2766c48cd9" (UID: "5d67b04b-756d-4c0d-93bf-ce2766c48cd9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.299042 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-config-data" (OuterVolumeSpecName: "config-data") pod "5d67b04b-756d-4c0d-93bf-ce2766c48cd9" (UID: "5d67b04b-756d-4c0d-93bf-ce2766c48cd9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.307017 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5d67b04b-756d-4c0d-93bf-ce2766c48cd9" (UID: "5d67b04b-756d-4c0d-93bf-ce2766c48cd9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.335064 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.335111 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.335122 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pwk7\" (UniqueName: \"kubernetes.io/projected/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-kube-api-access-9pwk7\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.335134 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.335143 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.335152 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d67b04b-756d-4c0d-93bf-ce2766c48cd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.354329 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.436819 4813 reconciler_common.go:293] "Volume detached for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.719391 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d67b04b-756d-4c0d-93bf-ce2766c48cd9","Type":"ContainerDied","Data":"3eade5c848599515406bd7226bcbbfa05218e5ec8365f17338ecf20972b9bc3c"} Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.719713 4813 scope.go:117] "RemoveContainer" containerID="352bd4fd06c385c9f5e35f72e77478143c55653358a40bf88d2506b71b54a63d" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.719730 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.724138 4813 generic.go:334] "Generic (PLEG): container finished" podID="6c68d02b-5a16-4663-b109-265ae29b311b" containerID="8e12b692f541ff3e34487b31b7479a4414bdcfbee0b1c3a4dfc4db8b7258f44e" exitCode=0 Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.724184 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c68d02b-5a16-4663-b109-265ae29b311b","Type":"ContainerDied","Data":"8e12b692f541ff3e34487b31b7479a4414bdcfbee0b1c3a4dfc4db8b7258f44e"} Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.744289 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.748914 4813 scope.go:117] "RemoveContainer" containerID="5e62d4e283cff7634e214723697d9b2fcddf021bab89ee818599c42f7ab5f8c4" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.764156 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.820323 4813 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-external-api-0"] Feb 19 18:50:51 crc kubenswrapper[4813]: E0219 18:50:51.820786 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d67b04b-756d-4c0d-93bf-ce2766c48cd9" containerName="glance-log" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.820807 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d67b04b-756d-4c0d-93bf-ce2766c48cd9" containerName="glance-log" Feb 19 18:50:51 crc kubenswrapper[4813]: E0219 18:50:51.820829 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91fa4c5-f63e-4a98-be87-3d154cdc6db0" containerName="placement-log" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.820839 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91fa4c5-f63e-4a98-be87-3d154cdc6db0" containerName="placement-log" Feb 19 18:50:51 crc kubenswrapper[4813]: E0219 18:50:51.820864 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d67b04b-756d-4c0d-93bf-ce2766c48cd9" containerName="glance-httpd" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.820872 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d67b04b-756d-4c0d-93bf-ce2766c48cd9" containerName="glance-httpd" Feb 19 18:50:51 crc kubenswrapper[4813]: E0219 18:50:51.820883 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91fa4c5-f63e-4a98-be87-3d154cdc6db0" containerName="placement-api" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.820892 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91fa4c5-f63e-4a98-be87-3d154cdc6db0" containerName="placement-api" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.821130 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d67b04b-756d-4c0d-93bf-ce2766c48cd9" containerName="glance-httpd" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.821161 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d67b04b-756d-4c0d-93bf-ce2766c48cd9" 
containerName="glance-log" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.821177 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91fa4c5-f63e-4a98-be87-3d154cdc6db0" containerName="placement-log" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.821189 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91fa4c5-f63e-4a98-be87-3d154cdc6db0" containerName="placement-api" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.822323 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.825074 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.825211 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.836562 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.944425 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea3ff24-c40b-432b-a2f8-522284d17ff0-logs\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.944486 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 
18:50:51.944523 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.944558 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.944628 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.944672 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:51 crc kubenswrapper[4813]: I0219 18:50:51.944703 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp6nc\" (UniqueName: \"kubernetes.io/projected/2ea3ff24-c40b-432b-a2f8-522284d17ff0-kube-api-access-vp6nc\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:51 crc 
kubenswrapper[4813]: I0219 18:50:51.944760 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ea3ff24-c40b-432b-a2f8-522284d17ff0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.045868 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp6nc\" (UniqueName: \"kubernetes.io/projected/2ea3ff24-c40b-432b-a2f8-522284d17ff0-kube-api-access-vp6nc\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.045987 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ea3ff24-c40b-432b-a2f8-522284d17ff0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.046100 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea3ff24-c40b-432b-a2f8-522284d17ff0-logs\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.046146 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.046174 
4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.046215 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.046557 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.046677 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.047191 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.046558 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ea3ff24-c40b-432b-a2f8-522284d17ff0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.046852 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea3ff24-c40b-432b-a2f8-522284d17ff0-logs\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.052155 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-scripts\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.053387 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.064028 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.066450 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp6nc\" (UniqueName: 
\"kubernetes.io/projected/2ea3ff24-c40b-432b-a2f8-522284d17ff0-kube-api-access-vp6nc\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.098333 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-config-data\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.098441 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.148355 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.155631 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.253670 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4j8z\" (UniqueName: \"kubernetes.io/projected/6c68d02b-5a16-4663-b109-265ae29b311b-kube-api-access-m4j8z\") pod \"6c68d02b-5a16-4663-b109-265ae29b311b\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.254527 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-config-data\") pod \"6c68d02b-5a16-4663-b109-265ae29b311b\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.254576 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-internal-tls-certs\") pod \"6c68d02b-5a16-4663-b109-265ae29b311b\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.254635 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c68d02b-5a16-4663-b109-265ae29b311b-httpd-run\") pod \"6c68d02b-5a16-4663-b109-265ae29b311b\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.254657 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"6c68d02b-5a16-4663-b109-265ae29b311b\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.254743 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-scripts\") pod \"6c68d02b-5a16-4663-b109-265ae29b311b\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.254769 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-combined-ca-bundle\") pod \"6c68d02b-5a16-4663-b109-265ae29b311b\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.254806 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c68d02b-5a16-4663-b109-265ae29b311b-logs\") pod \"6c68d02b-5a16-4663-b109-265ae29b311b\" (UID: \"6c68d02b-5a16-4663-b109-265ae29b311b\") " Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.255467 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c68d02b-5a16-4663-b109-265ae29b311b-logs" (OuterVolumeSpecName: "logs") pod "6c68d02b-5a16-4663-b109-265ae29b311b" (UID: "6c68d02b-5a16-4663-b109-265ae29b311b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.255604 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c68d02b-5a16-4663-b109-265ae29b311b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6c68d02b-5a16-4663-b109-265ae29b311b" (UID: "6c68d02b-5a16-4663-b109-265ae29b311b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.259672 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-scripts" (OuterVolumeSpecName: "scripts") pod "6c68d02b-5a16-4663-b109-265ae29b311b" (UID: "6c68d02b-5a16-4663-b109-265ae29b311b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.261175 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c68d02b-5a16-4663-b109-265ae29b311b-kube-api-access-m4j8z" (OuterVolumeSpecName: "kube-api-access-m4j8z") pod "6c68d02b-5a16-4663-b109-265ae29b311b" (UID: "6c68d02b-5a16-4663-b109-265ae29b311b"). InnerVolumeSpecName "kube-api-access-m4j8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.267134 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "6c68d02b-5a16-4663-b109-265ae29b311b" (UID: "6c68d02b-5a16-4663-b109-265ae29b311b"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.311081 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c68d02b-5a16-4663-b109-265ae29b311b" (UID: "6c68d02b-5a16-4663-b109-265ae29b311b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.340212 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-config-data" (OuterVolumeSpecName: "config-data") pod "6c68d02b-5a16-4663-b109-265ae29b311b" (UID: "6c68d02b-5a16-4663-b109-265ae29b311b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.341223 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6c68d02b-5a16-4663-b109-265ae29b311b" (UID: "6c68d02b-5a16-4663-b109-265ae29b311b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.356779 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c68d02b-5a16-4663-b109-265ae29b311b-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.356826 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4j8z\" (UniqueName: \"kubernetes.io/projected/6c68d02b-5a16-4663-b109-265ae29b311b-kube-api-access-m4j8z\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.356854 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.356871 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 
18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.356884 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c68d02b-5a16-4663-b109-265ae29b311b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.356939 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.356967 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.356979 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c68d02b-5a16-4663-b109-265ae29b311b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.384079 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.473061 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.734831 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c68d02b-5a16-4663-b109-265ae29b311b","Type":"ContainerDied","Data":"5f03789ae2df4f55fc6133f731f3188e9d4921b3b477f5a0541d4388d60c2d52"} Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.734896 4813 scope.go:117] "RemoveContainer" containerID="8e12b692f541ff3e34487b31b7479a4414bdcfbee0b1c3a4dfc4db8b7258f44e" Feb 
19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.734882 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.756238 4813 scope.go:117] "RemoveContainer" containerID="128dac3b620e663ac00e57ee6e601177e3f89b5b91cf0c39b2978ce209425fab" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.772749 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.791912 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.810485 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:50:52 crc kubenswrapper[4813]: E0219 18:50:52.811043 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c68d02b-5a16-4663-b109-265ae29b311b" containerName="glance-httpd" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.811068 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c68d02b-5a16-4663-b109-265ae29b311b" containerName="glance-httpd" Feb 19 18:50:52 crc kubenswrapper[4813]: E0219 18:50:52.811101 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c68d02b-5a16-4663-b109-265ae29b311b" containerName="glance-log" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.811111 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c68d02b-5a16-4663-b109-265ae29b311b" containerName="glance-log" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.811373 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c68d02b-5a16-4663-b109-265ae29b311b" containerName="glance-httpd" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.811398 4813 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6c68d02b-5a16-4663-b109-265ae29b311b" containerName="glance-log" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.812512 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.815001 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.815169 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.829139 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.880468 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.880726 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.880783 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc 
kubenswrapper[4813]: I0219 18:50:52.880853 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.880907 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.880983 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvtx9\" (UniqueName: \"kubernetes.io/projected/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-kube-api-access-bvtx9\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.881035 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-logs\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.881198 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.897221 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.988306 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.988353 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.988388 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.988418 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.988449 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvtx9\" (UniqueName: 
\"kubernetes.io/projected/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-kube-api-access-bvtx9\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.988477 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-logs\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.988519 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.988538 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.988546 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.990061 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.990367 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-logs\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.993265 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.994146 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.994216 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:52 crc kubenswrapper[4813]: I0219 18:50:52.994843 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 18:50:53 crc kubenswrapper[4813]: I0219 18:50:53.012608 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvtx9\" (UniqueName: \"kubernetes.io/projected/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-kube-api-access-bvtx9\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:53 crc kubenswrapper[4813]: I0219 18:50:53.015682 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " pod="openstack/glance-default-internal-api-0" Feb 19 18:50:53 crc kubenswrapper[4813]: I0219 18:50:53.147901 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 18:50:53 crc kubenswrapper[4813]: I0219 18:50:53.492734 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d67b04b-756d-4c0d-93bf-ce2766c48cd9" path="/var/lib/kubelet/pods/5d67b04b-756d-4c0d-93bf-ce2766c48cd9/volumes" Feb 19 18:50:53 crc kubenswrapper[4813]: I0219 18:50:53.494036 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c68d02b-5a16-4663-b109-265ae29b311b" path="/var/lib/kubelet/pods/6c68d02b-5a16-4663-b109-265ae29b311b/volumes" Feb 19 18:50:53 crc kubenswrapper[4813]: I0219 18:50:53.747580 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ea3ff24-c40b-432b-a2f8-522284d17ff0","Type":"ContainerStarted","Data":"c739347572fde9810eb2e984054e31d8f98d420e8deb6f11d5a7c741a7bd9e0a"} Feb 19 18:50:53 crc kubenswrapper[4813]: I0219 18:50:53.747636 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"2ea3ff24-c40b-432b-a2f8-522284d17ff0","Type":"ContainerStarted","Data":"4ad4c2771908f3a1ca1fad2fdf7293a77d8b5127e771d75f49e3fa533ff07b1b"} Feb 19 18:50:53 crc kubenswrapper[4813]: I0219 18:50:53.791108 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:50:53 crc kubenswrapper[4813]: W0219 18:50:53.798412 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9c7a4f0_3d5e_4e7a_8958_8a2022de3394.slice/crio-7b6c78b69661dd4aa07ea77dc1e4e080726c89d13470e2a0867e6cb432f6e447 WatchSource:0}: Error finding container 7b6c78b69661dd4aa07ea77dc1e4e080726c89d13470e2a0867e6cb432f6e447: Status 404 returned error can't find the container with id 7b6c78b69661dd4aa07ea77dc1e4e080726c89d13470e2a0867e6cb432f6e447 Feb 19 18:50:54 crc kubenswrapper[4813]: I0219 18:50:54.770407 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ea3ff24-c40b-432b-a2f8-522284d17ff0","Type":"ContainerStarted","Data":"cdcf02dfe81d5acf0aa0015ceb87d0dc57996ba7b45c419fa6770df217892dee"} Feb 19 18:50:54 crc kubenswrapper[4813]: I0219 18:50:54.772686 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394","Type":"ContainerStarted","Data":"912bf4bcdf44e6c87d2ddd9d02005a5efe71719e483128b7f0504e540c94af8f"} Feb 19 18:50:54 crc kubenswrapper[4813]: I0219 18:50:54.772720 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394","Type":"ContainerStarted","Data":"7b6c78b69661dd4aa07ea77dc1e4e080726c89d13470e2a0867e6cb432f6e447"} Feb 19 18:50:54 crc kubenswrapper[4813]: I0219 18:50:54.814266 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=3.814242392 podStartE2EDuration="3.814242392s" podCreationTimestamp="2026-02-19 18:50:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:50:54.797755843 +0000 UTC m=+1274.023196404" watchObservedRunningTime="2026-02-19 18:50:54.814242392 +0000 UTC m=+1274.039682943" Feb 19 18:50:55 crc kubenswrapper[4813]: I0219 18:50:55.782186 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394","Type":"ContainerStarted","Data":"24d702d7195d76e4a45075dc68672b623d20a1f8acf2e94fc16e64fdd1806db3"} Feb 19 18:50:55 crc kubenswrapper[4813]: I0219 18:50:55.805344 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.805321923 podStartE2EDuration="3.805321923s" podCreationTimestamp="2026-02-19 18:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:50:55.800838705 +0000 UTC m=+1275.026279246" watchObservedRunningTime="2026-02-19 18:50:55.805321923 +0000 UTC m=+1275.030762464" Feb 19 18:50:59 crc kubenswrapper[4813]: I0219 18:50:59.817379 4813 generic.go:334] "Generic (PLEG): container finished" podID="6f306bed-c42d-4853-b2ce-929c6356929d" containerID="c515b6612026ed474298fe3c18c2bf37548d0465b0492238ee919dbb763eac6c" exitCode=0 Feb 19 18:50:59 crc kubenswrapper[4813]: I0219 18:50:59.817613 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fvfzp" event={"ID":"6f306bed-c42d-4853-b2ce-929c6356929d","Type":"ContainerDied","Data":"c515b6612026ed474298fe3c18c2bf37548d0465b0492238ee919dbb763eac6c"} Feb 19 18:51:00 crc kubenswrapper[4813]: I0219 18:51:00.010276 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" 
podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.191571 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.249770 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-combined-ca-bundle\") pod \"6f306bed-c42d-4853-b2ce-929c6356929d\" (UID: \"6f306bed-c42d-4853-b2ce-929c6356929d\") " Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.250232 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-config-data\") pod \"6f306bed-c42d-4853-b2ce-929c6356929d\" (UID: \"6f306bed-c42d-4853-b2ce-929c6356929d\") " Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.250311 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-scripts\") pod \"6f306bed-c42d-4853-b2ce-929c6356929d\" (UID: \"6f306bed-c42d-4853-b2ce-929c6356929d\") " Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.250436 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb2bt\" (UniqueName: \"kubernetes.io/projected/6f306bed-c42d-4853-b2ce-929c6356929d-kube-api-access-sb2bt\") pod \"6f306bed-c42d-4853-b2ce-929c6356929d\" (UID: \"6f306bed-c42d-4853-b2ce-929c6356929d\") " Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.268163 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f306bed-c42d-4853-b2ce-929c6356929d-kube-api-access-sb2bt" (OuterVolumeSpecName: 
"kube-api-access-sb2bt") pod "6f306bed-c42d-4853-b2ce-929c6356929d" (UID: "6f306bed-c42d-4853-b2ce-929c6356929d"). InnerVolumeSpecName "kube-api-access-sb2bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.274803 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-scripts" (OuterVolumeSpecName: "scripts") pod "6f306bed-c42d-4853-b2ce-929c6356929d" (UID: "6f306bed-c42d-4853-b2ce-929c6356929d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.298151 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f306bed-c42d-4853-b2ce-929c6356929d" (UID: "6f306bed-c42d-4853-b2ce-929c6356929d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.302213 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-config-data" (OuterVolumeSpecName: "config-data") pod "6f306bed-c42d-4853-b2ce-929c6356929d" (UID: "6f306bed-c42d-4853-b2ce-929c6356929d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.351704 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.351738 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.351749 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f306bed-c42d-4853-b2ce-929c6356929d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.351761 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb2bt\" (UniqueName: \"kubernetes.io/projected/6f306bed-c42d-4853-b2ce-929c6356929d-kube-api-access-sb2bt\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:01 crc kubenswrapper[4813]: E0219 18:51:01.567714 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f306bed_c42d_4853_b2ce_929c6356929d.slice/crio-926b6693aa0219065460a4a3833af8821f7dfe9ec117f7323749af011091acf9\": RecentStats: unable to find data in memory cache]" Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.842024 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fvfzp" event={"ID":"6f306bed-c42d-4853-b2ce-929c6356929d","Type":"ContainerDied","Data":"926b6693aa0219065460a4a3833af8821f7dfe9ec117f7323749af011091acf9"} Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.842067 4813 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="926b6693aa0219065460a4a3833af8821f7dfe9ec117f7323749af011091acf9" Feb 19 18:51:01 crc kubenswrapper[4813]: I0219 18:51:01.842124 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fvfzp" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.025114 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 18:51:02 crc kubenswrapper[4813]: E0219 18:51:02.025529 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f306bed-c42d-4853-b2ce-929c6356929d" containerName="nova-cell0-conductor-db-sync" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.025548 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f306bed-c42d-4853-b2ce-929c6356929d" containerName="nova-cell0-conductor-db-sync" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.025770 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f306bed-c42d-4853-b2ce-929c6356929d" containerName="nova-cell0-conductor-db-sync" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.026860 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.029498 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-trm9l" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.034208 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.053327 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.065266 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3422b8bd-2817-4e8f-8a5a-731c773b73a4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3422b8bd-2817-4e8f-8a5a-731c773b73a4\") " pod="openstack/nova-cell0-conductor-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.065453 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3422b8bd-2817-4e8f-8a5a-731c773b73a4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3422b8bd-2817-4e8f-8a5a-731c773b73a4\") " pod="openstack/nova-cell0-conductor-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.065496 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxtbh\" (UniqueName: \"kubernetes.io/projected/3422b8bd-2817-4e8f-8a5a-731c773b73a4-kube-api-access-wxtbh\") pod \"nova-cell0-conductor-0\" (UID: \"3422b8bd-2817-4e8f-8a5a-731c773b73a4\") " pod="openstack/nova-cell0-conductor-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.148566 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 
18:51:02.149864 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.167252 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3422b8bd-2817-4e8f-8a5a-731c773b73a4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3422b8bd-2817-4e8f-8a5a-731c773b73a4\") " pod="openstack/nova-cell0-conductor-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.167303 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxtbh\" (UniqueName: \"kubernetes.io/projected/3422b8bd-2817-4e8f-8a5a-731c773b73a4-kube-api-access-wxtbh\") pod \"nova-cell0-conductor-0\" (UID: \"3422b8bd-2817-4e8f-8a5a-731c773b73a4\") " pod="openstack/nova-cell0-conductor-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.167436 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3422b8bd-2817-4e8f-8a5a-731c773b73a4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3422b8bd-2817-4e8f-8a5a-731c773b73a4\") " pod="openstack/nova-cell0-conductor-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.172413 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3422b8bd-2817-4e8f-8a5a-731c773b73a4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3422b8bd-2817-4e8f-8a5a-731c773b73a4\") " pod="openstack/nova-cell0-conductor-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.181193 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3422b8bd-2817-4e8f-8a5a-731c773b73a4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3422b8bd-2817-4e8f-8a5a-731c773b73a4\") " 
pod="openstack/nova-cell0-conductor-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.192461 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxtbh\" (UniqueName: \"kubernetes.io/projected/3422b8bd-2817-4e8f-8a5a-731c773b73a4-kube-api-access-wxtbh\") pod \"nova-cell0-conductor-0\" (UID: \"3422b8bd-2817-4e8f-8a5a-731c773b73a4\") " pod="openstack/nova-cell0-conductor-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.196558 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.199458 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.350139 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.825256 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.865729 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3422b8bd-2817-4e8f-8a5a-731c773b73a4","Type":"ContainerStarted","Data":"7acc57d450d03d47043a9b50283daa8549df90f583b57d245b091e5f193a1b62"} Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.865806 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 18:51:02 crc kubenswrapper[4813]: I0219 18:51:02.866075 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 18:51:03 crc kubenswrapper[4813]: I0219 18:51:03.148451 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 18:51:03 crc 
kubenswrapper[4813]: I0219 18:51:03.153192 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 18:51:03 crc kubenswrapper[4813]: I0219 18:51:03.209146 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 18:51:03 crc kubenswrapper[4813]: I0219 18:51:03.212841 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 18:51:03 crc kubenswrapper[4813]: I0219 18:51:03.883586 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3422b8bd-2817-4e8f-8a5a-731c773b73a4","Type":"ContainerStarted","Data":"e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872"} Feb 19 18:51:03 crc kubenswrapper[4813]: I0219 18:51:03.884059 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 18:51:03 crc kubenswrapper[4813]: I0219 18:51:03.885238 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 18:51:03 crc kubenswrapper[4813]: I0219 18:51:03.918973 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.918933754 podStartE2EDuration="1.918933754s" podCreationTimestamp="2026-02-19 18:51:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:51:03.904338644 +0000 UTC m=+1283.129779225" watchObservedRunningTime="2026-02-19 18:51:03.918933754 +0000 UTC m=+1283.144374295" Feb 19 18:51:04 crc kubenswrapper[4813]: I0219 18:51:04.818466 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 18:51:04 crc kubenswrapper[4813]: I0219 
18:51:04.818889 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 18:51:04 crc kubenswrapper[4813]: I0219 18:51:04.910825 4813 generic.go:334] "Generic (PLEG): container finished" podID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerID="1d2afcb379a400372d4098a63f5de68b91f0b29270c3b44019b9649cd59da8b4" exitCode=137 Feb 19 18:51:04 crc kubenswrapper[4813]: I0219 18:51:04.912893 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6afe8635-761b-4dcc-b2b4-ff6963f474d8","Type":"ContainerDied","Data":"1d2afcb379a400372d4098a63f5de68b91f0b29270c3b44019b9649cd59da8b4"} Feb 19 18:51:04 crc kubenswrapper[4813]: I0219 18:51:04.912931 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6afe8635-761b-4dcc-b2b4-ff6963f474d8","Type":"ContainerDied","Data":"5c7cf94160901f5a3718ce718325e9e256567406ec0e5b0b5bedba3cd5e3747b"} Feb 19 18:51:04 crc kubenswrapper[4813]: I0219 18:51:04.912944 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c7cf94160901f5a3718ce718325e9e256567406ec0e5b0b5bedba3cd5e3747b" Feb 19 18:51:04 crc kubenswrapper[4813]: I0219 18:51:04.912974 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.000938 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.126458 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-combined-ca-bundle\") pod \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.126540 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-scripts\") pod \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.126578 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-sg-core-conf-yaml\") pod \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.126650 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6jh6\" (UniqueName: \"kubernetes.io/projected/6afe8635-761b-4dcc-b2b4-ff6963f474d8-kube-api-access-j6jh6\") pod \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.126765 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afe8635-761b-4dcc-b2b4-ff6963f474d8-run-httpd\") pod \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.126789 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-config-data\") pod \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.126814 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afe8635-761b-4dcc-b2b4-ff6963f474d8-log-httpd\") pod \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\" (UID: \"6afe8635-761b-4dcc-b2b4-ff6963f474d8\") " Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.127522 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6afe8635-761b-4dcc-b2b4-ff6963f474d8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6afe8635-761b-4dcc-b2b4-ff6963f474d8" (UID: "6afe8635-761b-4dcc-b2b4-ff6963f474d8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.128609 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6afe8635-761b-4dcc-b2b4-ff6963f474d8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6afe8635-761b-4dcc-b2b4-ff6963f474d8" (UID: "6afe8635-761b-4dcc-b2b4-ff6963f474d8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.133406 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-scripts" (OuterVolumeSpecName: "scripts") pod "6afe8635-761b-4dcc-b2b4-ff6963f474d8" (UID: "6afe8635-761b-4dcc-b2b4-ff6963f474d8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.134235 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6afe8635-761b-4dcc-b2b4-ff6963f474d8-kube-api-access-j6jh6" (OuterVolumeSpecName: "kube-api-access-j6jh6") pod "6afe8635-761b-4dcc-b2b4-ff6963f474d8" (UID: "6afe8635-761b-4dcc-b2b4-ff6963f474d8"). InnerVolumeSpecName "kube-api-access-j6jh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.156239 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6afe8635-761b-4dcc-b2b4-ff6963f474d8" (UID: "6afe8635-761b-4dcc-b2b4-ff6963f474d8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.229755 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6afe8635-761b-4dcc-b2b4-ff6963f474d8" (UID: "6afe8635-761b-4dcc-b2b4-ff6963f474d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.235242 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afe8635-761b-4dcc-b2b4-ff6963f474d8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.235301 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.237130 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.237176 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.237195 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6jh6\" (UniqueName: \"kubernetes.io/projected/6afe8635-761b-4dcc-b2b4-ff6963f474d8-kube-api-access-j6jh6\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.237218 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afe8635-761b-4dcc-b2b4-ff6963f474d8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.251285 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-config-data" (OuterVolumeSpecName: "config-data") pod "6afe8635-761b-4dcc-b2b4-ff6963f474d8" (UID: "6afe8635-761b-4dcc-b2b4-ff6963f474d8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.339082 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6afe8635-761b-4dcc-b2b4-ff6963f474d8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.813398 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.819297 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.942666 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:51:05 crc kubenswrapper[4813]: I0219 18:51:05.983652 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.011719 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.023189 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:06 crc kubenswrapper[4813]: E0219 18:51:06.023764 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="sg-core" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.023853 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="sg-core" Feb 19 18:51:06 crc kubenswrapper[4813]: E0219 18:51:06.023913 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="ceilometer-central-agent" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.024007 4813 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="ceilometer-central-agent" Feb 19 18:51:06 crc kubenswrapper[4813]: E0219 18:51:06.024079 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="ceilometer-notification-agent" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.024142 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="ceilometer-notification-agent" Feb 19 18:51:06 crc kubenswrapper[4813]: E0219 18:51:06.024202 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="proxy-httpd" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.024260 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="proxy-httpd" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.024473 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="sg-core" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.024533 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="ceilometer-central-agent" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.024597 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="ceilometer-notification-agent" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.024665 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" containerName="proxy-httpd" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.026338 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.028229 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.028690 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.052322 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.157131 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49fa43f1-0aa1-4f1c-b39f-948acdce4825-log-httpd\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.157281 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49fa43f1-0aa1-4f1c-b39f-948acdce4825-run-httpd\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.157335 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.157495 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-config-data\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " 
pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.157583 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.157665 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-scripts\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.157807 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l5ts\" (UniqueName: \"kubernetes.io/projected/49fa43f1-0aa1-4f1c-b39f-948acdce4825-kube-api-access-7l5ts\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.259602 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l5ts\" (UniqueName: \"kubernetes.io/projected/49fa43f1-0aa1-4f1c-b39f-948acdce4825-kube-api-access-7l5ts\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.259686 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49fa43f1-0aa1-4f1c-b39f-948acdce4825-log-httpd\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.259749 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49fa43f1-0aa1-4f1c-b39f-948acdce4825-run-httpd\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.259779 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.259833 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-config-data\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.259863 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.259906 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-scripts\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.261342 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49fa43f1-0aa1-4f1c-b39f-948acdce4825-run-httpd\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " 
pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.261789 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49fa43f1-0aa1-4f1c-b39f-948acdce4825-log-httpd\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.265313 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.265849 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-scripts\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.266723 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.276826 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-config-data\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.279743 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l5ts\" (UniqueName: 
\"kubernetes.io/projected/49fa43f1-0aa1-4f1c-b39f-948acdce4825-kube-api-access-7l5ts\") pod \"ceilometer-0\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.352577 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.847310 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:06 crc kubenswrapper[4813]: W0219 18:51:06.850831 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49fa43f1_0aa1_4f1c_b39f_948acdce4825.slice/crio-77e065478f7b87b293a5e0aa8181a43826148acd0af366e0ec3589bffc39c6a6 WatchSource:0}: Error finding container 77e065478f7b87b293a5e0aa8181a43826148acd0af366e0ec3589bffc39c6a6: Status 404 returned error can't find the container with id 77e065478f7b87b293a5e0aa8181a43826148acd0af366e0ec3589bffc39c6a6 Feb 19 18:51:06 crc kubenswrapper[4813]: I0219 18:51:06.955383 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49fa43f1-0aa1-4f1c-b39f-948acdce4825","Type":"ContainerStarted","Data":"77e065478f7b87b293a5e0aa8181a43826148acd0af366e0ec3589bffc39c6a6"} Feb 19 18:51:07 crc kubenswrapper[4813]: I0219 18:51:07.482280 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6afe8635-761b-4dcc-b2b4-ff6963f474d8" path="/var/lib/kubelet/pods/6afe8635-761b-4dcc-b2b4-ff6963f474d8/volumes" Feb 19 18:51:07 crc kubenswrapper[4813]: I0219 18:51:07.966266 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49fa43f1-0aa1-4f1c-b39f-948acdce4825","Type":"ContainerStarted","Data":"ab1853900bd306abf16dd54781d641cac8027e702677508d3cc7c2e1d0a2ddb2"} Feb 19 18:51:08 crc kubenswrapper[4813]: I0219 18:51:08.979369 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"49fa43f1-0aa1-4f1c-b39f-948acdce4825","Type":"ContainerStarted","Data":"163b6a0fe25e1d50e784cbe44b2dec473230f88d0f1845044d6b162853450091"} Feb 19 18:51:08 crc kubenswrapper[4813]: I0219 18:51:08.979919 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49fa43f1-0aa1-4f1c-b39f-948acdce4825","Type":"ContainerStarted","Data":"994efa1617047ad5bf4f6343264fd62528e456b0a7f377bf6017d0bb47ed4006"} Feb 19 18:51:11 crc kubenswrapper[4813]: I0219 18:51:11.019822 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49fa43f1-0aa1-4f1c-b39f-948acdce4825","Type":"ContainerStarted","Data":"2d1006a4f53f89f14386fb997d2b8b3ec2b732e5dcdb7115a52aff9373a2b497"} Feb 19 18:51:11 crc kubenswrapper[4813]: I0219 18:51:11.021515 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 18:51:11 crc kubenswrapper[4813]: I0219 18:51:11.048415 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.855064159 podStartE2EDuration="6.048389258s" podCreationTimestamp="2026-02-19 18:51:05 +0000 UTC" firstStartedPulling="2026-02-19 18:51:06.853068562 +0000 UTC m=+1286.078509093" lastFinishedPulling="2026-02-19 18:51:10.046393641 +0000 UTC m=+1289.271834192" observedRunningTime="2026-02-19 18:51:11.042862478 +0000 UTC m=+1290.268303069" watchObservedRunningTime="2026-02-19 18:51:11.048389258 +0000 UTC m=+1290.273829819" Feb 19 18:51:12 crc kubenswrapper[4813]: I0219 18:51:12.382191 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 18:51:12 crc kubenswrapper[4813]: I0219 18:51:12.861424 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-5knnd"] Feb 19 18:51:12 crc kubenswrapper[4813]: I0219 18:51:12.863027 4813 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5knnd" Feb 19 18:51:12 crc kubenswrapper[4813]: I0219 18:51:12.865725 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 19 18:51:12 crc kubenswrapper[4813]: I0219 18:51:12.870005 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 19 18:51:12 crc kubenswrapper[4813]: I0219 18:51:12.882656 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5knnd"] Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.005840 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzhbq\" (UniqueName: \"kubernetes.io/projected/3d6da42c-1604-467c-b9ef-dde47711b95a-kube-api-access-fzhbq\") pod \"nova-cell0-cell-mapping-5knnd\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") " pod="openstack/nova-cell0-cell-mapping-5knnd" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.005904 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-config-data\") pod \"nova-cell0-cell-mapping-5knnd\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") " pod="openstack/nova-cell0-cell-mapping-5knnd" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.006009 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5knnd\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") " pod="openstack/nova-cell0-cell-mapping-5knnd" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.006071 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-scripts\") pod \"nova-cell0-cell-mapping-5knnd\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") " pod="openstack/nova-cell0-cell-mapping-5knnd" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.036694 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.038306 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.057998 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.060971 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.107602 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ca3e22-9d6b-4872-bbec-ad276a27540f-config-data\") pod \"nova-api-0\" (UID: \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " pod="openstack/nova-api-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.107665 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw6r2\" (UniqueName: \"kubernetes.io/projected/a4ca3e22-9d6b-4872-bbec-ad276a27540f-kube-api-access-gw6r2\") pod \"nova-api-0\" (UID: \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " pod="openstack/nova-api-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.107712 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ca3e22-9d6b-4872-bbec-ad276a27540f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " pod="openstack/nova-api-0" Feb 19 18:51:13 
crc kubenswrapper[4813]: I0219 18:51:13.107741 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5knnd\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") " pod="openstack/nova-cell0-cell-mapping-5knnd" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.107789 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-scripts\") pod \"nova-cell0-cell-mapping-5knnd\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") " pod="openstack/nova-cell0-cell-mapping-5knnd" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.107803 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ca3e22-9d6b-4872-bbec-ad276a27540f-logs\") pod \"nova-api-0\" (UID: \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " pod="openstack/nova-api-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.107895 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzhbq\" (UniqueName: \"kubernetes.io/projected/3d6da42c-1604-467c-b9ef-dde47711b95a-kube-api-access-fzhbq\") pod \"nova-cell0-cell-mapping-5knnd\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") " pod="openstack/nova-cell0-cell-mapping-5knnd" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.107936 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-config-data\") pod \"nova-cell0-cell-mapping-5knnd\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") " pod="openstack/nova-cell0-cell-mapping-5knnd" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.118893 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-config-data\") pod \"nova-cell0-cell-mapping-5knnd\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") " pod="openstack/nova-cell0-cell-mapping-5knnd" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.127404 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.128505 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-scripts\") pod \"nova-cell0-cell-mapping-5knnd\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") " pod="openstack/nova-cell0-cell-mapping-5knnd" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.129106 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5knnd\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") " pod="openstack/nova-cell0-cell-mapping-5knnd" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.129442 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.138008 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.167796 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.168944 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.183770 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzhbq\" (UniqueName: \"kubernetes.io/projected/3d6da42c-1604-467c-b9ef-dde47711b95a-kube-api-access-fzhbq\") pod \"nova-cell0-cell-mapping-5knnd\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") " pod="openstack/nova-cell0-cell-mapping-5knnd" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.184151 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.184658 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.203448 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5knnd" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.215723 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ca3e22-9d6b-4872-bbec-ad276a27540f-config-data\") pod \"nova-api-0\" (UID: \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " pod="openstack/nova-api-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.215808 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-logs\") pod \"nova-metadata-0\" (UID: \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " pod="openstack/nova-metadata-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.215843 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw6r2\" (UniqueName: \"kubernetes.io/projected/a4ca3e22-9d6b-4872-bbec-ad276a27540f-kube-api-access-gw6r2\") pod \"nova-api-0\" (UID: 
\"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " pod="openstack/nova-api-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.215868 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ca3e22-9d6b-4872-bbec-ad276a27540f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " pod="openstack/nova-api-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.215926 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-config-data\") pod \"nova-metadata-0\" (UID: \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " pod="openstack/nova-metadata-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.216328 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ca3e22-9d6b-4872-bbec-ad276a27540f-logs\") pod \"nova-api-0\" (UID: \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " pod="openstack/nova-api-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.216363 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcnqc\" (UniqueName: \"kubernetes.io/projected/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-kube-api-access-fcnqc\") pod \"nova-metadata-0\" (UID: \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " pod="openstack/nova-metadata-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.216398 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " pod="openstack/nova-metadata-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.224750 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ca3e22-9d6b-4872-bbec-ad276a27540f-config-data\") pod \"nova-api-0\" (UID: \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " pod="openstack/nova-api-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.226297 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.226622 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ca3e22-9d6b-4872-bbec-ad276a27540f-logs\") pod \"nova-api-0\" (UID: \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " pod="openstack/nova-api-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.236058 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ca3e22-9d6b-4872-bbec-ad276a27540f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " pod="openstack/nova-api-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.297935 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw6r2\" (UniqueName: \"kubernetes.io/projected/a4ca3e22-9d6b-4872-bbec-ad276a27540f-kube-api-access-gw6r2\") pod \"nova-api-0\" (UID: \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " pod="openstack/nova-api-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.318293 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcnqc\" (UniqueName: \"kubernetes.io/projected/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-kube-api-access-fcnqc\") pod \"nova-metadata-0\" (UID: \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " pod="openstack/nova-metadata-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.318540 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " pod="openstack/nova-metadata-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.319312 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/071faa4f-f20b-419c-bd95-3e65d40101aa-config-data\") pod \"nova-scheduler-0\" (UID: \"071faa4f-f20b-419c-bd95-3e65d40101aa\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.319406 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brp6t\" (UniqueName: \"kubernetes.io/projected/071faa4f-f20b-419c-bd95-3e65d40101aa-kube-api-access-brp6t\") pod \"nova-scheduler-0\" (UID: \"071faa4f-f20b-419c-bd95-3e65d40101aa\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.319531 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/071faa4f-f20b-419c-bd95-3e65d40101aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"071faa4f-f20b-419c-bd95-3e65d40101aa\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.319654 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-logs\") pod \"nova-metadata-0\" (UID: \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " pod="openstack/nova-metadata-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.319785 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-config-data\") pod \"nova-metadata-0\" (UID: 
\"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " pod="openstack/nova-metadata-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.322347 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-logs\") pod \"nova-metadata-0\" (UID: \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " pod="openstack/nova-metadata-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.324682 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-config-data\") pod \"nova-metadata-0\" (UID: \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " pod="openstack/nova-metadata-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.341080 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " pod="openstack/nova-metadata-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.341170 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75ddbf7c75-h77vl"] Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.342938 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.346846 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcnqc\" (UniqueName: \"kubernetes.io/projected/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-kube-api-access-fcnqc\") pod \"nova-metadata-0\" (UID: \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " pod="openstack/nova-metadata-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.359326 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.359435 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75ddbf7c75-h77vl"] Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.397510 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.399382 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.402598 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.416397 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.422127 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-config\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.422188 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjs8s\" (UniqueName: \"kubernetes.io/projected/87a5f970-b35c-424e-b7b7-2612080926b3-kube-api-access-kjs8s\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.422232 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-ovsdbserver-sb\") pod 
\"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.422251 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-dns-swift-storage-0\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.422286 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-dns-svc\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.422342 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/071faa4f-f20b-419c-bd95-3e65d40101aa-config-data\") pod \"nova-scheduler-0\" (UID: \"071faa4f-f20b-419c-bd95-3e65d40101aa\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.422362 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brp6t\" (UniqueName: \"kubernetes.io/projected/071faa4f-f20b-419c-bd95-3e65d40101aa-kube-api-access-brp6t\") pod \"nova-scheduler-0\" (UID: \"071faa4f-f20b-419c-bd95-3e65d40101aa\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.422394 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-ovsdbserver-nb\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: 
\"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.422423 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/071faa4f-f20b-419c-bd95-3e65d40101aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"071faa4f-f20b-419c-bd95-3e65d40101aa\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.426717 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/071faa4f-f20b-419c-bd95-3e65d40101aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"071faa4f-f20b-419c-bd95-3e65d40101aa\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.429734 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/071faa4f-f20b-419c-bd95-3e65d40101aa-config-data\") pod \"nova-scheduler-0\" (UID: \"071faa4f-f20b-419c-bd95-3e65d40101aa\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.438334 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brp6t\" (UniqueName: \"kubernetes.io/projected/071faa4f-f20b-419c-bd95-3e65d40101aa-kube-api-access-brp6t\") pod \"nova-scheduler-0\" (UID: \"071faa4f-f20b-419c-bd95-3e65d40101aa\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.523603 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-ovsdbserver-nb\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.523847 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-config\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.523986 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d615b6-3d85-40c9-8ae8-81a27f792856-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b6d615b6-3d85-40c9-8ae8-81a27f792856\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.524072 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjs8s\" (UniqueName: \"kubernetes.io/projected/87a5f970-b35c-424e-b7b7-2612080926b3-kube-api-access-kjs8s\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.524123 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t5qh\" (UniqueName: \"kubernetes.io/projected/b6d615b6-3d85-40c9-8ae8-81a27f792856-kube-api-access-7t5qh\") pod \"nova-cell1-novncproxy-0\" (UID: \"b6d615b6-3d85-40c9-8ae8-81a27f792856\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.524150 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-ovsdbserver-sb\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.524178 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-dns-swift-storage-0\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.524248 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d615b6-3d85-40c9-8ae8-81a27f792856-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b6d615b6-3d85-40c9-8ae8-81a27f792856\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.524289 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-dns-svc\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.524425 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-ovsdbserver-nb\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.524529 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-config\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.525014 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-dns-svc\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.525464 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-dns-swift-storage-0\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.525614 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-ovsdbserver-sb\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.545071 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjs8s\" (UniqueName: \"kubernetes.io/projected/87a5f970-b35c-424e-b7b7-2612080926b3-kube-api-access-kjs8s\") pod \"dnsmasq-dns-75ddbf7c75-h77vl\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.629166 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d615b6-3d85-40c9-8ae8-81a27f792856-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b6d615b6-3d85-40c9-8ae8-81a27f792856\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.629266 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t5qh\" (UniqueName: 
\"kubernetes.io/projected/b6d615b6-3d85-40c9-8ae8-81a27f792856-kube-api-access-7t5qh\") pod \"nova-cell1-novncproxy-0\" (UID: \"b6d615b6-3d85-40c9-8ae8-81a27f792856\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.629326 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d615b6-3d85-40c9-8ae8-81a27f792856-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b6d615b6-3d85-40c9-8ae8-81a27f792856\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.639436 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d615b6-3d85-40c9-8ae8-81a27f792856-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b6d615b6-3d85-40c9-8ae8-81a27f792856\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.642246 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d615b6-3d85-40c9-8ae8-81a27f792856-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b6d615b6-3d85-40c9-8ae8-81a27f792856\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.645242 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.654858 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t5qh\" (UniqueName: \"kubernetes.io/projected/b6d615b6-3d85-40c9-8ae8-81a27f792856-kube-api-access-7t5qh\") pod \"nova-cell1-novncproxy-0\" (UID: \"b6d615b6-3d85-40c9-8ae8-81a27f792856\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.664830 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.685232 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.718800 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.761089 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5knnd"] Feb 19 18:51:13 crc kubenswrapper[4813]: I0219 18:51:13.903796 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.057745 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4ca3e22-9d6b-4872-bbec-ad276a27540f","Type":"ContainerStarted","Data":"bd4d57b437ba8156b9f17c0d089600d6c35f0c00ee2a9e2c3d65a861841716e6"} Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.064794 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5knnd" event={"ID":"3d6da42c-1604-467c-b9ef-dde47711b95a","Type":"ContainerStarted","Data":"c8b5c29fa08448ca6aa47d5cd6998cafeacfbea82b3ce33008b68fbee01b2923"} Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.077999 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-spl4z"] Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.079137 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.081295 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.081543 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.087787 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-spl4z"] Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.160317 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-config-data\") pod \"nova-cell1-conductor-db-sync-spl4z\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.160407 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-scripts\") pod \"nova-cell1-conductor-db-sync-spl4z\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.160430 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh86c\" (UniqueName: \"kubernetes.io/projected/b98f2a59-47e4-46aa-acbe-74250ed631ca-kube-api-access-fh86c\") pod \"nova-cell1-conductor-db-sync-spl4z\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.160473 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-spl4z\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.239607 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.258427 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.262002 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-config-data\") pod \"nova-cell1-conductor-db-sync-spl4z\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.262084 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-scripts\") pod \"nova-cell1-conductor-db-sync-spl4z\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.262106 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh86c\" (UniqueName: \"kubernetes.io/projected/b98f2a59-47e4-46aa-acbe-74250ed631ca-kube-api-access-fh86c\") pod \"nova-cell1-conductor-db-sync-spl4z\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.262146 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-spl4z\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.269095 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-spl4z\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.285219 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-config-data\") pod \"nova-cell1-conductor-db-sync-spl4z\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.285632 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-scripts\") pod \"nova-cell1-conductor-db-sync-spl4z\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.319881 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh86c\" (UniqueName: \"kubernetes.io/projected/b98f2a59-47e4-46aa-acbe-74250ed631ca-kube-api-access-fh86c\") pod \"nova-cell1-conductor-db-sync-spl4z\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.411004 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.446589 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.454038 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75ddbf7c75-h77vl"] Feb 19 18:51:14 crc kubenswrapper[4813]: I0219 18:51:14.822681 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-spl4z"] Feb 19 18:51:15 crc kubenswrapper[4813]: I0219 18:51:15.080629 4813 generic.go:334] "Generic (PLEG): container finished" podID="87a5f970-b35c-424e-b7b7-2612080926b3" containerID="fd2376d3999a03e034fa4ebfb7dd8d651f752bbaccf53c555dad8f7cac4130ab" exitCode=0 Feb 19 18:51:15 crc kubenswrapper[4813]: I0219 18:51:15.080718 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" event={"ID":"87a5f970-b35c-424e-b7b7-2612080926b3","Type":"ContainerDied","Data":"fd2376d3999a03e034fa4ebfb7dd8d651f752bbaccf53c555dad8f7cac4130ab"} Feb 19 18:51:15 crc kubenswrapper[4813]: I0219 18:51:15.080926 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" event={"ID":"87a5f970-b35c-424e-b7b7-2612080926b3","Type":"ContainerStarted","Data":"a35dcc9c6637385f37426daef069a48e7c0a8f4dd69c530a964e45ce37d4629d"} Feb 19 18:51:15 crc kubenswrapper[4813]: I0219 18:51:15.083774 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc","Type":"ContainerStarted","Data":"5cf266695021514d657796ffaab8cf36a86e5d7bcb43091fcc5b9920fc5c7a46"} Feb 19 18:51:15 crc kubenswrapper[4813]: I0219 18:51:15.085427 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"071faa4f-f20b-419c-bd95-3e65d40101aa","Type":"ContainerStarted","Data":"8a6d5d56cb8ab8037c50ca0a3e7281aaf62ce83434241d081aedc3dad3e2856a"} Feb 19 18:51:15 crc kubenswrapper[4813]: I0219 18:51:15.098096 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-spl4z" event={"ID":"b98f2a59-47e4-46aa-acbe-74250ed631ca","Type":"ContainerStarted","Data":"2a25594eb86a8b04ccafbac99714dff18b4f67f9a8f46b7bb7d2e2c3856db867"} Feb 19 18:51:15 crc kubenswrapper[4813]: I0219 18:51:15.102086 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5knnd" event={"ID":"3d6da42c-1604-467c-b9ef-dde47711b95a","Type":"ContainerStarted","Data":"e716782824789876e737f679b53e25e942ffd227219538feea44c84706b6d858"} Feb 19 18:51:15 crc kubenswrapper[4813]: I0219 18:51:15.103652 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b6d615b6-3d85-40c9-8ae8-81a27f792856","Type":"ContainerStarted","Data":"8c3fc201bc12a4c48b77abfd07c8b6e8e9ce09c51625e2e1debbb1466c27ed65"} Feb 19 18:51:15 crc kubenswrapper[4813]: I0219 18:51:15.121848 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-5knnd" podStartSLOduration=3.121830537 podStartE2EDuration="3.121830537s" podCreationTimestamp="2026-02-19 18:51:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:51:15.11965193 +0000 UTC m=+1294.345092461" watchObservedRunningTime="2026-02-19 18:51:15.121830537 +0000 UTC m=+1294.347271078" Feb 19 18:51:16 crc kubenswrapper[4813]: I0219 18:51:16.122280 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-spl4z" event={"ID":"b98f2a59-47e4-46aa-acbe-74250ed631ca","Type":"ContainerStarted","Data":"1ba735df184697c88fb38b34247b6c1cb15bd976c21e2519ffc06459f7f5c81e"} Feb 19 
18:51:16 crc kubenswrapper[4813]: I0219 18:51:16.127134 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" event={"ID":"87a5f970-b35c-424e-b7b7-2612080926b3","Type":"ContainerStarted","Data":"41fe4440fbdf484a9b01d6c7c67d89c22dc67c23e246d50806cdad764d8c0d45"} Feb 19 18:51:16 crc kubenswrapper[4813]: I0219 18:51:16.127186 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:51:16 crc kubenswrapper[4813]: I0219 18:51:16.144928 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-spl4z" podStartSLOduration=2.144906021 podStartE2EDuration="2.144906021s" podCreationTimestamp="2026-02-19 18:51:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:51:16.140209166 +0000 UTC m=+1295.365649727" watchObservedRunningTime="2026-02-19 18:51:16.144906021 +0000 UTC m=+1295.370346562" Feb 19 18:51:16 crc kubenswrapper[4813]: I0219 18:51:16.165694 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" podStartSLOduration=3.1656726920000002 podStartE2EDuration="3.165672692s" podCreationTimestamp="2026-02-19 18:51:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:51:16.164399942 +0000 UTC m=+1295.389840483" watchObservedRunningTime="2026-02-19 18:51:16.165672692 +0000 UTC m=+1295.391113243" Feb 19 18:51:16 crc kubenswrapper[4813]: I0219 18:51:16.940405 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 18:51:16 crc kubenswrapper[4813]: I0219 18:51:16.959249 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:51:18 crc kubenswrapper[4813]: I0219 
18:51:18.161615 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4ca3e22-9d6b-4872-bbec-ad276a27540f","Type":"ContainerStarted","Data":"9c53928bb686ea7fb8a94957477024c8651c1173220ce378d9c31e1768bb67c2"} Feb 19 18:51:18 crc kubenswrapper[4813]: I0219 18:51:18.167052 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b6d615b6-3d85-40c9-8ae8-81a27f792856" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://67373de9f836bbcc9734c63987d52b1116dea47802d978e06050d215a0cf0e97" gracePeriod=30 Feb 19 18:51:18 crc kubenswrapper[4813]: I0219 18:51:18.197863 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.914324068 podStartE2EDuration="5.197841246s" podCreationTimestamp="2026-02-19 18:51:13 +0000 UTC" firstStartedPulling="2026-02-19 18:51:14.500065684 +0000 UTC m=+1293.725506225" lastFinishedPulling="2026-02-19 18:51:17.783582862 +0000 UTC m=+1297.009023403" observedRunningTime="2026-02-19 18:51:18.186756544 +0000 UTC m=+1297.412197085" watchObservedRunningTime="2026-02-19 18:51:18.197841246 +0000 UTC m=+1297.423281787" Feb 19 18:51:18 crc kubenswrapper[4813]: I0219 18:51:18.719646 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:19 crc kubenswrapper[4813]: I0219 18:51:19.180275 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc","Type":"ContainerStarted","Data":"ca5fded6408c43b0593e89703c2d5a5f0b4a1568a8c9a704ea1750c1075f39d2"} Feb 19 18:51:19 crc kubenswrapper[4813]: I0219 18:51:19.180324 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc","Type":"ContainerStarted","Data":"cd78b2b20769c17169ad305c93f71161ce69b7a0baf847a910a030a33cfbee0c"} Feb 19 18:51:19 crc kubenswrapper[4813]: I0219 18:51:19.180379 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc" containerName="nova-metadata-metadata" containerID="cri-o://cd78b2b20769c17169ad305c93f71161ce69b7a0baf847a910a030a33cfbee0c" gracePeriod=30 Feb 19 18:51:19 crc kubenswrapper[4813]: I0219 18:51:19.180377 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc" containerName="nova-metadata-log" containerID="cri-o://ca5fded6408c43b0593e89703c2d5a5f0b4a1568a8c9a704ea1750c1075f39d2" gracePeriod=30 Feb 19 18:51:19 crc kubenswrapper[4813]: I0219 18:51:19.184320 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"071faa4f-f20b-419c-bd95-3e65d40101aa","Type":"ContainerStarted","Data":"ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915"} Feb 19 18:51:19 crc kubenswrapper[4813]: I0219 18:51:19.187051 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4ca3e22-9d6b-4872-bbec-ad276a27540f","Type":"ContainerStarted","Data":"1974dc952a0d7a118c1b4ae017d610213b3f38ecf31c3a385abe73f7a324638b"} Feb 19 18:51:19 crc kubenswrapper[4813]: I0219 18:51:19.192306 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b6d615b6-3d85-40c9-8ae8-81a27f792856","Type":"ContainerStarted","Data":"67373de9f836bbcc9734c63987d52b1116dea47802d978e06050d215a0cf0e97"} Feb 19 18:51:19 crc kubenswrapper[4813]: I0219 18:51:19.216776 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.700515847 
podStartE2EDuration="6.21675042s" podCreationTimestamp="2026-02-19 18:51:13 +0000 UTC" firstStartedPulling="2026-02-19 18:51:14.264537822 +0000 UTC m=+1293.489978363" lastFinishedPulling="2026-02-19 18:51:17.780772395 +0000 UTC m=+1297.006212936" observedRunningTime="2026-02-19 18:51:19.204872044 +0000 UTC m=+1298.430312605" watchObservedRunningTime="2026-02-19 18:51:19.21675042 +0000 UTC m=+1298.442191001" Feb 19 18:51:19 crc kubenswrapper[4813]: I0219 18:51:19.244272 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.716686737 podStartE2EDuration="6.244243309s" podCreationTimestamp="2026-02-19 18:51:13 +0000 UTC" firstStartedPulling="2026-02-19 18:51:14.2531006 +0000 UTC m=+1293.478541141" lastFinishedPulling="2026-02-19 18:51:17.780657152 +0000 UTC m=+1297.006097713" observedRunningTime="2026-02-19 18:51:19.236473699 +0000 UTC m=+1298.461914250" watchObservedRunningTime="2026-02-19 18:51:19.244243309 +0000 UTC m=+1298.469683860" Feb 19 18:51:19 crc kubenswrapper[4813]: I0219 18:51:19.268345 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.421810004 podStartE2EDuration="6.268326892s" podCreationTimestamp="2026-02-19 18:51:13 +0000 UTC" firstStartedPulling="2026-02-19 18:51:13.93661575 +0000 UTC m=+1293.162056281" lastFinishedPulling="2026-02-19 18:51:17.783132638 +0000 UTC m=+1297.008573169" observedRunningTime="2026-02-19 18:51:19.256079854 +0000 UTC m=+1298.481520405" watchObservedRunningTime="2026-02-19 18:51:19.268326892 +0000 UTC m=+1298.493767433" Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.206506 4813 generic.go:334] "Generic (PLEG): container finished" podID="c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc" containerID="cd78b2b20769c17169ad305c93f71161ce69b7a0baf847a910a030a33cfbee0c" exitCode=0 Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.206887 4813 generic.go:334] "Generic (PLEG): container 
finished" podID="c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc" containerID="ca5fded6408c43b0593e89703c2d5a5f0b4a1568a8c9a704ea1750c1075f39d2" exitCode=143 Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.206603 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc","Type":"ContainerDied","Data":"cd78b2b20769c17169ad305c93f71161ce69b7a0baf847a910a030a33cfbee0c"} Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.207048 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc","Type":"ContainerDied","Data":"ca5fded6408c43b0593e89703c2d5a5f0b4a1568a8c9a704ea1750c1075f39d2"} Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.619121 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.729470 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-config-data\") pod \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\" (UID: \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.729888 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-logs\") pod \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\" (UID: \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.730163 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcnqc\" (UniqueName: \"kubernetes.io/projected/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-kube-api-access-fcnqc\") pod \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\" (UID: \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " Feb 19 18:51:20 crc 
kubenswrapper[4813]: I0219 18:51:20.730312 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-logs" (OuterVolumeSpecName: "logs") pod "c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc" (UID: "c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.730551 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-combined-ca-bundle\") pod \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\" (UID: \"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc\") " Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.731403 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.743225 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-kube-api-access-fcnqc" (OuterVolumeSpecName: "kube-api-access-fcnqc") pod "c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc" (UID: "c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc"). InnerVolumeSpecName "kube-api-access-fcnqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.777593 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc" (UID: "c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.779422 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-config-data" (OuterVolumeSpecName: "config-data") pod "c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc" (UID: "c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.834164 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcnqc\" (UniqueName: \"kubernetes.io/projected/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-kube-api-access-fcnqc\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.834341 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:20 crc kubenswrapper[4813]: I0219 18:51:20.834458 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.221509 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc","Type":"ContainerDied","Data":"5cf266695021514d657796ffaab8cf36a86e5d7bcb43091fcc5b9920fc5c7a46"} Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.221593 4813 scope.go:117] "RemoveContainer" containerID="cd78b2b20769c17169ad305c93f71161ce69b7a0baf847a910a030a33cfbee0c" Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.221545 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.264736 4813 scope.go:117] "RemoveContainer" containerID="ca5fded6408c43b0593e89703c2d5a5f0b4a1568a8c9a704ea1750c1075f39d2" Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.284472 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.304989 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.318534 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:51:21 crc kubenswrapper[4813]: E0219 18:51:21.319295 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc" containerName="nova-metadata-metadata" Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.319333 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc" containerName="nova-metadata-metadata" Feb 19 18:51:21 crc kubenswrapper[4813]: E0219 18:51:21.319371 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc" containerName="nova-metadata-log" Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.319385 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc" containerName="nova-metadata-log" Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.319766 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc" containerName="nova-metadata-metadata" Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.319825 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc" containerName="nova-metadata-log" Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.321687 4813 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.324008 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.327335 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.338754 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.446477 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gpbp\" (UniqueName: \"kubernetes.io/projected/03c53522-206c-442c-ac97-7044b0f81e4a-kube-api-access-9gpbp\") pod \"nova-metadata-0\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") " pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.446569 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") " pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.446639 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-config-data\") pod \"nova-metadata-0\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") " pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.446660 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03c53522-206c-442c-ac97-7044b0f81e4a-logs\") pod \"nova-metadata-0\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") " pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.446691 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") " pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.488440 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc" path="/var/lib/kubelet/pods/c3d09bea-abf7-4cca-97ca-55e9e3d5d6cc/volumes"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.549056 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gpbp\" (UniqueName: \"kubernetes.io/projected/03c53522-206c-442c-ac97-7044b0f81e4a-kube-api-access-9gpbp\") pod \"nova-metadata-0\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") " pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.549169 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") " pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.549265 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-config-data\") pod \"nova-metadata-0\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") " pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.549299 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03c53522-206c-442c-ac97-7044b0f81e4a-logs\") pod \"nova-metadata-0\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") " pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.549343 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") " pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.550070 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03c53522-206c-442c-ac97-7044b0f81e4a-logs\") pod \"nova-metadata-0\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") " pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.555710 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-config-data\") pod \"nova-metadata-0\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") " pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.558600 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") " pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.559033 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") " pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.588600 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gpbp\" (UniqueName: \"kubernetes.io/projected/03c53522-206c-442c-ac97-7044b0f81e4a-kube-api-access-9gpbp\") pod \"nova-metadata-0\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") " pod="openstack/nova-metadata-0"
Feb 19 18:51:21 crc kubenswrapper[4813]: I0219 18:51:21.642048 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 18:51:22 crc kubenswrapper[4813]: I0219 18:51:22.187343 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 18:51:22 crc kubenswrapper[4813]: I0219 18:51:22.230312 4813 generic.go:334] "Generic (PLEG): container finished" podID="3d6da42c-1604-467c-b9ef-dde47711b95a" containerID="e716782824789876e737f679b53e25e942ffd227219538feea44c84706b6d858" exitCode=0
Feb 19 18:51:22 crc kubenswrapper[4813]: I0219 18:51:22.230403 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5knnd" event={"ID":"3d6da42c-1604-467c-b9ef-dde47711b95a","Type":"ContainerDied","Data":"e716782824789876e737f679b53e25e942ffd227219538feea44c84706b6d858"}
Feb 19 18:51:22 crc kubenswrapper[4813]: I0219 18:51:22.235892 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03c53522-206c-442c-ac97-7044b0f81e4a","Type":"ContainerStarted","Data":"cadc8335b35f9c43eee69a52e243ebb493d8bec4c255bf75c090e0c8f92dcd6d"}
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.254255 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03c53522-206c-442c-ac97-7044b0f81e4a","Type":"ContainerStarted","Data":"64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980"}
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.254851 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03c53522-206c-442c-ac97-7044b0f81e4a","Type":"ContainerStarted","Data":"0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401"}
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.289282 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.289266011 podStartE2EDuration="2.289266011s" podCreationTimestamp="2026-02-19 18:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:51:23.287473776 +0000 UTC m=+1302.512914337" watchObservedRunningTime="2026-02-19 18:51:23.289266011 +0000 UTC m=+1302.514706552"
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.361146 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.361520 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.665964 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.666010 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.678319 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5knnd"
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.688849 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl"
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.701539 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.793942 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw"]
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.794208 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" podUID="dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" containerName="dnsmasq-dns" containerID="cri-o://6f157cff439cf9cd81cac1e259897c0f46a7ac0701b5998a6e0fb3e7cd22c487" gracePeriod=10
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.799557 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-scripts\") pod \"3d6da42c-1604-467c-b9ef-dde47711b95a\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") "
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.799634 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzhbq\" (UniqueName: \"kubernetes.io/projected/3d6da42c-1604-467c-b9ef-dde47711b95a-kube-api-access-fzhbq\") pod \"3d6da42c-1604-467c-b9ef-dde47711b95a\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") "
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.799659 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-combined-ca-bundle\") pod \"3d6da42c-1604-467c-b9ef-dde47711b95a\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") "
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.799810 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-config-data\") pod \"3d6da42c-1604-467c-b9ef-dde47711b95a\" (UID: \"3d6da42c-1604-467c-b9ef-dde47711b95a\") "
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.809180 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6da42c-1604-467c-b9ef-dde47711b95a-kube-api-access-fzhbq" (OuterVolumeSpecName: "kube-api-access-fzhbq") pod "3d6da42c-1604-467c-b9ef-dde47711b95a" (UID: "3d6da42c-1604-467c-b9ef-dde47711b95a"). InnerVolumeSpecName "kube-api-access-fzhbq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.829086 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-scripts" (OuterVolumeSpecName: "scripts") pod "3d6da42c-1604-467c-b9ef-dde47711b95a" (UID: "3d6da42c-1604-467c-b9ef-dde47711b95a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.854159 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-config-data" (OuterVolumeSpecName: "config-data") pod "3d6da42c-1604-467c-b9ef-dde47711b95a" (UID: "3d6da42c-1604-467c-b9ef-dde47711b95a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.869181 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d6da42c-1604-467c-b9ef-dde47711b95a" (UID: "3d6da42c-1604-467c-b9ef-dde47711b95a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.902435 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.902483 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzhbq\" (UniqueName: \"kubernetes.io/projected/3d6da42c-1604-467c-b9ef-dde47711b95a-kube-api-access-fzhbq\") on node \"crc\" DevicePath \"\""
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.902493 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 18:51:23 crc kubenswrapper[4813]: I0219 18:51:23.902501 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d6da42c-1604-467c-b9ef-dde47711b95a-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.277400 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5knnd" event={"ID":"3d6da42c-1604-467c-b9ef-dde47711b95a","Type":"ContainerDied","Data":"c8b5c29fa08448ca6aa47d5cd6998cafeacfbea82b3ce33008b68fbee01b2923"}
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.277444 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b5c29fa08448ca6aa47d5cd6998cafeacfbea82b3ce33008b68fbee01b2923"
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.277512 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5knnd"
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.283772 4813 generic.go:334] "Generic (PLEG): container finished" podID="dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" containerID="6f157cff439cf9cd81cac1e259897c0f46a7ac0701b5998a6e0fb3e7cd22c487" exitCode=0
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.283859 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" event={"ID":"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b","Type":"ContainerDied","Data":"6f157cff439cf9cd81cac1e259897c0f46a7ac0701b5998a6e0fb3e7cd22c487"}
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.313976 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.389722 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" podUID="dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.161:5353: connect: connection refused"
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.444224 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a4ca3e22-9d6b-4872-bbec-ad276a27540f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.444329 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a4ca3e22-9d6b-4872-bbec-ad276a27540f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.493333 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.493580 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a4ca3e22-9d6b-4872-bbec-ad276a27540f" containerName="nova-api-log" containerID="cri-o://9c53928bb686ea7fb8a94957477024c8651c1173220ce378d9c31e1768bb67c2" gracePeriod=30
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.494060 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a4ca3e22-9d6b-4872-bbec-ad276a27540f" containerName="nova-api-api" containerID="cri-o://1974dc952a0d7a118c1b4ae017d610213b3f38ecf31c3a385abe73f7a324638b" gracePeriod=30
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.509790 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.815107 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.819549 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw"
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.919101 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-ovsdbserver-sb\") pod \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") "
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.919525 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-dns-svc\") pod \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") "
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.919610 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-config\") pod \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") "
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.919723 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8prp\" (UniqueName: \"kubernetes.io/projected/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-kube-api-access-j8prp\") pod \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") "
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.919814 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-ovsdbserver-nb\") pod \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") "
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.919930 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-dns-swift-storage-0\") pod \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\" (UID: \"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b\") "
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.930125 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-kube-api-access-j8prp" (OuterVolumeSpecName: "kube-api-access-j8prp") pod "dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" (UID: "dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b"). InnerVolumeSpecName "kube-api-access-j8prp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:51:24 crc kubenswrapper[4813]: I0219 18:51:24.999145 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" (UID: "dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.003802 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" (UID: "dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.015153 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" (UID: "dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.020981 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-config" (OuterVolumeSpecName: "config") pod "dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" (UID: "dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.025558 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8prp\" (UniqueName: \"kubernetes.io/projected/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-kube-api-access-j8prp\") on node \"crc\" DevicePath \"\""
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.025600 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.025616 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.025626 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.025637 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-config\") on node \"crc\" DevicePath \"\""
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.050273 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" (UID: "dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.127140 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.295278 4813 generic.go:334] "Generic (PLEG): container finished" podID="a4ca3e22-9d6b-4872-bbec-ad276a27540f" containerID="9c53928bb686ea7fb8a94957477024c8651c1173220ce378d9c31e1768bb67c2" exitCode=143
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.295565 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4ca3e22-9d6b-4872-bbec-ad276a27540f","Type":"ContainerDied","Data":"9c53928bb686ea7fb8a94957477024c8651c1173220ce378d9c31e1768bb67c2"}
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.297256 4813 generic.go:334] "Generic (PLEG): container finished" podID="b98f2a59-47e4-46aa-acbe-74250ed631ca" containerID="1ba735df184697c88fb38b34247b6c1cb15bd976c21e2519ffc06459f7f5c81e" exitCode=0
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.297324 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-spl4z" event={"ID":"b98f2a59-47e4-46aa-acbe-74250ed631ca","Type":"ContainerDied","Data":"1ba735df184697c88fb38b34247b6c1cb15bd976c21e2519ffc06459f7f5c81e"}
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.300501 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="03c53522-206c-442c-ac97-7044b0f81e4a" containerName="nova-metadata-log" containerID="cri-o://0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401" gracePeriod=30
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.300759 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw"
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.304106 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw" event={"ID":"dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b","Type":"ContainerDied","Data":"87409e993151f2c2bbc335e7d311939fa641e804d19876e2f60c3789dd5c0cc9"}
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.304165 4813 scope.go:117] "RemoveContainer" containerID="6f157cff439cf9cd81cac1e259897c0f46a7ac0701b5998a6e0fb3e7cd22c487"
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.304225 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="03c53522-206c-442c-ac97-7044b0f81e4a" containerName="nova-metadata-metadata" containerID="cri-o://64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980" gracePeriod=30
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.337176 4813 scope.go:117] "RemoveContainer" containerID="05799b021a1f38afb61377db7f0d244f5916a280faff16bcca5e207c5e9ec44e"
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.353410 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw"]
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.360690 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c8dc7b4d9-ltqxw"]
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.520584 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" path="/var/lib/kubelet/pods/dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b/volumes"
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.825968 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.943250 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-nova-metadata-tls-certs\") pod \"03c53522-206c-442c-ac97-7044b0f81e4a\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") "
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.943534 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-config-data\") pod \"03c53522-206c-442c-ac97-7044b0f81e4a\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") "
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.943626 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03c53522-206c-442c-ac97-7044b0f81e4a-logs\") pod \"03c53522-206c-442c-ac97-7044b0f81e4a\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") "
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.943656 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gpbp\" (UniqueName: \"kubernetes.io/projected/03c53522-206c-442c-ac97-7044b0f81e4a-kube-api-access-9gpbp\") pod \"03c53522-206c-442c-ac97-7044b0f81e4a\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") "
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.943677 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-combined-ca-bundle\") pod \"03c53522-206c-442c-ac97-7044b0f81e4a\" (UID: \"03c53522-206c-442c-ac97-7044b0f81e4a\") "
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.944603 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03c53522-206c-442c-ac97-7044b0f81e4a-logs" (OuterVolumeSpecName: "logs") pod "03c53522-206c-442c-ac97-7044b0f81e4a" (UID: "03c53522-206c-442c-ac97-7044b0f81e4a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.950643 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c53522-206c-442c-ac97-7044b0f81e4a-kube-api-access-9gpbp" (OuterVolumeSpecName: "kube-api-access-9gpbp") pod "03c53522-206c-442c-ac97-7044b0f81e4a" (UID: "03c53522-206c-442c-ac97-7044b0f81e4a"). InnerVolumeSpecName "kube-api-access-9gpbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.973079 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03c53522-206c-442c-ac97-7044b0f81e4a" (UID: "03c53522-206c-442c-ac97-7044b0f81e4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:51:25 crc kubenswrapper[4813]: I0219 18:51:25.980081 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-config-data" (OuterVolumeSpecName: "config-data") pod "03c53522-206c-442c-ac97-7044b0f81e4a" (UID: "03c53522-206c-442c-ac97-7044b0f81e4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.000087 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "03c53522-206c-442c-ac97-7044b0f81e4a" (UID: "03c53522-206c-442c-ac97-7044b0f81e4a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.046024 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03c53522-206c-442c-ac97-7044b0f81e4a-logs\") on node \"crc\" DevicePath \"\""
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.046058 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gpbp\" (UniqueName: \"kubernetes.io/projected/03c53522-206c-442c-ac97-7044b0f81e4a-kube-api-access-9gpbp\") on node \"crc\" DevicePath \"\""
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.046070 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.046080 4813 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.046089 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03c53522-206c-442c-ac97-7044b0f81e4a-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.312844 4813 generic.go:334] "Generic (PLEG): container finished" podID="03c53522-206c-442c-ac97-7044b0f81e4a" containerID="64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980" exitCode=0
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.312928 4813 generic.go:334] "Generic (PLEG): container finished" podID="03c53522-206c-442c-ac97-7044b0f81e4a" containerID="0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401" exitCode=143
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.312934 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03c53522-206c-442c-ac97-7044b0f81e4a","Type":"ContainerDied","Data":"64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980"}
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.313045 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.313082 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03c53522-206c-442c-ac97-7044b0f81e4a","Type":"ContainerDied","Data":"0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401"}
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.313116 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03c53522-206c-442c-ac97-7044b0f81e4a","Type":"ContainerDied","Data":"cadc8335b35f9c43eee69a52e243ebb493d8bec4c255bf75c090e0c8f92dcd6d"}
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.313146 4813 scope.go:117] "RemoveContainer" containerID="64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980"
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.318680 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="071faa4f-f20b-419c-bd95-3e65d40101aa" containerName="nova-scheduler-scheduler" containerID="cri-o://ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915" gracePeriod=30
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.392204 4813 scope.go:117] "RemoveContainer" containerID="0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401"
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.400506 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.433624 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.437123 4813 scope.go:117] "RemoveContainer" containerID="64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980"
Feb 19 18:51:26 crc kubenswrapper[4813]: E0219 18:51:26.437551 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980\": container with ID starting with 64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980 not found: ID does not exist" containerID="64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980"
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.437591 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980"} err="failed to get container status \"64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980\": rpc error: code = NotFound desc = could not find container \"64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980\": container with ID starting with 64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980 not found: ID does not exist"
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.437616 4813 scope.go:117] "RemoveContainer" containerID="0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401"
Feb 19 18:51:26 crc kubenswrapper[4813]: E0219 18:51:26.438095 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401\": container with ID starting with 0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401 not found: ID does not exist" containerID="0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401"
Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.438120 
4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401"} err="failed to get container status \"0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401\": rpc error: code = NotFound desc = could not find container \"0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401\": container with ID starting with 0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401 not found: ID does not exist" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.438137 4813 scope.go:117] "RemoveContainer" containerID="64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.438464 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980"} err="failed to get container status \"64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980\": rpc error: code = NotFound desc = could not find container \"64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980\": container with ID starting with 64f14ed323d6b0ffa5cbca3164e6d8ac2cce2df22440e498c3d2176a8d2fd980 not found: ID does not exist" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.438527 4813 scope.go:117] "RemoveContainer" containerID="0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.438845 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401"} err="failed to get container status \"0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401\": rpc error: code = NotFound desc = could not find container \"0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401\": container with ID starting with 
0751b4628aabde28f78492ff453afa0077b952a7e066194c942821869570d401 not found: ID does not exist" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.450137 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:51:26 crc kubenswrapper[4813]: E0219 18:51:26.450848 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6da42c-1604-467c-b9ef-dde47711b95a" containerName="nova-manage" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.450876 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6da42c-1604-467c-b9ef-dde47711b95a" containerName="nova-manage" Feb 19 18:51:26 crc kubenswrapper[4813]: E0219 18:51:26.450908 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" containerName="dnsmasq-dns" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.450921 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" containerName="dnsmasq-dns" Feb 19 18:51:26 crc kubenswrapper[4813]: E0219 18:51:26.450943 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c53522-206c-442c-ac97-7044b0f81e4a" containerName="nova-metadata-metadata" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.451031 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c53522-206c-442c-ac97-7044b0f81e4a" containerName="nova-metadata-metadata" Feb 19 18:51:26 crc kubenswrapper[4813]: E0219 18:51:26.451063 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" containerName="init" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.451076 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" containerName="init" Feb 19 18:51:26 crc kubenswrapper[4813]: E0219 18:51:26.451104 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c53522-206c-442c-ac97-7044b0f81e4a" 
containerName="nova-metadata-log" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.451116 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c53522-206c-442c-ac97-7044b0f81e4a" containerName="nova-metadata-log" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.451419 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c53522-206c-442c-ac97-7044b0f81e4a" containerName="nova-metadata-metadata" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.451449 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc4cab34-4cef-4f7e-9738-d4ceaa32cc0b" containerName="dnsmasq-dns" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.451471 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c53522-206c-442c-ac97-7044b0f81e4a" containerName="nova-metadata-log" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.451501 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6da42c-1604-467c-b9ef-dde47711b95a" containerName="nova-manage" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.453203 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.456664 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.460185 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.461647 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.555042 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmdn8\" (UniqueName: \"kubernetes.io/projected/8cad7f3d-9640-4949-bd7a-11407f2c8f97-kube-api-access-bmdn8\") pod \"nova-metadata-0\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.555135 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.555531 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-config-data\") pod \"nova-metadata-0\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.555618 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.555669 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cad7f3d-9640-4949-bd7a-11407f2c8f97-logs\") pod \"nova-metadata-0\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.656942 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.657044 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-config-data\") pod \"nova-metadata-0\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.657073 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.657100 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cad7f3d-9640-4949-bd7a-11407f2c8f97-logs\") pod \"nova-metadata-0\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " pod="openstack/nova-metadata-0" Feb 19 
18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.657165 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmdn8\" (UniqueName: \"kubernetes.io/projected/8cad7f3d-9640-4949-bd7a-11407f2c8f97-kube-api-access-bmdn8\") pod \"nova-metadata-0\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.657540 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cad7f3d-9640-4949-bd7a-11407f2c8f97-logs\") pod \"nova-metadata-0\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.661182 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.661421 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.674678 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-config-data\") pod \"nova-metadata-0\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.675135 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmdn8\" (UniqueName: 
\"kubernetes.io/projected/8cad7f3d-9640-4949-bd7a-11407f2c8f97-kube-api-access-bmdn8\") pod \"nova-metadata-0\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.783295 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 18:51:26 crc kubenswrapper[4813]: I0219 18:51:26.875693 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.067513 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-scripts\") pod \"b98f2a59-47e4-46aa-acbe-74250ed631ca\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.067608 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh86c\" (UniqueName: \"kubernetes.io/projected/b98f2a59-47e4-46aa-acbe-74250ed631ca-kube-api-access-fh86c\") pod \"b98f2a59-47e4-46aa-acbe-74250ed631ca\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.067656 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-combined-ca-bundle\") pod \"b98f2a59-47e4-46aa-acbe-74250ed631ca\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.067688 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-config-data\") pod \"b98f2a59-47e4-46aa-acbe-74250ed631ca\" (UID: \"b98f2a59-47e4-46aa-acbe-74250ed631ca\") " Feb 19 18:51:27 crc 
kubenswrapper[4813]: I0219 18:51:27.072342 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b98f2a59-47e4-46aa-acbe-74250ed631ca-kube-api-access-fh86c" (OuterVolumeSpecName: "kube-api-access-fh86c") pod "b98f2a59-47e4-46aa-acbe-74250ed631ca" (UID: "b98f2a59-47e4-46aa-acbe-74250ed631ca"). InnerVolumeSpecName "kube-api-access-fh86c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.073636 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-scripts" (OuterVolumeSpecName: "scripts") pod "b98f2a59-47e4-46aa-acbe-74250ed631ca" (UID: "b98f2a59-47e4-46aa-acbe-74250ed631ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.115633 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-config-data" (OuterVolumeSpecName: "config-data") pod "b98f2a59-47e4-46aa-acbe-74250ed631ca" (UID: "b98f2a59-47e4-46aa-acbe-74250ed631ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.119388 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b98f2a59-47e4-46aa-acbe-74250ed631ca" (UID: "b98f2a59-47e4-46aa-acbe-74250ed631ca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.169565 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh86c\" (UniqueName: \"kubernetes.io/projected/b98f2a59-47e4-46aa-acbe-74250ed631ca-kube-api-access-fh86c\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.169594 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.169603 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.169611 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98f2a59-47e4-46aa-acbe-74250ed631ca-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.258718 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.328558 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-spl4z" event={"ID":"b98f2a59-47e4-46aa-acbe-74250ed631ca","Type":"ContainerDied","Data":"2a25594eb86a8b04ccafbac99714dff18b4f67f9a8f46b7bb7d2e2c3856db867"} Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.328602 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-spl4z" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.328622 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a25594eb86a8b04ccafbac99714dff18b4f67f9a8f46b7bb7d2e2c3856db867" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.331190 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cad7f3d-9640-4949-bd7a-11407f2c8f97","Type":"ContainerStarted","Data":"fcf2a6aa3ed42d789311bfaf2bfb4fc9d9bb8cffb037ddee33240383729c8007"} Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.418310 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 18:51:27 crc kubenswrapper[4813]: E0219 18:51:27.419055 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98f2a59-47e4-46aa-acbe-74250ed631ca" containerName="nova-cell1-conductor-db-sync" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.419080 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98f2a59-47e4-46aa-acbe-74250ed631ca" containerName="nova-cell1-conductor-db-sync" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.419368 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b98f2a59-47e4-46aa-acbe-74250ed631ca" containerName="nova-cell1-conductor-db-sync" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.420194 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.423426 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.446104 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.482944 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03c53522-206c-442c-ac97-7044b0f81e4a" path="/var/lib/kubelet/pods/03c53522-206c-442c-ac97-7044b0f81e4a/volumes" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.575851 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/973faa07-fbab-4a50-ac4a-c62302e9f9c1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"973faa07-fbab-4a50-ac4a-c62302e9f9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.575937 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvwmf\" (UniqueName: \"kubernetes.io/projected/973faa07-fbab-4a50-ac4a-c62302e9f9c1-kube-api-access-pvwmf\") pod \"nova-cell1-conductor-0\" (UID: \"973faa07-fbab-4a50-ac4a-c62302e9f9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.576052 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973faa07-fbab-4a50-ac4a-c62302e9f9c1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"973faa07-fbab-4a50-ac4a-c62302e9f9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.678474 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/973faa07-fbab-4a50-ac4a-c62302e9f9c1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"973faa07-fbab-4a50-ac4a-c62302e9f9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.678619 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvwmf\" (UniqueName: \"kubernetes.io/projected/973faa07-fbab-4a50-ac4a-c62302e9f9c1-kube-api-access-pvwmf\") pod \"nova-cell1-conductor-0\" (UID: \"973faa07-fbab-4a50-ac4a-c62302e9f9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.678752 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973faa07-fbab-4a50-ac4a-c62302e9f9c1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"973faa07-fbab-4a50-ac4a-c62302e9f9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.683931 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/973faa07-fbab-4a50-ac4a-c62302e9f9c1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"973faa07-fbab-4a50-ac4a-c62302e9f9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.684235 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973faa07-fbab-4a50-ac4a-c62302e9f9c1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"973faa07-fbab-4a50-ac4a-c62302e9f9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.712133 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvwmf\" (UniqueName: \"kubernetes.io/projected/973faa07-fbab-4a50-ac4a-c62302e9f9c1-kube-api-access-pvwmf\") pod 
\"nova-cell1-conductor-0\" (UID: \"973faa07-fbab-4a50-ac4a-c62302e9f9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 19 18:51:27 crc kubenswrapper[4813]: I0219 18:51:27.762243 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 18:51:28 crc kubenswrapper[4813]: I0219 18:51:28.232200 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 18:51:28 crc kubenswrapper[4813]: I0219 18:51:28.355610 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cad7f3d-9640-4949-bd7a-11407f2c8f97","Type":"ContainerStarted","Data":"91240f6202bc1a26cbe97d04b6f255e2c37f1b1794253ff8fd5fd43a5a9ad7f5"} Feb 19 18:51:28 crc kubenswrapper[4813]: I0219 18:51:28.355684 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cad7f3d-9640-4949-bd7a-11407f2c8f97","Type":"ContainerStarted","Data":"54c437315b39fd0d37736226869465e05b6ed2ef27f4780c07255ef49d0147a6"} Feb 19 18:51:28 crc kubenswrapper[4813]: I0219 18:51:28.356947 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"973faa07-fbab-4a50-ac4a-c62302e9f9c1","Type":"ContainerStarted","Data":"f1526a1adea998ab2b5e1300ebf3c3cdf8a9e760185edc27a1a4789e079ed460"} Feb 19 18:51:28 crc kubenswrapper[4813]: I0219 18:51:28.393136 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.3931205110000002 podStartE2EDuration="2.393120511s" podCreationTimestamp="2026-02-19 18:51:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:51:28.387748265 +0000 UTC m=+1307.613188846" watchObservedRunningTime="2026-02-19 18:51:28.393120511 +0000 UTC m=+1307.618561052" Feb 19 18:51:28 crc kubenswrapper[4813]: E0219 18:51:28.671703 4813 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 18:51:28 crc kubenswrapper[4813]: E0219 18:51:28.674600 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 18:51:28 crc kubenswrapper[4813]: E0219 18:51:28.676159 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 18:51:28 crc kubenswrapper[4813]: E0219 18:51:28.676205 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="071faa4f-f20b-419c-bd95-3e65d40101aa" containerName="nova-scheduler-scheduler" Feb 19 18:51:29 crc kubenswrapper[4813]: I0219 18:51:29.371000 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"973faa07-fbab-4a50-ac4a-c62302e9f9c1","Type":"ContainerStarted","Data":"120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530"} Feb 19 18:51:29 crc kubenswrapper[4813]: I0219 18:51:29.400031 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" 
podStartSLOduration=2.400011774 podStartE2EDuration="2.400011774s" podCreationTimestamp="2026-02-19 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:51:29.391828181 +0000 UTC m=+1308.617268762" watchObservedRunningTime="2026-02-19 18:51:29.400011774 +0000 UTC m=+1308.625452315" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.229220 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.327767 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/071faa4f-f20b-419c-bd95-3e65d40101aa-config-data\") pod \"071faa4f-f20b-419c-bd95-3e65d40101aa\" (UID: \"071faa4f-f20b-419c-bd95-3e65d40101aa\") " Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.327931 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/071faa4f-f20b-419c-bd95-3e65d40101aa-combined-ca-bundle\") pod \"071faa4f-f20b-419c-bd95-3e65d40101aa\" (UID: \"071faa4f-f20b-419c-bd95-3e65d40101aa\") " Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.328341 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brp6t\" (UniqueName: \"kubernetes.io/projected/071faa4f-f20b-419c-bd95-3e65d40101aa-kube-api-access-brp6t\") pod \"071faa4f-f20b-419c-bd95-3e65d40101aa\" (UID: \"071faa4f-f20b-419c-bd95-3e65d40101aa\") " Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.333057 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/071faa4f-f20b-419c-bd95-3e65d40101aa-kube-api-access-brp6t" (OuterVolumeSpecName: "kube-api-access-brp6t") pod "071faa4f-f20b-419c-bd95-3e65d40101aa" (UID: 
"071faa4f-f20b-419c-bd95-3e65d40101aa"). InnerVolumeSpecName "kube-api-access-brp6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.371167 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/071faa4f-f20b-419c-bd95-3e65d40101aa-config-data" (OuterVolumeSpecName: "config-data") pod "071faa4f-f20b-419c-bd95-3e65d40101aa" (UID: "071faa4f-f20b-419c-bd95-3e65d40101aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.373294 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/071faa4f-f20b-419c-bd95-3e65d40101aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "071faa4f-f20b-419c-bd95-3e65d40101aa" (UID: "071faa4f-f20b-419c-bd95-3e65d40101aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.390650 4813 generic.go:334] "Generic (PLEG): container finished" podID="071faa4f-f20b-419c-bd95-3e65d40101aa" containerID="ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915" exitCode=0 Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.390711 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.390786 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"071faa4f-f20b-419c-bd95-3e65d40101aa","Type":"ContainerDied","Data":"ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915"} Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.390831 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"071faa4f-f20b-419c-bd95-3e65d40101aa","Type":"ContainerDied","Data":"8a6d5d56cb8ab8037c50ca0a3e7281aaf62ce83434241d081aedc3dad3e2856a"} Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.390858 4813 scope.go:117] "RemoveContainer" containerID="ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.394281 4813 generic.go:334] "Generic (PLEG): container finished" podID="a4ca3e22-9d6b-4872-bbec-ad276a27540f" containerID="1974dc952a0d7a118c1b4ae017d610213b3f38ecf31c3a385abe73f7a324638b" exitCode=0 Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.394425 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4ca3e22-9d6b-4872-bbec-ad276a27540f","Type":"ContainerDied","Data":"1974dc952a0d7a118c1b4ae017d610213b3f38ecf31c3a385abe73f7a324638b"} Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.394457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4ca3e22-9d6b-4872-bbec-ad276a27540f","Type":"ContainerDied","Data":"bd4d57b437ba8156b9f17c0d089600d6c35f0c00ee2a9e2c3d65a861841716e6"} Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.394471 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd4d57b437ba8156b9f17c0d089600d6c35f0c00ee2a9e2c3d65a861841716e6" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.396260 4813 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.397895 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.413287 4813 scope.go:117] "RemoveContainer" containerID="ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915" Feb 19 18:51:30 crc kubenswrapper[4813]: E0219 18:51:30.413742 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915\": container with ID starting with ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915 not found: ID does not exist" containerID="ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.413787 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915"} err="failed to get container status \"ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915\": rpc error: code = NotFound desc = could not find container \"ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915\": container with ID starting with ae2a972e4ff33caab66dcf522d1de90b13f54f86b8b19860273ec8b250252915 not found: ID does not exist" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.432726 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/071faa4f-f20b-419c-bd95-3e65d40101aa-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.433013 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/071faa4f-f20b-419c-bd95-3e65d40101aa-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.433163 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brp6t\" (UniqueName: \"kubernetes.io/projected/071faa4f-f20b-419c-bd95-3e65d40101aa-kube-api-access-brp6t\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.458061 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.469018 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.476904 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 18:51:30 crc kubenswrapper[4813]: E0219 18:51:30.477323 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ca3e22-9d6b-4872-bbec-ad276a27540f" containerName="nova-api-api" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.477341 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ca3e22-9d6b-4872-bbec-ad276a27540f" containerName="nova-api-api" Feb 19 18:51:30 crc kubenswrapper[4813]: E0219 18:51:30.477348 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ca3e22-9d6b-4872-bbec-ad276a27540f" containerName="nova-api-log" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.477354 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ca3e22-9d6b-4872-bbec-ad276a27540f" containerName="nova-api-log" Feb 19 18:51:30 crc kubenswrapper[4813]: E0219 18:51:30.477376 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071faa4f-f20b-419c-bd95-3e65d40101aa" containerName="nova-scheduler-scheduler" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.477382 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="071faa4f-f20b-419c-bd95-3e65d40101aa" containerName="nova-scheduler-scheduler" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 
18:51:30.477562 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ca3e22-9d6b-4872-bbec-ad276a27540f" containerName="nova-api-log" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.477580 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="071faa4f-f20b-419c-bd95-3e65d40101aa" containerName="nova-scheduler-scheduler" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.477589 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ca3e22-9d6b-4872-bbec-ad276a27540f" containerName="nova-api-api" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.478320 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.482165 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.484916 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.534976 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ca3e22-9d6b-4872-bbec-ad276a27540f-logs\") pod \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\" (UID: \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.535021 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ca3e22-9d6b-4872-bbec-ad276a27540f-combined-ca-bundle\") pod \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\" (UID: \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.535108 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a4ca3e22-9d6b-4872-bbec-ad276a27540f-config-data\") pod \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\" (UID: \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.535186 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw6r2\" (UniqueName: \"kubernetes.io/projected/a4ca3e22-9d6b-4872-bbec-ad276a27540f-kube-api-access-gw6r2\") pod \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\" (UID: \"a4ca3e22-9d6b-4872-bbec-ad276a27540f\") " Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.535719 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4ca3e22-9d6b-4872-bbec-ad276a27540f-logs" (OuterVolumeSpecName: "logs") pod "a4ca3e22-9d6b-4872-bbec-ad276a27540f" (UID: "a4ca3e22-9d6b-4872-bbec-ad276a27540f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.536565 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ca3e22-9d6b-4872-bbec-ad276a27540f-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.539784 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4ca3e22-9d6b-4872-bbec-ad276a27540f-kube-api-access-gw6r2" (OuterVolumeSpecName: "kube-api-access-gw6r2") pod "a4ca3e22-9d6b-4872-bbec-ad276a27540f" (UID: "a4ca3e22-9d6b-4872-bbec-ad276a27540f"). InnerVolumeSpecName "kube-api-access-gw6r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.565278 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ca3e22-9d6b-4872-bbec-ad276a27540f-config-data" (OuterVolumeSpecName: "config-data") pod "a4ca3e22-9d6b-4872-bbec-ad276a27540f" (UID: "a4ca3e22-9d6b-4872-bbec-ad276a27540f"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.565680 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ca3e22-9d6b-4872-bbec-ad276a27540f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4ca3e22-9d6b-4872-bbec-ad276a27540f" (UID: "a4ca3e22-9d6b-4872-bbec-ad276a27540f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.638505 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2plm\" (UniqueName: \"kubernetes.io/projected/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-kube-api-access-h2plm\") pod \"nova-scheduler-0\" (UID: \"78de48f7-3d0a-45dc-8aad-f61546b2f5a6\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.638857 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"78de48f7-3d0a-45dc-8aad-f61546b2f5a6\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.639663 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-config-data\") pod \"nova-scheduler-0\" (UID: \"78de48f7-3d0a-45dc-8aad-f61546b2f5a6\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.640215 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ca3e22-9d6b-4872-bbec-ad276a27540f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:30 crc 
kubenswrapper[4813]: I0219 18:51:30.640300 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ca3e22-9d6b-4872-bbec-ad276a27540f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.640321 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw6r2\" (UniqueName: \"kubernetes.io/projected/a4ca3e22-9d6b-4872-bbec-ad276a27540f-kube-api-access-gw6r2\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.741791 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-config-data\") pod \"nova-scheduler-0\" (UID: \"78de48f7-3d0a-45dc-8aad-f61546b2f5a6\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.742029 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2plm\" (UniqueName: \"kubernetes.io/projected/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-kube-api-access-h2plm\") pod \"nova-scheduler-0\" (UID: \"78de48f7-3d0a-45dc-8aad-f61546b2f5a6\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.742187 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"78de48f7-3d0a-45dc-8aad-f61546b2f5a6\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.746212 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-config-data\") pod \"nova-scheduler-0\" (UID: \"78de48f7-3d0a-45dc-8aad-f61546b2f5a6\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:30 crc 
kubenswrapper[4813]: I0219 18:51:30.747127 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"78de48f7-3d0a-45dc-8aad-f61546b2f5a6\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.757411 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2plm\" (UniqueName: \"kubernetes.io/projected/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-kube-api-access-h2plm\") pod \"nova-scheduler-0\" (UID: \"78de48f7-3d0a-45dc-8aad-f61546b2f5a6\") " pod="openstack/nova-scheduler-0" Feb 19 18:51:30 crc kubenswrapper[4813]: I0219 18:51:30.798146 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.281729 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 18:51:31 crc kubenswrapper[4813]: W0219 18:51:31.282930 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78de48f7_3d0a_45dc_8aad_f61546b2f5a6.slice/crio-9e52ff3099f45b1a1cf1a44758b6fa13a4b27694f77534cb9ee1cd6e12180924 WatchSource:0}: Error finding container 9e52ff3099f45b1a1cf1a44758b6fa13a4b27694f77534cb9ee1cd6e12180924: Status 404 returned error can't find the container with id 9e52ff3099f45b1a1cf1a44758b6fa13a4b27694f77534cb9ee1cd6e12180924 Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.407699 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"78de48f7-3d0a-45dc-8aad-f61546b2f5a6","Type":"ContainerStarted","Data":"9e52ff3099f45b1a1cf1a44758b6fa13a4b27694f77534cb9ee1cd6e12180924"} Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.413429 4813 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.487095 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="071faa4f-f20b-419c-bd95-3e65d40101aa" path="/var/lib/kubelet/pods/071faa4f-f20b-419c-bd95-3e65d40101aa/volumes" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.527229 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.537427 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.545705 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.561244 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.568618 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.594611 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.658288 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-config-data\") pod \"nova-api-0\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " pod="openstack/nova-api-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.658438 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh5mg\" (UniqueName: \"kubernetes.io/projected/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-kube-api-access-rh5mg\") pod \"nova-api-0\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " pod="openstack/nova-api-0" Feb 19 18:51:31 crc 
kubenswrapper[4813]: I0219 18:51:31.658590 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " pod="openstack/nova-api-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.658802 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-logs\") pod \"nova-api-0\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " pod="openstack/nova-api-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.760433 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-config-data\") pod \"nova-api-0\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " pod="openstack/nova-api-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.760709 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh5mg\" (UniqueName: \"kubernetes.io/projected/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-kube-api-access-rh5mg\") pod \"nova-api-0\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " pod="openstack/nova-api-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.760787 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " pod="openstack/nova-api-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.760850 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-logs\") pod \"nova-api-0\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " pod="openstack/nova-api-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.761515 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-logs\") pod \"nova-api-0\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " pod="openstack/nova-api-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.768861 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-config-data\") pod \"nova-api-0\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " pod="openstack/nova-api-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.771453 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " pod="openstack/nova-api-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.777533 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh5mg\" (UniqueName: \"kubernetes.io/projected/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-kube-api-access-rh5mg\") pod \"nova-api-0\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " pod="openstack/nova-api-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.784294 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.784678 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 18:51:31 crc kubenswrapper[4813]: I0219 18:51:31.897399 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 18:51:32 crc kubenswrapper[4813]: I0219 18:51:32.403724 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 18:51:32 crc kubenswrapper[4813]: W0219 18:51:32.418350 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c77d229_1f7e_432b_b8e2_f1bcf0561b04.slice/crio-3276e3e3d1d94da8a539a256d48c0499afb8f5bf30a7a6aebcccaf88b5c43bcd WatchSource:0}: Error finding container 3276e3e3d1d94da8a539a256d48c0499afb8f5bf30a7a6aebcccaf88b5c43bcd: Status 404 returned error can't find the container with id 3276e3e3d1d94da8a539a256d48c0499afb8f5bf30a7a6aebcccaf88b5c43bcd Feb 19 18:51:32 crc kubenswrapper[4813]: I0219 18:51:32.428836 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"78de48f7-3d0a-45dc-8aad-f61546b2f5a6","Type":"ContainerStarted","Data":"e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c"} Feb 19 18:51:32 crc kubenswrapper[4813]: I0219 18:51:32.450746 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.450728952 podStartE2EDuration="2.450728952s" podCreationTimestamp="2026-02-19 18:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:51:32.445931833 +0000 UTC m=+1311.671372444" watchObservedRunningTime="2026-02-19 18:51:32.450728952 +0000 UTC m=+1311.676169493" Feb 19 18:51:33 crc kubenswrapper[4813]: I0219 18:51:33.459448 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c77d229-1f7e-432b-b8e2-f1bcf0561b04","Type":"ContainerStarted","Data":"1b9f3c778313eb9d2af44946a1f3868ac0120afcac2b852abaf93686eb2eec5f"} Feb 19 18:51:33 crc kubenswrapper[4813]: I0219 18:51:33.461938 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-api-0" event={"ID":"8c77d229-1f7e-432b-b8e2-f1bcf0561b04","Type":"ContainerStarted","Data":"d86f14bde71bd4136682115120b1e45b69d7aa9076c425b5939eca01739e7f2a"} Feb 19 18:51:33 crc kubenswrapper[4813]: I0219 18:51:33.462174 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c77d229-1f7e-432b-b8e2-f1bcf0561b04","Type":"ContainerStarted","Data":"3276e3e3d1d94da8a539a256d48c0499afb8f5bf30a7a6aebcccaf88b5c43bcd"} Feb 19 18:51:33 crc kubenswrapper[4813]: I0219 18:51:33.500678 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4ca3e22-9d6b-4872-bbec-ad276a27540f" path="/var/lib/kubelet/pods/a4ca3e22-9d6b-4872-bbec-ad276a27540f/volumes" Feb 19 18:51:33 crc kubenswrapper[4813]: I0219 18:51:33.504628 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5046001650000003 podStartE2EDuration="2.504600165s" podCreationTimestamp="2026-02-19 18:51:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:51:33.491406448 +0000 UTC m=+1312.716847069" watchObservedRunningTime="2026-02-19 18:51:33.504600165 +0000 UTC m=+1312.730040736" Feb 19 18:51:35 crc kubenswrapper[4813]: I0219 18:51:35.799081 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 18:51:36 crc kubenswrapper[4813]: I0219 18:51:36.360044 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 18:51:36 crc kubenswrapper[4813]: I0219 18:51:36.784346 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 18:51:36 crc kubenswrapper[4813]: I0219 18:51:36.785115 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 18:51:37 crc 
kubenswrapper[4813]: I0219 18:51:37.788916 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 18:51:37 crc kubenswrapper[4813]: I0219 18:51:37.799135 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8cad7f3d-9640-4949-bd7a-11407f2c8f97" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 18:51:37 crc kubenswrapper[4813]: I0219 18:51:37.799134 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8cad7f3d-9640-4949-bd7a-11407f2c8f97" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 18:51:40 crc kubenswrapper[4813]: I0219 18:51:40.364219 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 18:51:40 crc kubenswrapper[4813]: I0219 18:51:40.365088 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="42255dff-2745-4bb9-a1fa-9c6f39327204" containerName="kube-state-metrics" containerID="cri-o://d4cd053903955edd8e43f97062778d55b27af38ad61137ea07d21f50890b3833" gracePeriod=30 Feb 19 18:51:40 crc kubenswrapper[4813]: I0219 18:51:40.560079 4813 generic.go:334] "Generic (PLEG): container finished" podID="42255dff-2745-4bb9-a1fa-9c6f39327204" containerID="d4cd053903955edd8e43f97062778d55b27af38ad61137ea07d21f50890b3833" exitCode=2 Feb 19 18:51:40 crc kubenswrapper[4813]: I0219 18:51:40.560117 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42255dff-2745-4bb9-a1fa-9c6f39327204","Type":"ContainerDied","Data":"d4cd053903955edd8e43f97062778d55b27af38ad61137ea07d21f50890b3833"} Feb 19 18:51:40 
crc kubenswrapper[4813]: I0219 18:51:40.798470 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 18:51:40 crc kubenswrapper[4813]: I0219 18:51:40.834575 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 18:51:40 crc kubenswrapper[4813]: I0219 18:51:40.912532 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.061680 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m8gq\" (UniqueName: \"kubernetes.io/projected/42255dff-2745-4bb9-a1fa-9c6f39327204-kube-api-access-9m8gq\") pod \"42255dff-2745-4bb9-a1fa-9c6f39327204\" (UID: \"42255dff-2745-4bb9-a1fa-9c6f39327204\") " Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.079128 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42255dff-2745-4bb9-a1fa-9c6f39327204-kube-api-access-9m8gq" (OuterVolumeSpecName: "kube-api-access-9m8gq") pod "42255dff-2745-4bb9-a1fa-9c6f39327204" (UID: "42255dff-2745-4bb9-a1fa-9c6f39327204"). InnerVolumeSpecName "kube-api-access-9m8gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.163930 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m8gq\" (UniqueName: \"kubernetes.io/projected/42255dff-2745-4bb9-a1fa-9c6f39327204-kube-api-access-9m8gq\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.570175 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.570188 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42255dff-2745-4bb9-a1fa-9c6f39327204","Type":"ContainerDied","Data":"00d30e9d61135a1f95e640b8f6f5f43de02fa70ca088ef4846071dd7e2087afa"} Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.570295 4813 scope.go:117] "RemoveContainer" containerID="d4cd053903955edd8e43f97062778d55b27af38ad61137ea07d21f50890b3833" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.598593 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.618770 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.640769 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.654975 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 18:51:41 crc kubenswrapper[4813]: E0219 18:51:41.655489 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42255dff-2745-4bb9-a1fa-9c6f39327204" containerName="kube-state-metrics" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.655519 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="42255dff-2745-4bb9-a1fa-9c6f39327204" containerName="kube-state-metrics" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.655767 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="42255dff-2745-4bb9-a1fa-9c6f39327204" containerName="kube-state-metrics" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.656555 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.659107 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.659245 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.667559 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.786872 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " pod="openstack/kube-state-metrics-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.786932 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " pod="openstack/kube-state-metrics-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.786991 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzspb\" (UniqueName: \"kubernetes.io/projected/2ca7379e-357b-4246-a820-c1aed48b722e-kube-api-access-nzspb\") pod \"kube-state-metrics-0\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " pod="openstack/kube-state-metrics-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.787073 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " pod="openstack/kube-state-metrics-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.889005 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " pod="openstack/kube-state-metrics-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.889071 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " pod="openstack/kube-state-metrics-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.889094 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzspb\" (UniqueName: \"kubernetes.io/projected/2ca7379e-357b-4246-a820-c1aed48b722e-kube-api-access-nzspb\") pod \"kube-state-metrics-0\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " pod="openstack/kube-state-metrics-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.889178 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " pod="openstack/kube-state-metrics-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.893604 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " pod="openstack/kube-state-metrics-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.894011 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " pod="openstack/kube-state-metrics-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.896055 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " pod="openstack/kube-state-metrics-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.898289 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.898333 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.913460 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzspb\" (UniqueName: \"kubernetes.io/projected/2ca7379e-357b-4246-a820-c1aed48b722e-kube-api-access-nzspb\") pod \"kube-state-metrics-0\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " pod="openstack/kube-state-metrics-0" Feb 19 18:51:41 crc kubenswrapper[4813]: I0219 18:51:41.991125 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 18:51:42 crc kubenswrapper[4813]: I0219 18:51:42.275767 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:42 crc kubenswrapper[4813]: I0219 18:51:42.276359 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="ceilometer-central-agent" containerID="cri-o://ab1853900bd306abf16dd54781d641cac8027e702677508d3cc7c2e1d0a2ddb2" gracePeriod=30 Feb 19 18:51:42 crc kubenswrapper[4813]: I0219 18:51:42.276815 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="proxy-httpd" containerID="cri-o://2d1006a4f53f89f14386fb997d2b8b3ec2b732e5dcdb7115a52aff9373a2b497" gracePeriod=30 Feb 19 18:51:42 crc kubenswrapper[4813]: I0219 18:51:42.276869 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="sg-core" containerID="cri-o://163b6a0fe25e1d50e784cbe44b2dec473230f88d0f1845044d6b162853450091" gracePeriod=30 Feb 19 18:51:42 crc kubenswrapper[4813]: I0219 18:51:42.276902 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="ceilometer-notification-agent" containerID="cri-o://994efa1617047ad5bf4f6343264fd62528e456b0a7f377bf6017d0bb47ed4006" gracePeriod=30 Feb 19 18:51:42 crc kubenswrapper[4813]: I0219 18:51:42.509130 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 18:51:42 crc kubenswrapper[4813]: W0219 18:51:42.509632 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ca7379e_357b_4246_a820_c1aed48b722e.slice/crio-3f0185052a318f974a5da87e5ebd410f14733d9f0b6ef3231812d44706fbc1e1 WatchSource:0}: Error finding container 3f0185052a318f974a5da87e5ebd410f14733d9f0b6ef3231812d44706fbc1e1: Status 404 returned error can't find the container with id 3f0185052a318f974a5da87e5ebd410f14733d9f0b6ef3231812d44706fbc1e1 Feb 19 18:51:42 crc kubenswrapper[4813]: I0219 18:51:42.511808 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 18:51:42 crc kubenswrapper[4813]: E0219 18:51:42.565103 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49fa43f1_0aa1_4f1c_b39f_948acdce4825.slice/crio-conmon-2d1006a4f53f89f14386fb997d2b8b3ec2b732e5dcdb7115a52aff9373a2b497.scope\": RecentStats: unable to find data in memory cache]" Feb 19 18:51:42 crc kubenswrapper[4813]: I0219 18:51:42.578939 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2ca7379e-357b-4246-a820-c1aed48b722e","Type":"ContainerStarted","Data":"3f0185052a318f974a5da87e5ebd410f14733d9f0b6ef3231812d44706fbc1e1"} Feb 19 18:51:42 crc kubenswrapper[4813]: I0219 18:51:42.581206 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49fa43f1-0aa1-4f1c-b39f-948acdce4825","Type":"ContainerDied","Data":"2d1006a4f53f89f14386fb997d2b8b3ec2b732e5dcdb7115a52aff9373a2b497"} Feb 19 18:51:42 crc kubenswrapper[4813]: I0219 18:51:42.581288 4813 generic.go:334] "Generic (PLEG): container finished" podID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerID="2d1006a4f53f89f14386fb997d2b8b3ec2b732e5dcdb7115a52aff9373a2b497" exitCode=0 Feb 19 18:51:42 crc kubenswrapper[4813]: I0219 18:51:42.581323 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerID="163b6a0fe25e1d50e784cbe44b2dec473230f88d0f1845044d6b162853450091" exitCode=2 Feb 19 18:51:42 crc kubenswrapper[4813]: I0219 18:51:42.581373 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49fa43f1-0aa1-4f1c-b39f-948acdce4825","Type":"ContainerDied","Data":"163b6a0fe25e1d50e784cbe44b2dec473230f88d0f1845044d6b162853450091"} Feb 19 18:51:42 crc kubenswrapper[4813]: I0219 18:51:42.984200 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8c77d229-1f7e-432b-b8e2-f1bcf0561b04" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 18:51:42 crc kubenswrapper[4813]: I0219 18:51:42.984211 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8c77d229-1f7e-432b-b8e2-f1bcf0561b04" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 18:51:43 crc kubenswrapper[4813]: I0219 18:51:43.484135 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42255dff-2745-4bb9-a1fa-9c6f39327204" path="/var/lib/kubelet/pods/42255dff-2745-4bb9-a1fa-9c6f39327204/volumes" Feb 19 18:51:43 crc kubenswrapper[4813]: I0219 18:51:43.593699 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2ca7379e-357b-4246-a820-c1aed48b722e","Type":"ContainerStarted","Data":"a32a6d2b95db3921a0795c8dfb6c3f6ac4703950bf4f5218f53c1129cb7f36d4"} Feb 19 18:51:43 crc kubenswrapper[4813]: I0219 18:51:43.594920 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 18:51:43 crc kubenswrapper[4813]: I0219 18:51:43.598483 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerID="ab1853900bd306abf16dd54781d641cac8027e702677508d3cc7c2e1d0a2ddb2" exitCode=0 Feb 19 18:51:43 crc kubenswrapper[4813]: I0219 18:51:43.598563 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49fa43f1-0aa1-4f1c-b39f-948acdce4825","Type":"ContainerDied","Data":"ab1853900bd306abf16dd54781d641cac8027e702677508d3cc7c2e1d0a2ddb2"} Feb 19 18:51:43 crc kubenswrapper[4813]: I0219 18:51:43.619582 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.196799125 podStartE2EDuration="2.619558341s" podCreationTimestamp="2026-02-19 18:51:41 +0000 UTC" firstStartedPulling="2026-02-19 18:51:42.511470635 +0000 UTC m=+1321.736911176" lastFinishedPulling="2026-02-19 18:51:42.934229851 +0000 UTC m=+1322.159670392" observedRunningTime="2026-02-19 18:51:43.612088401 +0000 UTC m=+1322.837528952" watchObservedRunningTime="2026-02-19 18:51:43.619558341 +0000 UTC m=+1322.844998882" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.634033 4813 generic.go:334] "Generic (PLEG): container finished" podID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerID="994efa1617047ad5bf4f6343264fd62528e456b0a7f377bf6017d0bb47ed4006" exitCode=0 Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.635241 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49fa43f1-0aa1-4f1c-b39f-948acdce4825","Type":"ContainerDied","Data":"994efa1617047ad5bf4f6343264fd62528e456b0a7f377bf6017d0bb47ed4006"} Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.635419 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49fa43f1-0aa1-4f1c-b39f-948acdce4825","Type":"ContainerDied","Data":"77e065478f7b87b293a5e0aa8181a43826148acd0af366e0ec3589bffc39c6a6"} Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.635452 4813 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="77e065478f7b87b293a5e0aa8181a43826148acd0af366e0ec3589bffc39c6a6" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.710480 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.789984 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.796459 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.801787 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.884754 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l5ts\" (UniqueName: \"kubernetes.io/projected/49fa43f1-0aa1-4f1c-b39f-948acdce4825-kube-api-access-7l5ts\") pod \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.884828 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-sg-core-conf-yaml\") pod \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.884853 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-combined-ca-bundle\") pod \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.885017 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49fa43f1-0aa1-4f1c-b39f-948acdce4825-run-httpd\") pod \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.885049 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49fa43f1-0aa1-4f1c-b39f-948acdce4825-log-httpd\") pod \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.885125 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-config-data\") pod \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.885150 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-scripts\") pod \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\" (UID: \"49fa43f1-0aa1-4f1c-b39f-948acdce4825\") " Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.885554 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49fa43f1-0aa1-4f1c-b39f-948acdce4825-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "49fa43f1-0aa1-4f1c-b39f-948acdce4825" (UID: "49fa43f1-0aa1-4f1c-b39f-948acdce4825"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.886108 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49fa43f1-0aa1-4f1c-b39f-948acdce4825-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.887091 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49fa43f1-0aa1-4f1c-b39f-948acdce4825-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "49fa43f1-0aa1-4f1c-b39f-948acdce4825" (UID: "49fa43f1-0aa1-4f1c-b39f-948acdce4825"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.890303 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-scripts" (OuterVolumeSpecName: "scripts") pod "49fa43f1-0aa1-4f1c-b39f-948acdce4825" (UID: "49fa43f1-0aa1-4f1c-b39f-948acdce4825"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.892216 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49fa43f1-0aa1-4f1c-b39f-948acdce4825-kube-api-access-7l5ts" (OuterVolumeSpecName: "kube-api-access-7l5ts") pod "49fa43f1-0aa1-4f1c-b39f-948acdce4825" (UID: "49fa43f1-0aa1-4f1c-b39f-948acdce4825"). InnerVolumeSpecName "kube-api-access-7l5ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.925646 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "49fa43f1-0aa1-4f1c-b39f-948acdce4825" (UID: "49fa43f1-0aa1-4f1c-b39f-948acdce4825"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.958753 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49fa43f1-0aa1-4f1c-b39f-948acdce4825" (UID: "49fa43f1-0aa1-4f1c-b39f-948acdce4825"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.988149 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49fa43f1-0aa1-4f1c-b39f-948acdce4825-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.988187 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.988200 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l5ts\" (UniqueName: \"kubernetes.io/projected/49fa43f1-0aa1-4f1c-b39f-948acdce4825-kube-api-access-7l5ts\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.988215 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:46 crc kubenswrapper[4813]: I0219 18:51:46.988229 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.001300 4813 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-config-data" (OuterVolumeSpecName: "config-data") pod "49fa43f1-0aa1-4f1c-b39f-948acdce4825" (UID: "49fa43f1-0aa1-4f1c-b39f-948acdce4825"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.091402 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fa43f1-0aa1-4f1c-b39f-948acdce4825-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.643386 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.653717 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.710320 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.759497 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.779105 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:47 crc kubenswrapper[4813]: E0219 18:51:47.779697 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="ceilometer-notification-agent" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.779718 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="ceilometer-notification-agent" Feb 19 18:51:47 crc kubenswrapper[4813]: E0219 18:51:47.779728 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="sg-core" Feb 19 18:51:47 crc 
kubenswrapper[4813]: I0219 18:51:47.779735 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="sg-core" Feb 19 18:51:47 crc kubenswrapper[4813]: E0219 18:51:47.779748 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="proxy-httpd" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.779754 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="proxy-httpd" Feb 19 18:51:47 crc kubenswrapper[4813]: E0219 18:51:47.779767 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="ceilometer-central-agent" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.779773 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="ceilometer-central-agent" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.779994 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="ceilometer-notification-agent" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.780015 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="proxy-httpd" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.780027 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="ceilometer-central-agent" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.780043 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" containerName="sg-core" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.783419 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.786815 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.788067 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.788592 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.788799 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.917236 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.917286 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-config-data\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.917320 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.917340 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07daf784-b7fa-464b-a9c5-83fd04e1d613-log-httpd\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.917532 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww59v\" (UniqueName: \"kubernetes.io/projected/07daf784-b7fa-464b-a9c5-83fd04e1d613-kube-api-access-ww59v\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.917580 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-scripts\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.917731 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07daf784-b7fa-464b-a9c5-83fd04e1d613-run-httpd\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:47 crc kubenswrapper[4813]: I0219 18:51:47.917974 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.019313 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.019384 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.019416 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-config-data\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.019457 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.019480 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07daf784-b7fa-464b-a9c5-83fd04e1d613-log-httpd\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.019531 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww59v\" (UniqueName: \"kubernetes.io/projected/07daf784-b7fa-464b-a9c5-83fd04e1d613-kube-api-access-ww59v\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.019558 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-scripts\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.019601 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07daf784-b7fa-464b-a9c5-83fd04e1d613-run-httpd\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.020083 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07daf784-b7fa-464b-a9c5-83fd04e1d613-log-httpd\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.020137 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07daf784-b7fa-464b-a9c5-83fd04e1d613-run-httpd\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.024516 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.025205 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 
18:51:48.025560 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.032854 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-config-data\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.035687 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-scripts\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.051038 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww59v\" (UniqueName: \"kubernetes.io/projected/07daf784-b7fa-464b-a9c5-83fd04e1d613-kube-api-access-ww59v\") pod \"ceilometer-0\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.103354 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.629053 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:48 crc kubenswrapper[4813]: W0219 18:51:48.637277 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07daf784_b7fa_464b_a9c5_83fd04e1d613.slice/crio-c6bd95b8f3665f406a2f4fe68255ddcc9bca016080cf3b7f3f2e881bee39df2f WatchSource:0}: Error finding container c6bd95b8f3665f406a2f4fe68255ddcc9bca016080cf3b7f3f2e881bee39df2f: Status 404 returned error can't find the container with id c6bd95b8f3665f406a2f4fe68255ddcc9bca016080cf3b7f3f2e881bee39df2f Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.653702 4813 generic.go:334] "Generic (PLEG): container finished" podID="b6d615b6-3d85-40c9-8ae8-81a27f792856" containerID="67373de9f836bbcc9734c63987d52b1116dea47802d978e06050d215a0cf0e97" exitCode=137 Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.653753 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b6d615b6-3d85-40c9-8ae8-81a27f792856","Type":"ContainerDied","Data":"67373de9f836bbcc9734c63987d52b1116dea47802d978e06050d215a0cf0e97"} Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.654142 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b6d615b6-3d85-40c9-8ae8-81a27f792856","Type":"ContainerDied","Data":"8c3fc201bc12a4c48b77abfd07c8b6e8e9ce09c51625e2e1debbb1466c27ed65"} Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.654158 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c3fc201bc12a4c48b77abfd07c8b6e8e9ce09c51625e2e1debbb1466c27ed65" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.655415 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"07daf784-b7fa-464b-a9c5-83fd04e1d613","Type":"ContainerStarted","Data":"c6bd95b8f3665f406a2f4fe68255ddcc9bca016080cf3b7f3f2e881bee39df2f"} Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.710876 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.832256 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t5qh\" (UniqueName: \"kubernetes.io/projected/b6d615b6-3d85-40c9-8ae8-81a27f792856-kube-api-access-7t5qh\") pod \"b6d615b6-3d85-40c9-8ae8-81a27f792856\" (UID: \"b6d615b6-3d85-40c9-8ae8-81a27f792856\") " Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.832528 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d615b6-3d85-40c9-8ae8-81a27f792856-combined-ca-bundle\") pod \"b6d615b6-3d85-40c9-8ae8-81a27f792856\" (UID: \"b6d615b6-3d85-40c9-8ae8-81a27f792856\") " Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.832630 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d615b6-3d85-40c9-8ae8-81a27f792856-config-data\") pod \"b6d615b6-3d85-40c9-8ae8-81a27f792856\" (UID: \"b6d615b6-3d85-40c9-8ae8-81a27f792856\") " Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.844253 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d615b6-3d85-40c9-8ae8-81a27f792856-kube-api-access-7t5qh" (OuterVolumeSpecName: "kube-api-access-7t5qh") pod "b6d615b6-3d85-40c9-8ae8-81a27f792856" (UID: "b6d615b6-3d85-40c9-8ae8-81a27f792856"). InnerVolumeSpecName "kube-api-access-7t5qh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.872616 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6d615b6-3d85-40c9-8ae8-81a27f792856-config-data" (OuterVolumeSpecName: "config-data") pod "b6d615b6-3d85-40c9-8ae8-81a27f792856" (UID: "b6d615b6-3d85-40c9-8ae8-81a27f792856"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.874001 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6d615b6-3d85-40c9-8ae8-81a27f792856-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6d615b6-3d85-40c9-8ae8-81a27f792856" (UID: "b6d615b6-3d85-40c9-8ae8-81a27f792856"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.935406 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t5qh\" (UniqueName: \"kubernetes.io/projected/b6d615b6-3d85-40c9-8ae8-81a27f792856-kube-api-access-7t5qh\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.935445 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d615b6-3d85-40c9-8ae8-81a27f792856-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:48 crc kubenswrapper[4813]: I0219 18:51:48.935457 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d615b6-3d85-40c9-8ae8-81a27f792856-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.483823 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fa43f1-0aa1-4f1c-b39f-948acdce4825" path="/var/lib/kubelet/pods/49fa43f1-0aa1-4f1c-b39f-948acdce4825/volumes" Feb 
19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.664886 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.664909 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07daf784-b7fa-464b-a9c5-83fd04e1d613","Type":"ContainerStarted","Data":"f3ad2f79a578ffeb0ed68cf52e4f9b93052de5e6bda283fc5c55c02a81d77e81"} Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.687598 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.695247 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.713724 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 18:51:49 crc kubenswrapper[4813]: E0219 18:51:49.714168 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d615b6-3d85-40c9-8ae8-81a27f792856" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.714194 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d615b6-3d85-40c9-8ae8-81a27f792856" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.714464 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6d615b6-3d85-40c9-8ae8-81a27f792856" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.715207 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.717587 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.726292 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.726404 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.732164 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.871219 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.871455 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.871501 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:49 crc kubenswrapper[4813]: 
I0219 18:51:49.871533 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8667\" (UniqueName: \"kubernetes.io/projected/4dac18d7-a3cc-46ce-98be-bf34e69398d7-kube-api-access-q8667\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.871567 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.973121 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.973161 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.973198 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8667\" (UniqueName: \"kubernetes.io/projected/4dac18d7-a3cc-46ce-98be-bf34e69398d7-kube-api-access-q8667\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.973221 
4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.973263 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.987587 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.989658 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:49 crc kubenswrapper[4813]: I0219 18:51:49.990494 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:50 crc kubenswrapper[4813]: I0219 18:51:50.013476 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:50 crc kubenswrapper[4813]: I0219 18:51:50.014571 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8667\" (UniqueName: \"kubernetes.io/projected/4dac18d7-a3cc-46ce-98be-bf34e69398d7-kube-api-access-q8667\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:50 crc kubenswrapper[4813]: I0219 18:51:50.039445 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:50 crc kubenswrapper[4813]: I0219 18:51:50.590464 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 18:51:50 crc kubenswrapper[4813]: W0219 18:51:50.595393 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dac18d7_a3cc_46ce_98be_bf34e69398d7.slice/crio-946a4e63be280d303e4aa9c6612083f5289f995434927de0c53b54e9f89540f2 WatchSource:0}: Error finding container 946a4e63be280d303e4aa9c6612083f5289f995434927de0c53b54e9f89540f2: Status 404 returned error can't find the container with id 946a4e63be280d303e4aa9c6612083f5289f995434927de0c53b54e9f89540f2 Feb 19 18:51:50 crc kubenswrapper[4813]: I0219 18:51:50.675365 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4dac18d7-a3cc-46ce-98be-bf34e69398d7","Type":"ContainerStarted","Data":"946a4e63be280d303e4aa9c6612083f5289f995434927de0c53b54e9f89540f2"} Feb 19 18:51:50 crc kubenswrapper[4813]: I0219 18:51:50.680610 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"07daf784-b7fa-464b-a9c5-83fd04e1d613","Type":"ContainerStarted","Data":"688602f2d85cfeffc6aa32dae1f22b6ceb210783a79a85b73ac9280c08233d03"} Feb 19 18:51:51 crc kubenswrapper[4813]: I0219 18:51:51.490021 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6d615b6-3d85-40c9-8ae8-81a27f792856" path="/var/lib/kubelet/pods/b6d615b6-3d85-40c9-8ae8-81a27f792856/volumes" Feb 19 18:51:51 crc kubenswrapper[4813]: I0219 18:51:51.695718 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4dac18d7-a3cc-46ce-98be-bf34e69398d7","Type":"ContainerStarted","Data":"52e092f91f830324e6e20c3dd977a8a1a5e42acbc30389990cc49f2b8250eba6"} Feb 19 18:51:51 crc kubenswrapper[4813]: I0219 18:51:51.702344 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07daf784-b7fa-464b-a9c5-83fd04e1d613","Type":"ContainerStarted","Data":"4cab7fd3573415edaa9c20df5bb8609864000d9fee036a70e42b6a354fc19076"} Feb 19 18:51:51 crc kubenswrapper[4813]: I0219 18:51:51.729335 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.7293138949999998 podStartE2EDuration="2.729313895s" podCreationTimestamp="2026-02-19 18:51:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:51:51.717859621 +0000 UTC m=+1330.943300162" watchObservedRunningTime="2026-02-19 18:51:51.729313895 +0000 UTC m=+1330.954754446" Feb 19 18:51:51 crc kubenswrapper[4813]: I0219 18:51:51.902620 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 18:51:51 crc kubenswrapper[4813]: I0219 18:51:51.903213 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 18:51:51 crc kubenswrapper[4813]: I0219 18:51:51.907721 4813 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 18:51:51 crc kubenswrapper[4813]: I0219 18:51:51.919546 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 18:51:52 crc kubenswrapper[4813]: I0219 18:51:52.005924 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 18:51:52 crc kubenswrapper[4813]: I0219 18:51:52.718924 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 18:51:52 crc kubenswrapper[4813]: I0219 18:51:52.722287 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 18:51:52 crc kubenswrapper[4813]: I0219 18:51:52.920412 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7677694455-nj2vp"] Feb 19 18:51:52 crc kubenswrapper[4813]: I0219 18:51:52.921808 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:52 crc kubenswrapper[4813]: I0219 18:51:52.933173 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7677694455-nj2vp"] Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.051027 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-dns-svc\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.051437 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkczs\" (UniqueName: \"kubernetes.io/projected/279e5b26-1a20-4809-8dfe-ff290d191f38-kube-api-access-nkczs\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.051503 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-dns-swift-storage-0\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.051576 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-ovsdbserver-nb\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.051622 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-ovsdbserver-sb\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.051658 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-config\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.153354 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkczs\" (UniqueName: \"kubernetes.io/projected/279e5b26-1a20-4809-8dfe-ff290d191f38-kube-api-access-nkczs\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.153422 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-dns-swift-storage-0\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.153492 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-ovsdbserver-nb\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.153547 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-ovsdbserver-sb\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.153583 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-config\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.153624 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-dns-svc\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.154923 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-config\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.155128 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-ovsdbserver-sb\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.155132 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-dns-swift-storage-0\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.155409 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-dns-svc\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.155547 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-ovsdbserver-nb\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.175262 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkczs\" (UniqueName: \"kubernetes.io/projected/279e5b26-1a20-4809-8dfe-ff290d191f38-kube-api-access-nkczs\") pod \"dnsmasq-dns-7677694455-nj2vp\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.248161 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.728099 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07daf784-b7fa-464b-a9c5-83fd04e1d613","Type":"ContainerStarted","Data":"de9c8ef11e401b0a562dedaca5582b7f9653e396f5c74e6b973932dd3799e944"} Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.728434 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.752638 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.79432783 podStartE2EDuration="6.752621836s" podCreationTimestamp="2026-02-19 18:51:47 +0000 UTC" firstStartedPulling="2026-02-19 18:51:48.640183942 +0000 UTC m=+1327.865624483" lastFinishedPulling="2026-02-19 18:51:52.598477908 +0000 UTC m=+1331.823918489" observedRunningTime="2026-02-19 18:51:53.747164268 +0000 UTC m=+1332.972604809" watchObservedRunningTime="2026-02-19 18:51:53.752621836 +0000 UTC m=+1332.978062377" Feb 19 18:51:53 crc kubenswrapper[4813]: W0219 18:51:53.764611 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod279e5b26_1a20_4809_8dfe_ff290d191f38.slice/crio-009455cd49df735df98dc2d03dafff77a9308fd2412de086b44ffb008b58643e WatchSource:0}: Error finding container 009455cd49df735df98dc2d03dafff77a9308fd2412de086b44ffb008b58643e: Status 404 returned error can't find the container with id 009455cd49df735df98dc2d03dafff77a9308fd2412de086b44ffb008b58643e Feb 19 18:51:53 crc kubenswrapper[4813]: I0219 18:51:53.800707 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7677694455-nj2vp"] Feb 19 18:51:54 crc kubenswrapper[4813]: I0219 18:51:54.736314 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="279e5b26-1a20-4809-8dfe-ff290d191f38" containerID="bc5da64f47f86536bb11a20a54f840d076fee7c08731ee597fbd33d1e0a94d46" exitCode=0 Feb 19 18:51:54 crc kubenswrapper[4813]: I0219 18:51:54.736406 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7677694455-nj2vp" event={"ID":"279e5b26-1a20-4809-8dfe-ff290d191f38","Type":"ContainerDied","Data":"bc5da64f47f86536bb11a20a54f840d076fee7c08731ee597fbd33d1e0a94d46"} Feb 19 18:51:54 crc kubenswrapper[4813]: I0219 18:51:54.736725 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7677694455-nj2vp" event={"ID":"279e5b26-1a20-4809-8dfe-ff290d191f38","Type":"ContainerStarted","Data":"009455cd49df735df98dc2d03dafff77a9308fd2412de086b44ffb008b58643e"} Feb 19 18:51:55 crc kubenswrapper[4813]: I0219 18:51:55.040836 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:51:55 crc kubenswrapper[4813]: I0219 18:51:55.382131 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:55 crc kubenswrapper[4813]: I0219 18:51:55.455763 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 18:51:55 crc kubenswrapper[4813]: I0219 18:51:55.747162 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7677694455-nj2vp" event={"ID":"279e5b26-1a20-4809-8dfe-ff290d191f38","Type":"ContainerStarted","Data":"57b25170e965640c68550b7bface39f788f95c0a649a1c484dd11cc853485b0e"} Feb 19 18:51:55 crc kubenswrapper[4813]: I0219 18:51:55.747323 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="ceilometer-central-agent" containerID="cri-o://f3ad2f79a578ffeb0ed68cf52e4f9b93052de5e6bda283fc5c55c02a81d77e81" gracePeriod=30 Feb 19 18:51:55 crc kubenswrapper[4813]: I0219 18:51:55.747367 4813 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="proxy-httpd" containerID="cri-o://de9c8ef11e401b0a562dedaca5582b7f9653e396f5c74e6b973932dd3799e944" gracePeriod=30 Feb 19 18:51:55 crc kubenswrapper[4813]: I0219 18:51:55.747393 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="sg-core" containerID="cri-o://4cab7fd3573415edaa9c20df5bb8609864000d9fee036a70e42b6a354fc19076" gracePeriod=30 Feb 19 18:51:55 crc kubenswrapper[4813]: I0219 18:51:55.747378 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="ceilometer-notification-agent" containerID="cri-o://688602f2d85cfeffc6aa32dae1f22b6ceb210783a79a85b73ac9280c08233d03" gracePeriod=30 Feb 19 18:51:55 crc kubenswrapper[4813]: I0219 18:51:55.747718 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8c77d229-1f7e-432b-b8e2-f1bcf0561b04" containerName="nova-api-log" containerID="cri-o://d86f14bde71bd4136682115120b1e45b69d7aa9076c425b5939eca01739e7f2a" gracePeriod=30 Feb 19 18:51:55 crc kubenswrapper[4813]: I0219 18:51:55.747820 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8c77d229-1f7e-432b-b8e2-f1bcf0561b04" containerName="nova-api-api" containerID="cri-o://1b9f3c778313eb9d2af44946a1f3868ac0120afcac2b852abaf93686eb2eec5f" gracePeriod=30 Feb 19 18:51:55 crc kubenswrapper[4813]: I0219 18:51:55.784420 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7677694455-nj2vp" podStartSLOduration=3.784398968 podStartE2EDuration="3.784398968s" podCreationTimestamp="2026-02-19 18:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:51:55.777692341 +0000 UTC m=+1335.003132892" watchObservedRunningTime="2026-02-19 18:51:55.784398968 +0000 UTC m=+1335.009839519" Feb 19 18:51:56 crc kubenswrapper[4813]: I0219 18:51:56.762769 4813 generic.go:334] "Generic (PLEG): container finished" podID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerID="de9c8ef11e401b0a562dedaca5582b7f9653e396f5c74e6b973932dd3799e944" exitCode=0 Feb 19 18:51:56 crc kubenswrapper[4813]: I0219 18:51:56.763012 4813 generic.go:334] "Generic (PLEG): container finished" podID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerID="4cab7fd3573415edaa9c20df5bb8609864000d9fee036a70e42b6a354fc19076" exitCode=2 Feb 19 18:51:56 crc kubenswrapper[4813]: I0219 18:51:56.763020 4813 generic.go:334] "Generic (PLEG): container finished" podID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerID="688602f2d85cfeffc6aa32dae1f22b6ceb210783a79a85b73ac9280c08233d03" exitCode=0 Feb 19 18:51:56 crc kubenswrapper[4813]: I0219 18:51:56.762887 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07daf784-b7fa-464b-a9c5-83fd04e1d613","Type":"ContainerDied","Data":"de9c8ef11e401b0a562dedaca5582b7f9653e396f5c74e6b973932dd3799e944"} Feb 19 18:51:56 crc kubenswrapper[4813]: I0219 18:51:56.763097 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07daf784-b7fa-464b-a9c5-83fd04e1d613","Type":"ContainerDied","Data":"4cab7fd3573415edaa9c20df5bb8609864000d9fee036a70e42b6a354fc19076"} Feb 19 18:51:56 crc kubenswrapper[4813]: I0219 18:51:56.763113 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07daf784-b7fa-464b-a9c5-83fd04e1d613","Type":"ContainerDied","Data":"688602f2d85cfeffc6aa32dae1f22b6ceb210783a79a85b73ac9280c08233d03"} Feb 19 18:51:56 crc kubenswrapper[4813]: I0219 18:51:56.765812 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="8c77d229-1f7e-432b-b8e2-f1bcf0561b04" containerID="d86f14bde71bd4136682115120b1e45b69d7aa9076c425b5939eca01739e7f2a" exitCode=143 Feb 19 18:51:56 crc kubenswrapper[4813]: I0219 18:51:56.766241 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c77d229-1f7e-432b-b8e2-f1bcf0561b04","Type":"ContainerDied","Data":"d86f14bde71bd4136682115120b1e45b69d7aa9076c425b5939eca01739e7f2a"} Feb 19 18:51:56 crc kubenswrapper[4813]: I0219 18:51:56.766332 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.165193 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.231671 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-sg-core-conf-yaml\") pod \"07daf784-b7fa-464b-a9c5-83fd04e1d613\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.231909 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07daf784-b7fa-464b-a9c5-83fd04e1d613-log-httpd\") pod \"07daf784-b7fa-464b-a9c5-83fd04e1d613\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.232165 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07daf784-b7fa-464b-a9c5-83fd04e1d613-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "07daf784-b7fa-464b-a9c5-83fd04e1d613" (UID: "07daf784-b7fa-464b-a9c5-83fd04e1d613"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.232209 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07daf784-b7fa-464b-a9c5-83fd04e1d613-run-httpd\") pod \"07daf784-b7fa-464b-a9c5-83fd04e1d613\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.232278 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-ceilometer-tls-certs\") pod \"07daf784-b7fa-464b-a9c5-83fd04e1d613\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.232315 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-combined-ca-bundle\") pod \"07daf784-b7fa-464b-a9c5-83fd04e1d613\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.232362 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-scripts\") pod \"07daf784-b7fa-464b-a9c5-83fd04e1d613\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.232417 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-config-data\") pod \"07daf784-b7fa-464b-a9c5-83fd04e1d613\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.232512 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww59v\" (UniqueName: 
\"kubernetes.io/projected/07daf784-b7fa-464b-a9c5-83fd04e1d613-kube-api-access-ww59v\") pod \"07daf784-b7fa-464b-a9c5-83fd04e1d613\" (UID: \"07daf784-b7fa-464b-a9c5-83fd04e1d613\") " Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.232540 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07daf784-b7fa-464b-a9c5-83fd04e1d613-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "07daf784-b7fa-464b-a9c5-83fd04e1d613" (UID: "07daf784-b7fa-464b-a9c5-83fd04e1d613"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.233208 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07daf784-b7fa-464b-a9c5-83fd04e1d613-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.233241 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07daf784-b7fa-464b-a9c5-83fd04e1d613-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.241325 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07daf784-b7fa-464b-a9c5-83fd04e1d613-kube-api-access-ww59v" (OuterVolumeSpecName: "kube-api-access-ww59v") pod "07daf784-b7fa-464b-a9c5-83fd04e1d613" (UID: "07daf784-b7fa-464b-a9c5-83fd04e1d613"). InnerVolumeSpecName "kube-api-access-ww59v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.243800 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-scripts" (OuterVolumeSpecName: "scripts") pod "07daf784-b7fa-464b-a9c5-83fd04e1d613" (UID: "07daf784-b7fa-464b-a9c5-83fd04e1d613"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.270202 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "07daf784-b7fa-464b-a9c5-83fd04e1d613" (UID: "07daf784-b7fa-464b-a9c5-83fd04e1d613"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.297347 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "07daf784-b7fa-464b-a9c5-83fd04e1d613" (UID: "07daf784-b7fa-464b-a9c5-83fd04e1d613"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.331145 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07daf784-b7fa-464b-a9c5-83fd04e1d613" (UID: "07daf784-b7fa-464b-a9c5-83fd04e1d613"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.335059 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.335192 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.335252 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.335309 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.335396 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww59v\" (UniqueName: \"kubernetes.io/projected/07daf784-b7fa-464b-a9c5-83fd04e1d613-kube-api-access-ww59v\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.353816 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-config-data" (OuterVolumeSpecName: "config-data") pod "07daf784-b7fa-464b-a9c5-83fd04e1d613" (UID: "07daf784-b7fa-464b-a9c5-83fd04e1d613"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.436780 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07daf784-b7fa-464b-a9c5-83fd04e1d613-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.775637 4813 generic.go:334] "Generic (PLEG): container finished" podID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerID="f3ad2f79a578ffeb0ed68cf52e4f9b93052de5e6bda283fc5c55c02a81d77e81" exitCode=0 Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.775666 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07daf784-b7fa-464b-a9c5-83fd04e1d613","Type":"ContainerDied","Data":"f3ad2f79a578ffeb0ed68cf52e4f9b93052de5e6bda283fc5c55c02a81d77e81"} Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.775736 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.775993 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07daf784-b7fa-464b-a9c5-83fd04e1d613","Type":"ContainerDied","Data":"c6bd95b8f3665f406a2f4fe68255ddcc9bca016080cf3b7f3f2e881bee39df2f"} Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.776033 4813 scope.go:117] "RemoveContainer" containerID="de9c8ef11e401b0a562dedaca5582b7f9653e396f5c74e6b973932dd3799e944" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.810076 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.830588 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.841152 4813 scope.go:117] "RemoveContainer" containerID="4cab7fd3573415edaa9c20df5bb8609864000d9fee036a70e42b6a354fc19076" Feb 19 
18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.845118 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:57 crc kubenswrapper[4813]: E0219 18:51:57.845721 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="ceilometer-notification-agent" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.845757 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="ceilometer-notification-agent" Feb 19 18:51:57 crc kubenswrapper[4813]: E0219 18:51:57.845801 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="sg-core" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.845814 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="sg-core" Feb 19 18:51:57 crc kubenswrapper[4813]: E0219 18:51:57.845843 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="proxy-httpd" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.845858 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="proxy-httpd" Feb 19 18:51:57 crc kubenswrapper[4813]: E0219 18:51:57.845895 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="ceilometer-central-agent" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.845907 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="ceilometer-central-agent" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.846270 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="proxy-httpd" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 
18:51:57.846288 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="ceilometer-central-agent" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.846305 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="sg-core" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.846343 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" containerName="ceilometer-notification-agent" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.850657 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.855594 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.855715 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.855880 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.857657 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.876592 4813 scope.go:117] "RemoveContainer" containerID="688602f2d85cfeffc6aa32dae1f22b6ceb210783a79a85b73ac9280c08233d03" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.934238 4813 scope.go:117] "RemoveContainer" containerID="f3ad2f79a578ffeb0ed68cf52e4f9b93052de5e6bda283fc5c55c02a81d77e81" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.947817 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.947883 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.947908 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-scripts\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.947980 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-config-data\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.948063 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.948087 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fbcd22-57c9-4c44-99e8-f9307f26a525-run-httpd\") pod \"ceilometer-0\" (UID: 
\"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.948162 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fbcd22-57c9-4c44-99e8-f9307f26a525-log-httpd\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.948238 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv4nh\" (UniqueName: \"kubernetes.io/projected/f5fbcd22-57c9-4c44-99e8-f9307f26a525-kube-api-access-kv4nh\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.955285 4813 scope.go:117] "RemoveContainer" containerID="de9c8ef11e401b0a562dedaca5582b7f9653e396f5c74e6b973932dd3799e944" Feb 19 18:51:57 crc kubenswrapper[4813]: E0219 18:51:57.955770 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de9c8ef11e401b0a562dedaca5582b7f9653e396f5c74e6b973932dd3799e944\": container with ID starting with de9c8ef11e401b0a562dedaca5582b7f9653e396f5c74e6b973932dd3799e944 not found: ID does not exist" containerID="de9c8ef11e401b0a562dedaca5582b7f9653e396f5c74e6b973932dd3799e944" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.955821 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de9c8ef11e401b0a562dedaca5582b7f9653e396f5c74e6b973932dd3799e944"} err="failed to get container status \"de9c8ef11e401b0a562dedaca5582b7f9653e396f5c74e6b973932dd3799e944\": rpc error: code = NotFound desc = could not find container \"de9c8ef11e401b0a562dedaca5582b7f9653e396f5c74e6b973932dd3799e944\": container with ID starting with 
de9c8ef11e401b0a562dedaca5582b7f9653e396f5c74e6b973932dd3799e944 not found: ID does not exist" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.955858 4813 scope.go:117] "RemoveContainer" containerID="4cab7fd3573415edaa9c20df5bb8609864000d9fee036a70e42b6a354fc19076" Feb 19 18:51:57 crc kubenswrapper[4813]: E0219 18:51:57.956258 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cab7fd3573415edaa9c20df5bb8609864000d9fee036a70e42b6a354fc19076\": container with ID starting with 4cab7fd3573415edaa9c20df5bb8609864000d9fee036a70e42b6a354fc19076 not found: ID does not exist" containerID="4cab7fd3573415edaa9c20df5bb8609864000d9fee036a70e42b6a354fc19076" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.956286 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cab7fd3573415edaa9c20df5bb8609864000d9fee036a70e42b6a354fc19076"} err="failed to get container status \"4cab7fd3573415edaa9c20df5bb8609864000d9fee036a70e42b6a354fc19076\": rpc error: code = NotFound desc = could not find container \"4cab7fd3573415edaa9c20df5bb8609864000d9fee036a70e42b6a354fc19076\": container with ID starting with 4cab7fd3573415edaa9c20df5bb8609864000d9fee036a70e42b6a354fc19076 not found: ID does not exist" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.956307 4813 scope.go:117] "RemoveContainer" containerID="688602f2d85cfeffc6aa32dae1f22b6ceb210783a79a85b73ac9280c08233d03" Feb 19 18:51:57 crc kubenswrapper[4813]: E0219 18:51:57.956661 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"688602f2d85cfeffc6aa32dae1f22b6ceb210783a79a85b73ac9280c08233d03\": container with ID starting with 688602f2d85cfeffc6aa32dae1f22b6ceb210783a79a85b73ac9280c08233d03 not found: ID does not exist" containerID="688602f2d85cfeffc6aa32dae1f22b6ceb210783a79a85b73ac9280c08233d03" Feb 19 18:51:57 crc 
kubenswrapper[4813]: I0219 18:51:57.956696 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"688602f2d85cfeffc6aa32dae1f22b6ceb210783a79a85b73ac9280c08233d03"} err="failed to get container status \"688602f2d85cfeffc6aa32dae1f22b6ceb210783a79a85b73ac9280c08233d03\": rpc error: code = NotFound desc = could not find container \"688602f2d85cfeffc6aa32dae1f22b6ceb210783a79a85b73ac9280c08233d03\": container with ID starting with 688602f2d85cfeffc6aa32dae1f22b6ceb210783a79a85b73ac9280c08233d03 not found: ID does not exist" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.956722 4813 scope.go:117] "RemoveContainer" containerID="f3ad2f79a578ffeb0ed68cf52e4f9b93052de5e6bda283fc5c55c02a81d77e81" Feb 19 18:51:57 crc kubenswrapper[4813]: E0219 18:51:57.957053 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3ad2f79a578ffeb0ed68cf52e4f9b93052de5e6bda283fc5c55c02a81d77e81\": container with ID starting with f3ad2f79a578ffeb0ed68cf52e4f9b93052de5e6bda283fc5c55c02a81d77e81 not found: ID does not exist" containerID="f3ad2f79a578ffeb0ed68cf52e4f9b93052de5e6bda283fc5c55c02a81d77e81" Feb 19 18:51:57 crc kubenswrapper[4813]: I0219 18:51:57.957085 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3ad2f79a578ffeb0ed68cf52e4f9b93052de5e6bda283fc5c55c02a81d77e81"} err="failed to get container status \"f3ad2f79a578ffeb0ed68cf52e4f9b93052de5e6bda283fc5c55c02a81d77e81\": rpc error: code = NotFound desc = could not find container \"f3ad2f79a578ffeb0ed68cf52e4f9b93052de5e6bda283fc5c55c02a81d77e81\": container with ID starting with f3ad2f79a578ffeb0ed68cf52e4f9b93052de5e6bda283fc5c55c02a81d77e81 not found: ID does not exist" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.049677 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f5fbcd22-57c9-4c44-99e8-f9307f26a525-run-httpd\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.049752 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.049784 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fbcd22-57c9-4c44-99e8-f9307f26a525-log-httpd\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.050090 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv4nh\" (UniqueName: \"kubernetes.io/projected/f5fbcd22-57c9-4c44-99e8-f9307f26a525-kube-api-access-kv4nh\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.050293 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.050353 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 
crc kubenswrapper[4813]: I0219 18:51:58.050398 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-scripts\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.050452 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fbcd22-57c9-4c44-99e8-f9307f26a525-run-httpd\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.050502 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-config-data\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.050460 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fbcd22-57c9-4c44-99e8-f9307f26a525-log-httpd\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.055151 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.055534 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.056217 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.057201 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-config-data\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.058917 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-scripts\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.070211 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv4nh\" (UniqueName: \"kubernetes.io/projected/f5fbcd22-57c9-4c44-99e8-f9307f26a525-kube-api-access-kv4nh\") pod \"ceilometer-0\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.218609 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.689078 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:51:58 crc kubenswrapper[4813]: I0219 18:51:58.784633 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fbcd22-57c9-4c44-99e8-f9307f26a525","Type":"ContainerStarted","Data":"14f6b4502310c7c885fea7d916adbc5f44a2936e68de5dc51865381e200b057b"} Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.303219 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.378963 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-config-data\") pod \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.379054 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-logs\") pod \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.379126 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-combined-ca-bundle\") pod \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.379170 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh5mg\" (UniqueName: \"kubernetes.io/projected/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-kube-api-access-rh5mg\") pod 
\"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\" (UID: \"8c77d229-1f7e-432b-b8e2-f1bcf0561b04\") " Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.379647 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-logs" (OuterVolumeSpecName: "logs") pod "8c77d229-1f7e-432b-b8e2-f1bcf0561b04" (UID: "8c77d229-1f7e-432b-b8e2-f1bcf0561b04"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.392063 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-kube-api-access-rh5mg" (OuterVolumeSpecName: "kube-api-access-rh5mg") pod "8c77d229-1f7e-432b-b8e2-f1bcf0561b04" (UID: "8c77d229-1f7e-432b-b8e2-f1bcf0561b04"). InnerVolumeSpecName "kube-api-access-rh5mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.415933 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-config-data" (OuterVolumeSpecName: "config-data") pod "8c77d229-1f7e-432b-b8e2-f1bcf0561b04" (UID: "8c77d229-1f7e-432b-b8e2-f1bcf0561b04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.417751 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c77d229-1f7e-432b-b8e2-f1bcf0561b04" (UID: "8c77d229-1f7e-432b-b8e2-f1bcf0561b04"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.484903 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07daf784-b7fa-464b-a9c5-83fd04e1d613" path="/var/lib/kubelet/pods/07daf784-b7fa-464b-a9c5-83fd04e1d613/volumes" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.491906 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.491935 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.491952 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.491980 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh5mg\" (UniqueName: \"kubernetes.io/projected/8c77d229-1f7e-432b-b8e2-f1bcf0561b04-kube-api-access-rh5mg\") on node \"crc\" DevicePath \"\"" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.805368 4813 generic.go:334] "Generic (PLEG): container finished" podID="8c77d229-1f7e-432b-b8e2-f1bcf0561b04" containerID="1b9f3c778313eb9d2af44946a1f3868ac0120afcac2b852abaf93686eb2eec5f" exitCode=0 Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.805479 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c77d229-1f7e-432b-b8e2-f1bcf0561b04","Type":"ContainerDied","Data":"1b9f3c778313eb9d2af44946a1f3868ac0120afcac2b852abaf93686eb2eec5f"} Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.805510 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8c77d229-1f7e-432b-b8e2-f1bcf0561b04","Type":"ContainerDied","Data":"3276e3e3d1d94da8a539a256d48c0499afb8f5bf30a7a6aebcccaf88b5c43bcd"} Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.805532 4813 scope.go:117] "RemoveContainer" containerID="1b9f3c778313eb9d2af44946a1f3868ac0120afcac2b852abaf93686eb2eec5f" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.805641 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.812272 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fbcd22-57c9-4c44-99e8-f9307f26a525","Type":"ContainerStarted","Data":"b40ac96b08f97e5858b87830a251d48e4c373ca1dd61bf0a07e6db7e47f55c35"} Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.832880 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.859779 4813 scope.go:117] "RemoveContainer" containerID="d86f14bde71bd4136682115120b1e45b69d7aa9076c425b5939eca01739e7f2a" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.882365 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.907768 4813 scope.go:117] "RemoveContainer" containerID="1b9f3c778313eb9d2af44946a1f3868ac0120afcac2b852abaf93686eb2eec5f" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.909344 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 18:51:59 crc kubenswrapper[4813]: E0219 18:51:59.909685 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c77d229-1f7e-432b-b8e2-f1bcf0561b04" containerName="nova-api-api" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.909701 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8c77d229-1f7e-432b-b8e2-f1bcf0561b04" containerName="nova-api-api" Feb 19 18:51:59 crc kubenswrapper[4813]: E0219 18:51:59.909724 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c77d229-1f7e-432b-b8e2-f1bcf0561b04" containerName="nova-api-log" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.909731 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c77d229-1f7e-432b-b8e2-f1bcf0561b04" containerName="nova-api-log" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.909912 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c77d229-1f7e-432b-b8e2-f1bcf0561b04" containerName="nova-api-log" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.909941 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c77d229-1f7e-432b-b8e2-f1bcf0561b04" containerName="nova-api-api" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.910933 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 18:51:59 crc kubenswrapper[4813]: E0219 18:51:59.911951 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b9f3c778313eb9d2af44946a1f3868ac0120afcac2b852abaf93686eb2eec5f\": container with ID starting with 1b9f3c778313eb9d2af44946a1f3868ac0120afcac2b852abaf93686eb2eec5f not found: ID does not exist" containerID="1b9f3c778313eb9d2af44946a1f3868ac0120afcac2b852abaf93686eb2eec5f" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.911993 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9f3c778313eb9d2af44946a1f3868ac0120afcac2b852abaf93686eb2eec5f"} err="failed to get container status \"1b9f3c778313eb9d2af44946a1f3868ac0120afcac2b852abaf93686eb2eec5f\": rpc error: code = NotFound desc = could not find container \"1b9f3c778313eb9d2af44946a1f3868ac0120afcac2b852abaf93686eb2eec5f\": container with ID starting with 
1b9f3c778313eb9d2af44946a1f3868ac0120afcac2b852abaf93686eb2eec5f not found: ID does not exist" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.912014 4813 scope.go:117] "RemoveContainer" containerID="d86f14bde71bd4136682115120b1e45b69d7aa9076c425b5939eca01739e7f2a" Feb 19 18:51:59 crc kubenswrapper[4813]: E0219 18:51:59.912232 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d86f14bde71bd4136682115120b1e45b69d7aa9076c425b5939eca01739e7f2a\": container with ID starting with d86f14bde71bd4136682115120b1e45b69d7aa9076c425b5939eca01739e7f2a not found: ID does not exist" containerID="d86f14bde71bd4136682115120b1e45b69d7aa9076c425b5939eca01739e7f2a" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.912247 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86f14bde71bd4136682115120b1e45b69d7aa9076c425b5939eca01739e7f2a"} err="failed to get container status \"d86f14bde71bd4136682115120b1e45b69d7aa9076c425b5939eca01739e7f2a\": rpc error: code = NotFound desc = could not find container \"d86f14bde71bd4136682115120b1e45b69d7aa9076c425b5939eca01739e7f2a\": container with ID starting with d86f14bde71bd4136682115120b1e45b69d7aa9076c425b5939eca01739e7f2a not found: ID does not exist" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.913340 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.913787 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.913934 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 18:51:59 crc kubenswrapper[4813]: I0219 18:51:59.925800 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 18:52:00 crc 
kubenswrapper[4813]: I0219 18:52:00.001734 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9rdd\" (UniqueName: \"kubernetes.io/projected/0d99b10a-e401-4a6f-88f2-942ca4269895-kube-api-access-x9rdd\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.001993 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-public-tls-certs\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.002012 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d99b10a-e401-4a6f-88f2-942ca4269895-logs\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.002295 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.002489 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.002591 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-config-data\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.040574 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.058524 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.104438 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.104509 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-config-data\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.104592 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9rdd\" (UniqueName: \"kubernetes.io/projected/0d99b10a-e401-4a6f-88f2-942ca4269895-kube-api-access-x9rdd\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.104609 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-public-tls-certs\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 
18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.104926 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d99b10a-e401-4a6f-88f2-942ca4269895-logs\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.105301 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d99b10a-e401-4a6f-88f2-942ca4269895-logs\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.105693 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.109503 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.109577 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-public-tls-certs\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.109611 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-config-data\") pod \"nova-api-0\" (UID: 
\"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.109669 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.122123 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9rdd\" (UniqueName: \"kubernetes.io/projected/0d99b10a-e401-4a6f-88f2-942ca4269895-kube-api-access-x9rdd\") pod \"nova-api-0\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") " pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.243902 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.329708 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.329776 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.733497 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 18:52:00 crc kubenswrapper[4813]: W0219 18:52:00.744443 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d99b10a_e401_4a6f_88f2_942ca4269895.slice/crio-a033217f7180e3e7ae651690ab982c542172c0a4937484efed190be11499e463 WatchSource:0}: Error finding container a033217f7180e3e7ae651690ab982c542172c0a4937484efed190be11499e463: Status 404 returned error can't find the container with id a033217f7180e3e7ae651690ab982c542172c0a4937484efed190be11499e463 Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.822777 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d99b10a-e401-4a6f-88f2-942ca4269895","Type":"ContainerStarted","Data":"a033217f7180e3e7ae651690ab982c542172c0a4937484efed190be11499e463"} Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.826396 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fbcd22-57c9-4c44-99e8-f9307f26a525","Type":"ContainerStarted","Data":"579e9c4c32174ff288dea84c76634ebfebcf521ba3330c8727fcb1173b1e3d77"} Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.826445 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fbcd22-57c9-4c44-99e8-f9307f26a525","Type":"ContainerStarted","Data":"7b5552d482368d69a980b6f9acc9f03b5afc2c61db4aac5df25c2dd3bc9b75f1"} Feb 19 18:52:00 crc kubenswrapper[4813]: I0219 18:52:00.846910 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.001308 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-n9xc4"] Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.002671 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n9xc4" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.004599 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.005251 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.033058 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-n9xc4"] Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.121605 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knzrd\" (UniqueName: \"kubernetes.io/projected/361c9f8a-61c4-4781-9394-933bc962a0b4-kube-api-access-knzrd\") pod \"nova-cell1-cell-mapping-n9xc4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") " pod="openstack/nova-cell1-cell-mapping-n9xc4" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.121675 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-scripts\") pod \"nova-cell1-cell-mapping-n9xc4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") " pod="openstack/nova-cell1-cell-mapping-n9xc4" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.122323 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n9xc4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") " pod="openstack/nova-cell1-cell-mapping-n9xc4" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.122403 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-config-data\") pod \"nova-cell1-cell-mapping-n9xc4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") " pod="openstack/nova-cell1-cell-mapping-n9xc4" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.224589 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knzrd\" (UniqueName: \"kubernetes.io/projected/361c9f8a-61c4-4781-9394-933bc962a0b4-kube-api-access-knzrd\") pod \"nova-cell1-cell-mapping-n9xc4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") " pod="openstack/nova-cell1-cell-mapping-n9xc4" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.224997 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-scripts\") pod \"nova-cell1-cell-mapping-n9xc4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") " pod="openstack/nova-cell1-cell-mapping-n9xc4" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.225044 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n9xc4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") " pod="openstack/nova-cell1-cell-mapping-n9xc4" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.225136 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-config-data\") pod \"nova-cell1-cell-mapping-n9xc4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") " pod="openstack/nova-cell1-cell-mapping-n9xc4" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.229642 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-config-data\") pod \"nova-cell1-cell-mapping-n9xc4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") " pod="openstack/nova-cell1-cell-mapping-n9xc4" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.231286 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-scripts\") pod \"nova-cell1-cell-mapping-n9xc4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") " pod="openstack/nova-cell1-cell-mapping-n9xc4" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.240505 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n9xc4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") " pod="openstack/nova-cell1-cell-mapping-n9xc4" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.244993 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knzrd\" (UniqueName: \"kubernetes.io/projected/361c9f8a-61c4-4781-9394-933bc962a0b4-kube-api-access-knzrd\") pod \"nova-cell1-cell-mapping-n9xc4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") " pod="openstack/nova-cell1-cell-mapping-n9xc4" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.335134 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n9xc4" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.487053 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c77d229-1f7e-432b-b8e2-f1bcf0561b04" path="/var/lib/kubelet/pods/8c77d229-1f7e-432b-b8e2-f1bcf0561b04/volumes" Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.813885 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-n9xc4"] Feb 19 18:52:01 crc kubenswrapper[4813]: W0219 18:52:01.818784 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod361c9f8a_61c4_4781_9394_933bc962a0b4.slice/crio-62a0d92bc180cabbd9d0267e7e46d16843e4f96202c51968e5ca391557615a88 WatchSource:0}: Error finding container 62a0d92bc180cabbd9d0267e7e46d16843e4f96202c51968e5ca391557615a88: Status 404 returned error can't find the container with id 62a0d92bc180cabbd9d0267e7e46d16843e4f96202c51968e5ca391557615a88 Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.837339 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d99b10a-e401-4a6f-88f2-942ca4269895","Type":"ContainerStarted","Data":"e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4"} Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.837387 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d99b10a-e401-4a6f-88f2-942ca4269895","Type":"ContainerStarted","Data":"3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08"} Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.839675 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n9xc4" event={"ID":"361c9f8a-61c4-4781-9394-933bc962a0b4","Type":"ContainerStarted","Data":"62a0d92bc180cabbd9d0267e7e46d16843e4f96202c51968e5ca391557615a88"} Feb 19 18:52:01 crc kubenswrapper[4813]: I0219 18:52:01.867688 
4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.8676664929999998 podStartE2EDuration="2.867666493s" podCreationTimestamp="2026-02-19 18:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:52:01.861227994 +0000 UTC m=+1341.086668525" watchObservedRunningTime="2026-02-19 18:52:01.867666493 +0000 UTC m=+1341.093107044" Feb 19 18:52:02 crc kubenswrapper[4813]: I0219 18:52:02.866872 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n9xc4" event={"ID":"361c9f8a-61c4-4781-9394-933bc962a0b4","Type":"ContainerStarted","Data":"2687e2921a7978760aa8bb23883efc53ffec14ee1347d79693b3de333441e1a5"} Feb 19 18:52:02 crc kubenswrapper[4813]: I0219 18:52:02.872801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fbcd22-57c9-4c44-99e8-f9307f26a525","Type":"ContainerStarted","Data":"56865a353e832ea3d2554a9134268053435401ed9d60bb86dd7266e7a226e156"} Feb 19 18:52:02 crc kubenswrapper[4813]: I0219 18:52:02.872854 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 18:52:02 crc kubenswrapper[4813]: I0219 18:52:02.897677 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-n9xc4" podStartSLOduration=2.897655499 podStartE2EDuration="2.897655499s" podCreationTimestamp="2026-02-19 18:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:52:02.881202702 +0000 UTC m=+1342.106643253" watchObservedRunningTime="2026-02-19 18:52:02.897655499 +0000 UTC m=+1342.123096050" Feb 19 18:52:02 crc kubenswrapper[4813]: I0219 18:52:02.915781 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.408322406 podStartE2EDuration="5.915753958s" podCreationTimestamp="2026-02-19 18:51:57 +0000 UTC" firstStartedPulling="2026-02-19 18:51:58.695623251 +0000 UTC m=+1337.921063802" lastFinishedPulling="2026-02-19 18:52:02.203054803 +0000 UTC m=+1341.428495354" observedRunningTime="2026-02-19 18:52:02.91485543 +0000 UTC m=+1342.140296021" watchObservedRunningTime="2026-02-19 18:52:02.915753958 +0000 UTC m=+1342.141194529" Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.250130 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.322335 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75ddbf7c75-h77vl"] Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.322649 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" podUID="87a5f970-b35c-424e-b7b7-2612080926b3" containerName="dnsmasq-dns" containerID="cri-o://41fe4440fbdf484a9b01d6c7c67d89c22dc67c23e246d50806cdad764d8c0d45" gracePeriod=10 Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.783276 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.873534 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-dns-swift-storage-0\") pod \"87a5f970-b35c-424e-b7b7-2612080926b3\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.873645 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjs8s\" (UniqueName: \"kubernetes.io/projected/87a5f970-b35c-424e-b7b7-2612080926b3-kube-api-access-kjs8s\") pod \"87a5f970-b35c-424e-b7b7-2612080926b3\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.873680 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-config\") pod \"87a5f970-b35c-424e-b7b7-2612080926b3\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.873715 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-ovsdbserver-sb\") pod \"87a5f970-b35c-424e-b7b7-2612080926b3\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.873792 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-ovsdbserver-nb\") pod \"87a5f970-b35c-424e-b7b7-2612080926b3\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.873843 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-dns-svc\") pod \"87a5f970-b35c-424e-b7b7-2612080926b3\" (UID: \"87a5f970-b35c-424e-b7b7-2612080926b3\") " Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.884769 4813 generic.go:334] "Generic (PLEG): container finished" podID="87a5f970-b35c-424e-b7b7-2612080926b3" containerID="41fe4440fbdf484a9b01d6c7c67d89c22dc67c23e246d50806cdad764d8c0d45" exitCode=0 Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.885706 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.885908 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" event={"ID":"87a5f970-b35c-424e-b7b7-2612080926b3","Type":"ContainerDied","Data":"41fe4440fbdf484a9b01d6c7c67d89c22dc67c23e246d50806cdad764d8c0d45"} Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.885974 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" event={"ID":"87a5f970-b35c-424e-b7b7-2612080926b3","Type":"ContainerDied","Data":"a35dcc9c6637385f37426daef069a48e7c0a8f4dd69c530a964e45ce37d4629d"} Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.885992 4813 scope.go:117] "RemoveContainer" containerID="41fe4440fbdf484a9b01d6c7c67d89c22dc67c23e246d50806cdad764d8c0d45" Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.887837 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a5f970-b35c-424e-b7b7-2612080926b3-kube-api-access-kjs8s" (OuterVolumeSpecName: "kube-api-access-kjs8s") pod "87a5f970-b35c-424e-b7b7-2612080926b3" (UID: "87a5f970-b35c-424e-b7b7-2612080926b3"). InnerVolumeSpecName "kube-api-access-kjs8s". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.944169 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "87a5f970-b35c-424e-b7b7-2612080926b3" (UID: "87a5f970-b35c-424e-b7b7-2612080926b3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.950544 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-config" (OuterVolumeSpecName: "config") pod "87a5f970-b35c-424e-b7b7-2612080926b3" (UID: "87a5f970-b35c-424e-b7b7-2612080926b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.955292 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "87a5f970-b35c-424e-b7b7-2612080926b3" (UID: "87a5f970-b35c-424e-b7b7-2612080926b3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.955764 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "87a5f970-b35c-424e-b7b7-2612080926b3" (UID: "87a5f970-b35c-424e-b7b7-2612080926b3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.964364 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "87a5f970-b35c-424e-b7b7-2612080926b3" (UID: "87a5f970-b35c-424e-b7b7-2612080926b3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.975670 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.975697 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.975709 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.975718 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.975728 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjs8s\" (UniqueName: \"kubernetes.io/projected/87a5f970-b35c-424e-b7b7-2612080926b3-kube-api-access-kjs8s\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.975738 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a5f970-b35c-424e-b7b7-2612080926b3-config\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:03 crc kubenswrapper[4813]: I0219 18:52:03.981187 4813 scope.go:117] "RemoveContainer" containerID="fd2376d3999a03e034fa4ebfb7dd8d651f752bbaccf53c555dad8f7cac4130ab"
Feb 19 18:52:04 crc kubenswrapper[4813]: I0219 18:52:04.006465 4813 scope.go:117] "RemoveContainer" containerID="41fe4440fbdf484a9b01d6c7c67d89c22dc67c23e246d50806cdad764d8c0d45"
Feb 19 18:52:04 crc kubenswrapper[4813]: E0219 18:52:04.006901 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41fe4440fbdf484a9b01d6c7c67d89c22dc67c23e246d50806cdad764d8c0d45\": container with ID starting with 41fe4440fbdf484a9b01d6c7c67d89c22dc67c23e246d50806cdad764d8c0d45 not found: ID does not exist" containerID="41fe4440fbdf484a9b01d6c7c67d89c22dc67c23e246d50806cdad764d8c0d45"
Feb 19 18:52:04 crc kubenswrapper[4813]: I0219 18:52:04.006942 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41fe4440fbdf484a9b01d6c7c67d89c22dc67c23e246d50806cdad764d8c0d45"} err="failed to get container status \"41fe4440fbdf484a9b01d6c7c67d89c22dc67c23e246d50806cdad764d8c0d45\": rpc error: code = NotFound desc = could not find container \"41fe4440fbdf484a9b01d6c7c67d89c22dc67c23e246d50806cdad764d8c0d45\": container with ID starting with 41fe4440fbdf484a9b01d6c7c67d89c22dc67c23e246d50806cdad764d8c0d45 not found: ID does not exist"
Feb 19 18:52:04 crc kubenswrapper[4813]: I0219 18:52:04.006994 4813 scope.go:117] "RemoveContainer" containerID="fd2376d3999a03e034fa4ebfb7dd8d651f752bbaccf53c555dad8f7cac4130ab"
Feb 19 18:52:04 crc kubenswrapper[4813]: E0219 18:52:04.007605 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd2376d3999a03e034fa4ebfb7dd8d651f752bbaccf53c555dad8f7cac4130ab\": container with ID starting with fd2376d3999a03e034fa4ebfb7dd8d651f752bbaccf53c555dad8f7cac4130ab not found: ID does not exist" containerID="fd2376d3999a03e034fa4ebfb7dd8d651f752bbaccf53c555dad8f7cac4130ab"
Feb 19 18:52:04 crc kubenswrapper[4813]: I0219 18:52:04.007648 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd2376d3999a03e034fa4ebfb7dd8d651f752bbaccf53c555dad8f7cac4130ab"} err="failed to get container status \"fd2376d3999a03e034fa4ebfb7dd8d651f752bbaccf53c555dad8f7cac4130ab\": rpc error: code = NotFound desc = could not find container \"fd2376d3999a03e034fa4ebfb7dd8d651f752bbaccf53c555dad8f7cac4130ab\": container with ID starting with fd2376d3999a03e034fa4ebfb7dd8d651f752bbaccf53c555dad8f7cac4130ab not found: ID does not exist"
Feb 19 18:52:04 crc kubenswrapper[4813]: I0219 18:52:04.232840 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75ddbf7c75-h77vl"]
Feb 19 18:52:04 crc kubenswrapper[4813]: I0219 18:52:04.242531 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75ddbf7c75-h77vl"]
Feb 19 18:52:05 crc kubenswrapper[4813]: I0219 18:52:05.490069 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87a5f970-b35c-424e-b7b7-2612080926b3" path="/var/lib/kubelet/pods/87a5f970-b35c-424e-b7b7-2612080926b3/volumes"
Feb 19 18:52:06 crc kubenswrapper[4813]: I0219 18:52:06.928822 4813 generic.go:334] "Generic (PLEG): container finished" podID="361c9f8a-61c4-4781-9394-933bc962a0b4" containerID="2687e2921a7978760aa8bb23883efc53ffec14ee1347d79693b3de333441e1a5" exitCode=0
Feb 19 18:52:06 crc kubenswrapper[4813]: I0219 18:52:06.928948 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n9xc4" event={"ID":"361c9f8a-61c4-4781-9394-933bc962a0b4","Type":"ContainerDied","Data":"2687e2921a7978760aa8bb23883efc53ffec14ee1347d79693b3de333441e1a5"}
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.372151 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n9xc4"
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.472522 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-config-data\") pod \"361c9f8a-61c4-4781-9394-933bc962a0b4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") "
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.472978 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-scripts\") pod \"361c9f8a-61c4-4781-9394-933bc962a0b4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") "
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.473071 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knzrd\" (UniqueName: \"kubernetes.io/projected/361c9f8a-61c4-4781-9394-933bc962a0b4-kube-api-access-knzrd\") pod \"361c9f8a-61c4-4781-9394-933bc962a0b4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") "
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.473219 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-combined-ca-bundle\") pod \"361c9f8a-61c4-4781-9394-933bc962a0b4\" (UID: \"361c9f8a-61c4-4781-9394-933bc962a0b4\") "
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.479237 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-scripts" (OuterVolumeSpecName: "scripts") pod "361c9f8a-61c4-4781-9394-933bc962a0b4" (UID: "361c9f8a-61c4-4781-9394-933bc962a0b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.480532 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/361c9f8a-61c4-4781-9394-933bc962a0b4-kube-api-access-knzrd" (OuterVolumeSpecName: "kube-api-access-knzrd") pod "361c9f8a-61c4-4781-9394-933bc962a0b4" (UID: "361c9f8a-61c4-4781-9394-933bc962a0b4"). InnerVolumeSpecName "kube-api-access-knzrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.503493 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-config-data" (OuterVolumeSpecName: "config-data") pod "361c9f8a-61c4-4781-9394-933bc962a0b4" (UID: "361c9f8a-61c4-4781-9394-933bc962a0b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.510547 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "361c9f8a-61c4-4781-9394-933bc962a0b4" (UID: "361c9f8a-61c4-4781-9394-933bc962a0b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.576499 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.576552 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.576615 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knzrd\" (UniqueName: \"kubernetes.io/projected/361c9f8a-61c4-4781-9394-933bc962a0b4-kube-api-access-knzrd\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.576636 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361c9f8a-61c4-4781-9394-933bc962a0b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.686695 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75ddbf7c75-h77vl" podUID="87a5f970-b35c-424e-b7b7-2612080926b3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.186:5353: i/o timeout"
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.951636 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n9xc4" event={"ID":"361c9f8a-61c4-4781-9394-933bc962a0b4","Type":"ContainerDied","Data":"62a0d92bc180cabbd9d0267e7e46d16843e4f96202c51968e5ca391557615a88"}
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.951694 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62a0d92bc180cabbd9d0267e7e46d16843e4f96202c51968e5ca391557615a88"
Feb 19 18:52:08 crc kubenswrapper[4813]: I0219 18:52:08.951709 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n9xc4"
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.170063 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.170503 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="78de48f7-3d0a-45dc-8aad-f61546b2f5a6" containerName="nova-scheduler-scheduler" containerID="cri-o://e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c" gracePeriod=30
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.183481 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.183932 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0d99b10a-e401-4a6f-88f2-942ca4269895" containerName="nova-api-log" containerID="cri-o://3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08" gracePeriod=30
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.184050 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0d99b10a-e401-4a6f-88f2-942ca4269895" containerName="nova-api-api" containerID="cri-o://e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4" gracePeriod=30
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.197987 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.198224 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8cad7f3d-9640-4949-bd7a-11407f2c8f97" containerName="nova-metadata-log" containerID="cri-o://54c437315b39fd0d37736226869465e05b6ed2ef27f4780c07255ef49d0147a6" gracePeriod=30
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.198355 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8cad7f3d-9640-4949-bd7a-11407f2c8f97" containerName="nova-metadata-metadata" containerID="cri-o://91240f6202bc1a26cbe97d04b6f255e2c37f1b1794253ff8fd5fd43a5a9ad7f5" gracePeriod=30
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.798113 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.903877 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-internal-tls-certs\") pod \"0d99b10a-e401-4a6f-88f2-942ca4269895\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") "
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.904048 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-config-data\") pod \"0d99b10a-e401-4a6f-88f2-942ca4269895\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") "
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.904613 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9rdd\" (UniqueName: \"kubernetes.io/projected/0d99b10a-e401-4a6f-88f2-942ca4269895-kube-api-access-x9rdd\") pod \"0d99b10a-e401-4a6f-88f2-942ca4269895\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") "
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.904651 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-combined-ca-bundle\") pod \"0d99b10a-e401-4a6f-88f2-942ca4269895\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") "
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.904694 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-public-tls-certs\") pod \"0d99b10a-e401-4a6f-88f2-942ca4269895\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") "
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.904745 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d99b10a-e401-4a6f-88f2-942ca4269895-logs\") pod \"0d99b10a-e401-4a6f-88f2-942ca4269895\" (UID: \"0d99b10a-e401-4a6f-88f2-942ca4269895\") "
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.905746 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d99b10a-e401-4a6f-88f2-942ca4269895-logs" (OuterVolumeSpecName: "logs") pod "0d99b10a-e401-4a6f-88f2-942ca4269895" (UID: "0d99b10a-e401-4a6f-88f2-942ca4269895"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.912513 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d99b10a-e401-4a6f-88f2-942ca4269895-kube-api-access-x9rdd" (OuterVolumeSpecName: "kube-api-access-x9rdd") pod "0d99b10a-e401-4a6f-88f2-942ca4269895" (UID: "0d99b10a-e401-4a6f-88f2-942ca4269895"). InnerVolumeSpecName "kube-api-access-x9rdd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.936248 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d99b10a-e401-4a6f-88f2-942ca4269895" (UID: "0d99b10a-e401-4a6f-88f2-942ca4269895"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.936732 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-config-data" (OuterVolumeSpecName: "config-data") pod "0d99b10a-e401-4a6f-88f2-942ca4269895" (UID: "0d99b10a-e401-4a6f-88f2-942ca4269895"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.957369 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0d99b10a-e401-4a6f-88f2-942ca4269895" (UID: "0d99b10a-e401-4a6f-88f2-942ca4269895"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.962504 4813 generic.go:334] "Generic (PLEG): container finished" podID="8cad7f3d-9640-4949-bd7a-11407f2c8f97" containerID="54c437315b39fd0d37736226869465e05b6ed2ef27f4780c07255ef49d0147a6" exitCode=143
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.962593 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cad7f3d-9640-4949-bd7a-11407f2c8f97","Type":"ContainerDied","Data":"54c437315b39fd0d37736226869465e05b6ed2ef27f4780c07255ef49d0147a6"}
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.965058 4813 generic.go:334] "Generic (PLEG): container finished" podID="0d99b10a-e401-4a6f-88f2-942ca4269895" containerID="e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4" exitCode=0
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.965207 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d99b10a-e401-4a6f-88f2-942ca4269895","Type":"ContainerDied","Data":"e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4"}
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.965244 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d99b10a-e401-4a6f-88f2-942ca4269895","Type":"ContainerDied","Data":"3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08"}
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.965238 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.965279 4813 scope.go:117] "RemoveContainer" containerID="e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4"
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.965451 4813 generic.go:334] "Generic (PLEG): container finished" podID="0d99b10a-e401-4a6f-88f2-942ca4269895" containerID="3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08" exitCode=143
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.965750 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d99b10a-e401-4a6f-88f2-942ca4269895","Type":"ContainerDied","Data":"a033217f7180e3e7ae651690ab982c542172c0a4937484efed190be11499e463"}
Feb 19 18:52:09 crc kubenswrapper[4813]: I0219 18:52:09.989130 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0d99b10a-e401-4a6f-88f2-942ca4269895" (UID: "0d99b10a-e401-4a6f-88f2-942ca4269895"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.007018 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9rdd\" (UniqueName: \"kubernetes.io/projected/0d99b10a-e401-4a6f-88f2-942ca4269895-kube-api-access-x9rdd\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.007049 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.007058 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.007069 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d99b10a-e401-4a6f-88f2-942ca4269895-logs\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.007080 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.007088 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d99b10a-e401-4a6f-88f2-942ca4269895-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.009169 4813 scope.go:117] "RemoveContainer" containerID="3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.035177 4813 scope.go:117] "RemoveContainer" containerID="e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4"
Feb 19 18:52:10 crc kubenswrapper[4813]: E0219 18:52:10.035731 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4\": container with ID starting with e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4 not found: ID does not exist" containerID="e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.035768 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4"} err="failed to get container status \"e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4\": rpc error: code = NotFound desc = could not find container \"e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4\": container with ID starting with e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4 not found: ID does not exist"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.035789 4813 scope.go:117] "RemoveContainer" containerID="3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08"
Feb 19 18:52:10 crc kubenswrapper[4813]: E0219 18:52:10.036236 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08\": container with ID starting with 3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08 not found: ID does not exist" containerID="3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.036289 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08"} err="failed to get container status \"3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08\": rpc error: code = NotFound desc = could not find container \"3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08\": container with ID starting with 3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08 not found: ID does not exist"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.036309 4813 scope.go:117] "RemoveContainer" containerID="e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.036667 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4"} err="failed to get container status \"e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4\": rpc error: code = NotFound desc = could not find container \"e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4\": container with ID starting with e2eea6ec1a526b93ff470416a3178bd449d8b3b3c0db238826191bcbf5b4f8c4 not found: ID does not exist"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.036707 4813 scope.go:117] "RemoveContainer" containerID="3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.036993 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08"} err="failed to get container status \"3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08\": rpc error: code = NotFound desc = could not find container \"3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08\": container with ID starting with 3a915c1d4f71f392bff5013072f2c6a9fef9c7ef25e7914b245ae2fbae2d8f08 not found: ID does not exist"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.300086 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.310791 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.329185 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 19 18:52:10 crc kubenswrapper[4813]: E0219 18:52:10.329523 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a5f970-b35c-424e-b7b7-2612080926b3" containerName="dnsmasq-dns"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.329539 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a5f970-b35c-424e-b7b7-2612080926b3" containerName="dnsmasq-dns"
Feb 19 18:52:10 crc kubenswrapper[4813]: E0219 18:52:10.329550 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="361c9f8a-61c4-4781-9394-933bc962a0b4" containerName="nova-manage"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.329556 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="361c9f8a-61c4-4781-9394-933bc962a0b4" containerName="nova-manage"
Feb 19 18:52:10 crc kubenswrapper[4813]: E0219 18:52:10.329569 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d99b10a-e401-4a6f-88f2-942ca4269895" containerName="nova-api-log"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.329576 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d99b10a-e401-4a6f-88f2-942ca4269895" containerName="nova-api-log"
Feb 19 18:52:10 crc kubenswrapper[4813]: E0219 18:52:10.329587 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a5f970-b35c-424e-b7b7-2612080926b3" containerName="init"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.329592 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a5f970-b35c-424e-b7b7-2612080926b3" containerName="init"
Feb 19 18:52:10 crc kubenswrapper[4813]: E0219 18:52:10.329620 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d99b10a-e401-4a6f-88f2-942ca4269895" containerName="nova-api-api"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.329625 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d99b10a-e401-4a6f-88f2-942ca4269895" containerName="nova-api-api"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.329778 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="361c9f8a-61c4-4781-9394-933bc962a0b4" containerName="nova-manage"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.329791 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a5f970-b35c-424e-b7b7-2612080926b3" containerName="dnsmasq-dns"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.329809 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d99b10a-e401-4a6f-88f2-942ca4269895" containerName="nova-api-api"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.329816 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d99b10a-e401-4a6f-88f2-942ca4269895" containerName="nova-api-log"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.331542 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.336493 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.336571 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.337611 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.356097 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.413420 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.413458 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-config-data\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.413482 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.413512 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-logs\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.413806 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skbh8\" (UniqueName: \"kubernetes.io/projected/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-kube-api-access-skbh8\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.413885 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-public-tls-certs\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.515578 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-public-tls-certs\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.515705 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.515739 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-config-data\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.515771 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.515813 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-logs\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.515923 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skbh8\" (UniqueName: \"kubernetes.io/projected/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-kube-api-access-skbh8\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.516437 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-logs\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.519585 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.519748 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-config-data\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.523312 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.523463 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-public-tls-certs\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.542406 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skbh8\" (UniqueName: \"kubernetes.io/projected/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-kube-api-access-skbh8\") pod \"nova-api-0\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: I0219 18:52:10.685948 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 18:52:10 crc kubenswrapper[4813]: E0219 18:52:10.800841 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 19 18:52:10 crc kubenswrapper[4813]: E0219 18:52:10.803378 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 19 18:52:10 crc kubenswrapper[4813]: E0219 18:52:10.808403 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 19 18:52:10 crc kubenswrapper[4813]: E0219 18:52:10.808435 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="78de48f7-3d0a-45dc-8aad-f61546b2f5a6" containerName="nova-scheduler-scheduler"
Feb 19 18:52:11 crc kubenswrapper[4813]: I0219 18:52:11.202226 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 18:52:11 crc kubenswrapper[4813]: W0219 18:52:11.202715 4813 manager.go:1169] Failed to process watch event {EventType:0
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod153c22ed_7e2e_496e_9a13_9ef0ce79efd8.slice/crio-1cafd68c3c22c47bf0012d5f19c420b6a8694b41e6f1af8da2bf387612c8e351 WatchSource:0}: Error finding container 1cafd68c3c22c47bf0012d5f19c420b6a8694b41e6f1af8da2bf387612c8e351: Status 404 returned error can't find the container with id 1cafd68c3c22c47bf0012d5f19c420b6a8694b41e6f1af8da2bf387612c8e351 Feb 19 18:52:11 crc kubenswrapper[4813]: I0219 18:52:11.492223 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d99b10a-e401-4a6f-88f2-942ca4269895" path="/var/lib/kubelet/pods/0d99b10a-e401-4a6f-88f2-942ca4269895/volumes" Feb 19 18:52:11 crc kubenswrapper[4813]: I0219 18:52:11.989527 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"153c22ed-7e2e-496e-9a13-9ef0ce79efd8","Type":"ContainerStarted","Data":"80732d7919ccb302a5ba0c1ac5c2fcd0229a26a0e569dafeadc7852c9e14fefa"} Feb 19 18:52:11 crc kubenswrapper[4813]: I0219 18:52:11.989578 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"153c22ed-7e2e-496e-9a13-9ef0ce79efd8","Type":"ContainerStarted","Data":"84023d51754b9bdea418d6af28a79e9ce79c0d010f1e0a43ee78d69255fd77b9"} Feb 19 18:52:11 crc kubenswrapper[4813]: I0219 18:52:11.989593 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"153c22ed-7e2e-496e-9a13-9ef0ce79efd8","Type":"ContainerStarted","Data":"1cafd68c3c22c47bf0012d5f19c420b6a8694b41e6f1af8da2bf387612c8e351"} Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.016262 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.016239266 podStartE2EDuration="2.016239266s" podCreationTimestamp="2026-02-19 18:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:52:12.015091021 +0000 
UTC m=+1351.240531582" watchObservedRunningTime="2026-02-19 18:52:12.016239266 +0000 UTC m=+1351.241679817" Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.331122 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8cad7f3d-9640-4949-bd7a-11407f2c8f97" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": read tcp 10.217.0.2:35168->10.217.0.190:8775: read: connection reset by peer" Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.331259 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8cad7f3d-9640-4949-bd7a-11407f2c8f97" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": read tcp 10.217.0.2:35172->10.217.0.190:8775: read: connection reset by peer" Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.794101 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.864664 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-config-data\") pod \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.865079 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-combined-ca-bundle\") pod \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.865117 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmdn8\" (UniqueName: 
\"kubernetes.io/projected/8cad7f3d-9640-4949-bd7a-11407f2c8f97-kube-api-access-bmdn8\") pod \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.865175 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cad7f3d-9640-4949-bd7a-11407f2c8f97-logs\") pod \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.865213 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-nova-metadata-tls-certs\") pod \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\" (UID: \"8cad7f3d-9640-4949-bd7a-11407f2c8f97\") " Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.867371 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cad7f3d-9640-4949-bd7a-11407f2c8f97-logs" (OuterVolumeSpecName: "logs") pod "8cad7f3d-9640-4949-bd7a-11407f2c8f97" (UID: "8cad7f3d-9640-4949-bd7a-11407f2c8f97"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.875604 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cad7f3d-9640-4949-bd7a-11407f2c8f97-kube-api-access-bmdn8" (OuterVolumeSpecName: "kube-api-access-bmdn8") pod "8cad7f3d-9640-4949-bd7a-11407f2c8f97" (UID: "8cad7f3d-9640-4949-bd7a-11407f2c8f97"). InnerVolumeSpecName "kube-api-access-bmdn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.912365 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-config-data" (OuterVolumeSpecName: "config-data") pod "8cad7f3d-9640-4949-bd7a-11407f2c8f97" (UID: "8cad7f3d-9640-4949-bd7a-11407f2c8f97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.916633 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cad7f3d-9640-4949-bd7a-11407f2c8f97" (UID: "8cad7f3d-9640-4949-bd7a-11407f2c8f97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.955064 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8cad7f3d-9640-4949-bd7a-11407f2c8f97" (UID: "8cad7f3d-9640-4949-bd7a-11407f2c8f97"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.966861 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmdn8\" (UniqueName: \"kubernetes.io/projected/8cad7f3d-9640-4949-bd7a-11407f2c8f97-kube-api-access-bmdn8\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.966904 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cad7f3d-9640-4949-bd7a-11407f2c8f97-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.966917 4813 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.966929 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:12 crc kubenswrapper[4813]: I0219 18:52:12.966942 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cad7f3d-9640-4949-bd7a-11407f2c8f97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.004658 4813 generic.go:334] "Generic (PLEG): container finished" podID="8cad7f3d-9640-4949-bd7a-11407f2c8f97" containerID="91240f6202bc1a26cbe97d04b6f255e2c37f1b1794253ff8fd5fd43a5a9ad7f5" exitCode=0 Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.004701 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.004757 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cad7f3d-9640-4949-bd7a-11407f2c8f97","Type":"ContainerDied","Data":"91240f6202bc1a26cbe97d04b6f255e2c37f1b1794253ff8fd5fd43a5a9ad7f5"} Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.004818 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8cad7f3d-9640-4949-bd7a-11407f2c8f97","Type":"ContainerDied","Data":"fcf2a6aa3ed42d789311bfaf2bfb4fc9d9bb8cffb037ddee33240383729c8007"} Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.004841 4813 scope.go:117] "RemoveContainer" containerID="91240f6202bc1a26cbe97d04b6f255e2c37f1b1794253ff8fd5fd43a5a9ad7f5" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.050157 4813 scope.go:117] "RemoveContainer" containerID="54c437315b39fd0d37736226869465e05b6ed2ef27f4780c07255ef49d0147a6" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.057551 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.079753 4813 scope.go:117] "RemoveContainer" containerID="91240f6202bc1a26cbe97d04b6f255e2c37f1b1794253ff8fd5fd43a5a9ad7f5" Feb 19 18:52:13 crc kubenswrapper[4813]: E0219 18:52:13.080209 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91240f6202bc1a26cbe97d04b6f255e2c37f1b1794253ff8fd5fd43a5a9ad7f5\": container with ID starting with 91240f6202bc1a26cbe97d04b6f255e2c37f1b1794253ff8fd5fd43a5a9ad7f5 not found: ID does not exist" containerID="91240f6202bc1a26cbe97d04b6f255e2c37f1b1794253ff8fd5fd43a5a9ad7f5" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.080252 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"91240f6202bc1a26cbe97d04b6f255e2c37f1b1794253ff8fd5fd43a5a9ad7f5"} err="failed to get container status \"91240f6202bc1a26cbe97d04b6f255e2c37f1b1794253ff8fd5fd43a5a9ad7f5\": rpc error: code = NotFound desc = could not find container \"91240f6202bc1a26cbe97d04b6f255e2c37f1b1794253ff8fd5fd43a5a9ad7f5\": container with ID starting with 91240f6202bc1a26cbe97d04b6f255e2c37f1b1794253ff8fd5fd43a5a9ad7f5 not found: ID does not exist" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.080272 4813 scope.go:117] "RemoveContainer" containerID="54c437315b39fd0d37736226869465e05b6ed2ef27f4780c07255ef49d0147a6" Feb 19 18:52:13 crc kubenswrapper[4813]: E0219 18:52:13.080593 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54c437315b39fd0d37736226869465e05b6ed2ef27f4780c07255ef49d0147a6\": container with ID starting with 54c437315b39fd0d37736226869465e05b6ed2ef27f4780c07255ef49d0147a6 not found: ID does not exist" containerID="54c437315b39fd0d37736226869465e05b6ed2ef27f4780c07255ef49d0147a6" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.080686 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54c437315b39fd0d37736226869465e05b6ed2ef27f4780c07255ef49d0147a6"} err="failed to get container status \"54c437315b39fd0d37736226869465e05b6ed2ef27f4780c07255ef49d0147a6\": rpc error: code = NotFound desc = could not find container \"54c437315b39fd0d37736226869465e05b6ed2ef27f4780c07255ef49d0147a6\": container with ID starting with 54c437315b39fd0d37736226869465e05b6ed2ef27f4780c07255ef49d0147a6 not found: ID does not exist" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.081339 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.090516 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] 
Feb 19 18:52:13 crc kubenswrapper[4813]: E0219 18:52:13.090873 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cad7f3d-9640-4949-bd7a-11407f2c8f97" containerName="nova-metadata-log" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.090888 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cad7f3d-9640-4949-bd7a-11407f2c8f97" containerName="nova-metadata-log" Feb 19 18:52:13 crc kubenswrapper[4813]: E0219 18:52:13.090903 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cad7f3d-9640-4949-bd7a-11407f2c8f97" containerName="nova-metadata-metadata" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.090909 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cad7f3d-9640-4949-bd7a-11407f2c8f97" containerName="nova-metadata-metadata" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.091087 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cad7f3d-9640-4949-bd7a-11407f2c8f97" containerName="nova-metadata-metadata" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.091106 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cad7f3d-9640-4949-bd7a-11407f2c8f97" containerName="nova-metadata-log" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.091989 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.100176 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.119662 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.119902 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.170148 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-config-data\") pod \"nova-metadata-0\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.170191 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.170663 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.170772 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5wcw\" (UniqueName: 
\"kubernetes.io/projected/c389f482-3000-4f94-924d-158c9d51a2e9-kube-api-access-j5wcw\") pod \"nova-metadata-0\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.170888 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c389f482-3000-4f94-924d-158c9d51a2e9-logs\") pod \"nova-metadata-0\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.272099 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.272148 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5wcw\" (UniqueName: \"kubernetes.io/projected/c389f482-3000-4f94-924d-158c9d51a2e9-kube-api-access-j5wcw\") pod \"nova-metadata-0\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.272193 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c389f482-3000-4f94-924d-158c9d51a2e9-logs\") pod \"nova-metadata-0\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.272212 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-config-data\") pod \"nova-metadata-0\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " pod="openstack/nova-metadata-0" 
Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.272230 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.273531 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c389f482-3000-4f94-924d-158c9d51a2e9-logs\") pod \"nova-metadata-0\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.277653 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-config-data\") pod \"nova-metadata-0\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.280553 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.280841 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.288001 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5wcw\" (UniqueName: 
\"kubernetes.io/projected/c389f482-3000-4f94-924d-158c9d51a2e9-kube-api-access-j5wcw\") pod \"nova-metadata-0\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.437623 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.486432 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cad7f3d-9640-4949-bd7a-11407f2c8f97" path="/var/lib/kubelet/pods/8cad7f3d-9640-4949-bd7a-11407f2c8f97/volumes" Feb 19 18:52:13 crc kubenswrapper[4813]: I0219 18:52:13.938023 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:52:13 crc kubenswrapper[4813]: W0219 18:52:13.940333 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc389f482_3000_4f94_924d_158c9d51a2e9.slice/crio-9f772f680748d60af5a6560ada0a69e5535e128aaa419c6744f56b1f8c246a23 WatchSource:0}: Error finding container 9f772f680748d60af5a6560ada0a69e5535e128aaa419c6744f56b1f8c246a23: Status 404 returned error can't find the container with id 9f772f680748d60af5a6560ada0a69e5535e128aaa419c6744f56b1f8c246a23 Feb 19 18:52:14 crc kubenswrapper[4813]: I0219 18:52:14.021487 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c389f482-3000-4f94-924d-158c9d51a2e9","Type":"ContainerStarted","Data":"9f772f680748d60af5a6560ada0a69e5535e128aaa419c6744f56b1f8c246a23"} Feb 19 18:52:14 crc kubenswrapper[4813]: I0219 18:52:14.844794 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 18:52:14 crc kubenswrapper[4813]: I0219 18:52:14.913914 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2plm\" (UniqueName: \"kubernetes.io/projected/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-kube-api-access-h2plm\") pod \"78de48f7-3d0a-45dc-8aad-f61546b2f5a6\" (UID: \"78de48f7-3d0a-45dc-8aad-f61546b2f5a6\") " Feb 19 18:52:14 crc kubenswrapper[4813]: I0219 18:52:14.914578 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-config-data\") pod \"78de48f7-3d0a-45dc-8aad-f61546b2f5a6\" (UID: \"78de48f7-3d0a-45dc-8aad-f61546b2f5a6\") " Feb 19 18:52:14 crc kubenswrapper[4813]: I0219 18:52:14.914891 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-combined-ca-bundle\") pod \"78de48f7-3d0a-45dc-8aad-f61546b2f5a6\" (UID: \"78de48f7-3d0a-45dc-8aad-f61546b2f5a6\") " Feb 19 18:52:14 crc kubenswrapper[4813]: I0219 18:52:14.924699 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-kube-api-access-h2plm" (OuterVolumeSpecName: "kube-api-access-h2plm") pod "78de48f7-3d0a-45dc-8aad-f61546b2f5a6" (UID: "78de48f7-3d0a-45dc-8aad-f61546b2f5a6"). InnerVolumeSpecName "kube-api-access-h2plm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:14 crc kubenswrapper[4813]: I0219 18:52:14.960058 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-config-data" (OuterVolumeSpecName: "config-data") pod "78de48f7-3d0a-45dc-8aad-f61546b2f5a6" (UID: "78de48f7-3d0a-45dc-8aad-f61546b2f5a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:14 crc kubenswrapper[4813]: I0219 18:52:14.981894 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78de48f7-3d0a-45dc-8aad-f61546b2f5a6" (UID: "78de48f7-3d0a-45dc-8aad-f61546b2f5a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.017275 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.017318 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2plm\" (UniqueName: \"kubernetes.io/projected/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-kube-api-access-h2plm\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.017332 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78de48f7-3d0a-45dc-8aad-f61546b2f5a6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.029834 4813 generic.go:334] "Generic (PLEG): container finished" podID="78de48f7-3d0a-45dc-8aad-f61546b2f5a6" containerID="e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c" exitCode=0 Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.029922 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.029928 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"78de48f7-3d0a-45dc-8aad-f61546b2f5a6","Type":"ContainerDied","Data":"e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c"}
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.030119 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"78de48f7-3d0a-45dc-8aad-f61546b2f5a6","Type":"ContainerDied","Data":"9e52ff3099f45b1a1cf1a44758b6fa13a4b27694f77534cb9ee1cd6e12180924"}
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.030136 4813 scope.go:117] "RemoveContainer" containerID="e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.033699 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c389f482-3000-4f94-924d-158c9d51a2e9","Type":"ContainerStarted","Data":"610a526fd13a6efb0d4ac221965cec53053e05e9e010cd31550a6cc426a151df"}
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.033725 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c389f482-3000-4f94-924d-158c9d51a2e9","Type":"ContainerStarted","Data":"42f9e9f8d59b24e0b6b81176e439582dcdd7e50952fc2dd61a52acdcf4367e0e"}
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.063889 4813 scope.go:117] "RemoveContainer" containerID="e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c"
Feb 19 18:52:15 crc kubenswrapper[4813]: E0219 18:52:15.064434 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c\": container with ID starting with e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c not found: ID does not exist" containerID="e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.064484 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c"} err="failed to get container status \"e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c\": rpc error: code = NotFound desc = could not find container \"e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c\": container with ID starting with e0817ecad050f3721ed4924ec075780185d08a2788addfd90f84d19e00dd8d5c not found: ID does not exist"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.084912 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.084888878 podStartE2EDuration="2.084888878s" podCreationTimestamp="2026-02-19 18:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:52:15.06361203 +0000 UTC m=+1354.289052591" watchObservedRunningTime="2026-02-19 18:52:15.084888878 +0000 UTC m=+1354.310329419"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.099248 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.120892 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.135116 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 18:52:15 crc kubenswrapper[4813]: E0219 18:52:15.135729 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78de48f7-3d0a-45dc-8aad-f61546b2f5a6" containerName="nova-scheduler-scheduler"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.135749 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="78de48f7-3d0a-45dc-8aad-f61546b2f5a6" containerName="nova-scheduler-scheduler"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.135918 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="78de48f7-3d0a-45dc-8aad-f61546b2f5a6" containerName="nova-scheduler-scheduler"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.136820 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.138422 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.146101 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.221536 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-config-data\") pod \"nova-scheduler-0\" (UID: \"b69a5561-31f2-4e4f-96d3-d0db19a6a51f\") " pod="openstack/nova-scheduler-0"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.221649 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls27j\" (UniqueName: \"kubernetes.io/projected/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-kube-api-access-ls27j\") pod \"nova-scheduler-0\" (UID: \"b69a5561-31f2-4e4f-96d3-d0db19a6a51f\") " pod="openstack/nova-scheduler-0"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.221973 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b69a5561-31f2-4e4f-96d3-d0db19a6a51f\") " pod="openstack/nova-scheduler-0"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.323429 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b69a5561-31f2-4e4f-96d3-d0db19a6a51f\") " pod="openstack/nova-scheduler-0"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.323531 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-config-data\") pod \"nova-scheduler-0\" (UID: \"b69a5561-31f2-4e4f-96d3-d0db19a6a51f\") " pod="openstack/nova-scheduler-0"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.323570 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls27j\" (UniqueName: \"kubernetes.io/projected/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-kube-api-access-ls27j\") pod \"nova-scheduler-0\" (UID: \"b69a5561-31f2-4e4f-96d3-d0db19a6a51f\") " pod="openstack/nova-scheduler-0"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.328228 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-config-data\") pod \"nova-scheduler-0\" (UID: \"b69a5561-31f2-4e4f-96d3-d0db19a6a51f\") " pod="openstack/nova-scheduler-0"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.329388 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b69a5561-31f2-4e4f-96d3-d0db19a6a51f\") " pod="openstack/nova-scheduler-0"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.340065 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls27j\" (UniqueName: \"kubernetes.io/projected/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-kube-api-access-ls27j\") pod \"nova-scheduler-0\" (UID: \"b69a5561-31f2-4e4f-96d3-d0db19a6a51f\") " pod="openstack/nova-scheduler-0"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.464483 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 19 18:52:15 crc kubenswrapper[4813]: I0219 18:52:15.501312 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78de48f7-3d0a-45dc-8aad-f61546b2f5a6" path="/var/lib/kubelet/pods/78de48f7-3d0a-45dc-8aad-f61546b2f5a6/volumes"
Feb 19 18:52:16 crc kubenswrapper[4813]: I0219 18:52:16.009450 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 18:52:16 crc kubenswrapper[4813]: W0219 18:52:16.013209 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb69a5561_31f2_4e4f_96d3_d0db19a6a51f.slice/crio-c5a1bb754235f9deba7e9fe4d6e2837be736135482927a19fc50b52cf66e0851 WatchSource:0}: Error finding container c5a1bb754235f9deba7e9fe4d6e2837be736135482927a19fc50b52cf66e0851: Status 404 returned error can't find the container with id c5a1bb754235f9deba7e9fe4d6e2837be736135482927a19fc50b52cf66e0851
Feb 19 18:52:16 crc kubenswrapper[4813]: I0219 18:52:16.047223 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b69a5561-31f2-4e4f-96d3-d0db19a6a51f","Type":"ContainerStarted","Data":"c5a1bb754235f9deba7e9fe4d6e2837be736135482927a19fc50b52cf66e0851"}
Feb 19 18:52:17 crc kubenswrapper[4813]: I0219 18:52:17.064714 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b69a5561-31f2-4e4f-96d3-d0db19a6a51f","Type":"ContainerStarted","Data":"4e6ba2ed7bbec736dcf293ba52811dd6e91ad2796a685f3c0760ed0b7c04cd3a"}
Feb 19 18:52:17 crc kubenswrapper[4813]: I0219 18:52:17.111388 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.111354256 podStartE2EDuration="2.111354256s" podCreationTimestamp="2026-02-19 18:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 18:52:17.098849269 +0000 UTC m=+1356.324289860" watchObservedRunningTime="2026-02-19 18:52:17.111354256 +0000 UTC m=+1356.336794827"
Feb 19 18:52:18 crc kubenswrapper[4813]: I0219 18:52:18.437840 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 19 18:52:18 crc kubenswrapper[4813]: I0219 18:52:18.438179 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 19 18:52:20 crc kubenswrapper[4813]: I0219 18:52:20.464758 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 19 18:52:20 crc kubenswrapper[4813]: I0219 18:52:20.686352 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 18:52:20 crc kubenswrapper[4813]: I0219 18:52:20.686683 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 18:52:21 crc kubenswrapper[4813]: I0219 18:52:21.702288 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="153c22ed-7e2e-496e-9a13-9ef0ce79efd8" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 18:52:21 crc kubenswrapper[4813]: I0219 18:52:21.702308 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="153c22ed-7e2e-496e-9a13-9ef0ce79efd8" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 18:52:23 crc kubenswrapper[4813]: I0219 18:52:23.438138 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 19 18:52:23 crc kubenswrapper[4813]: I0219 18:52:23.438455 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 19 18:52:24 crc kubenswrapper[4813]: I0219 18:52:24.456160 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c389f482-3000-4f94-924d-158c9d51a2e9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 18:52:24 crc kubenswrapper[4813]: I0219 18:52:24.456153 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c389f482-3000-4f94-924d-158c9d51a2e9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 18:52:25 crc kubenswrapper[4813]: I0219 18:52:25.465267 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 19 18:52:25 crc kubenswrapper[4813]: I0219 18:52:25.513184 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 19 18:52:26 crc kubenswrapper[4813]: I0219 18:52:26.230739 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 19 18:52:28 crc kubenswrapper[4813]: I0219 18:52:28.247150 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 19 18:52:30 crc kubenswrapper[4813]: I0219 18:52:30.329940 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 18:52:30 crc kubenswrapper[4813]: I0219 18:52:30.330304 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 18:52:30 crc kubenswrapper[4813]: I0219 18:52:30.696308 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 19 18:52:30 crc kubenswrapper[4813]: I0219 18:52:30.697005 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 19 18:52:30 crc kubenswrapper[4813]: I0219 18:52:30.700860 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 19 18:52:30 crc kubenswrapper[4813]: I0219 18:52:30.704942 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 19 18:52:31 crc kubenswrapper[4813]: I0219 18:52:31.229702 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 19 18:52:31 crc kubenswrapper[4813]: I0219 18:52:31.237156 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 19 18:52:33 crc kubenswrapper[4813]: I0219 18:52:33.446375 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 19 18:52:33 crc kubenswrapper[4813]: I0219 18:52:33.446833 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 19 18:52:33 crc kubenswrapper[4813]: I0219 18:52:33.454695 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 19 18:52:33 crc kubenswrapper[4813]: I0219 18:52:33.455615 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.172030 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2kbrh"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.173934 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2kbrh"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.182298 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.213603 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2kbrh"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.281521 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kcrh\" (UniqueName: \"kubernetes.io/projected/0055f8bf-7085-42c5-86d4-9cee7033a7d1-kube-api-access-8kcrh\") pod \"root-account-create-update-2kbrh\" (UID: \"0055f8bf-7085-42c5-86d4-9cee7033a7d1\") " pod="openstack/root-account-create-update-2kbrh"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.281581 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0055f8bf-7085-42c5-86d4-9cee7033a7d1-operator-scripts\") pod \"root-account-create-update-2kbrh\" (UID: \"0055f8bf-7085-42c5-86d4-9cee7033a7d1\") " pod="openstack/root-account-create-update-2kbrh"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.328324 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.328567 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="786c324f-42b0-4099-adf7-3926fae87308" containerName="openstackclient" containerID="cri-o://dcb3929d2e5414c8c6857fa34323fe5cbaa913507bb9f15d4791c4c3a55cd8b7" gracePeriod=2
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.346099 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.363020 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-s59x2"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.380026 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-s59x2"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.384427 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kcrh\" (UniqueName: \"kubernetes.io/projected/0055f8bf-7085-42c5-86d4-9cee7033a7d1-kube-api-access-8kcrh\") pod \"root-account-create-update-2kbrh\" (UID: \"0055f8bf-7085-42c5-86d4-9cee7033a7d1\") " pod="openstack/root-account-create-update-2kbrh"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.386273 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0055f8bf-7085-42c5-86d4-9cee7033a7d1-operator-scripts\") pod \"root-account-create-update-2kbrh\" (UID: \"0055f8bf-7085-42c5-86d4-9cee7033a7d1\") " pod="openstack/root-account-create-update-2kbrh"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.388857 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0055f8bf-7085-42c5-86d4-9cee7033a7d1-operator-scripts\") pod \"root-account-create-update-2kbrh\" (UID: \"0055f8bf-7085-42c5-86d4-9cee7033a7d1\") " pod="openstack/root-account-create-update-2kbrh"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.395012 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.395290 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="04885075-4d84-445b-b7c8-b6afaeb71600" containerName="cinder-scheduler" containerID="cri-o://13cc33bfb33bb924ac9f9d0035a948ba6c69e8d3c3ca76d6f358973425452794" gracePeriod=30
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.395827 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="04885075-4d84-445b-b7c8-b6afaeb71600" containerName="probe" containerID="cri-o://a75fc405c7da94e09b4fc05ed5bb23e7a20d02a1e8bcaeab522029453f4a2393" gracePeriod=30
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.411023 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.427251 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kcrh\" (UniqueName: \"kubernetes.io/projected/0055f8bf-7085-42c5-86d4-9cee7033a7d1-kube-api-access-8kcrh\") pod \"root-account-create-update-2kbrh\" (UID: \"0055f8bf-7085-42c5-86d4-9cee7033a7d1\") " pod="openstack/root-account-create-update-2kbrh"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.427575 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d489-account-create-update-7mn8q"]
Feb 19 18:52:49 crc kubenswrapper[4813]: E0219 18:52:49.428005 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786c324f-42b0-4099-adf7-3926fae87308" containerName="openstackclient"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.428017 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="786c324f-42b0-4099-adf7-3926fae87308" containerName="openstackclient"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.428189 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="786c324f-42b0-4099-adf7-3926fae87308" containerName="openstackclient"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.428778 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d489-account-create-update-7mn8q"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.433216 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.490842 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2kbrh"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.515925 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15bef837-ce66-495d-8b85-f072341093ee" path="/var/lib/kubelet/pods/15bef837-ce66-495d-8b85-f072341093ee/volumes"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.525111 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d489-account-create-update-7mn8q"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.525222 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-570b-account-create-update-h7vw7"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.529130 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-570b-account-create-update-h7vw7"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.538589 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.559316 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-570b-account-create-update-h7vw7"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.598936 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcgwv\" (UniqueName: \"kubernetes.io/projected/2ed035dd-719a-45e9-825f-4ce51dbb9866-kube-api-access-xcgwv\") pod \"placement-d489-account-create-update-7mn8q\" (UID: \"2ed035dd-719a-45e9-825f-4ce51dbb9866\") " pod="openstack/placement-d489-account-create-update-7mn8q"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.598996 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ed035dd-719a-45e9-825f-4ce51dbb9866-operator-scripts\") pod \"placement-d489-account-create-update-7mn8q\" (UID: \"2ed035dd-719a-45e9-825f-4ce51dbb9866\") " pod="openstack/placement-d489-account-create-update-7mn8q"
Feb 19 18:52:49 crc kubenswrapper[4813]: E0219 18:52:49.605206 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Feb 19 18:52:49 crc kubenswrapper[4813]: E0219 18:52:49.605308 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data podName:c69ff3db-8806-451a-9df0-c6289c327579 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:50.105288725 +0000 UTC m=+1389.330729266 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data") pod "rabbitmq-server-0" (UID: "c69ff3db-8806-451a-9df0-c6289c327579") : configmap "rabbitmq-config-data" not found
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.623314 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.623594 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6ad2b86e-f285-4acc-a87b-18f97baf0294" containerName="cinder-api-log" containerID="cri-o://4710529a2f6d6a9963aa3e46758c6cb9e333d01ce11646280e8ee29697fa5528" gracePeriod=30
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.624002 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6ad2b86e-f285-4acc-a87b-18f97baf0294" containerName="cinder-api" containerID="cri-o://e4b7cafb1bd9bfd44873b9606ca95255b1dc06343b23e514ba497ba165a365d6" gracePeriod=30
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.651732 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cde0-account-create-update-b8qsr"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.652892 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cde0-account-create-update-b8qsr"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.680647 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.699667 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f626-account-create-update-pclhq"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.700834 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f626-account-create-update-pclhq"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.701568 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ed035dd-719a-45e9-825f-4ce51dbb9866-operator-scripts\") pod \"placement-d489-account-create-update-7mn8q\" (UID: \"2ed035dd-719a-45e9-825f-4ce51dbb9866\") " pod="openstack/placement-d489-account-create-update-7mn8q"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.701619 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1df3c12-83d7-4cf9-8f00-8cbd54be355f-operator-scripts\") pod \"cinder-570b-account-create-update-h7vw7\" (UID: \"e1df3c12-83d7-4cf9-8f00-8cbd54be355f\") " pod="openstack/cinder-570b-account-create-update-h7vw7"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.701666 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2llgw\" (UniqueName: \"kubernetes.io/projected/e1df3c12-83d7-4cf9-8f00-8cbd54be355f-kube-api-access-2llgw\") pod \"cinder-570b-account-create-update-h7vw7\" (UID: \"e1df3c12-83d7-4cf9-8f00-8cbd54be355f\") " pod="openstack/cinder-570b-account-create-update-h7vw7"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.701788 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcgwv\" (UniqueName: \"kubernetes.io/projected/2ed035dd-719a-45e9-825f-4ce51dbb9866-kube-api-access-xcgwv\") pod \"placement-d489-account-create-update-7mn8q\" (UID: \"2ed035dd-719a-45e9-825f-4ce51dbb9866\") " pod="openstack/placement-d489-account-create-update-7mn8q"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.702639 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ed035dd-719a-45e9-825f-4ce51dbb9866-operator-scripts\") pod \"placement-d489-account-create-update-7mn8q\" (UID: \"2ed035dd-719a-45e9-825f-4ce51dbb9866\") " pod="openstack/placement-d489-account-create-update-7mn8q"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.720787 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.742872 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-345c-account-create-update-wtmbd"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.744081 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-345c-account-create-update-wtmbd"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.768204 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cde0-account-create-update-b8qsr"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.768270 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.779691 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcgwv\" (UniqueName: \"kubernetes.io/projected/2ed035dd-719a-45e9-825f-4ce51dbb9866-kube-api-access-xcgwv\") pod \"placement-d489-account-create-update-7mn8q\" (UID: \"2ed035dd-719a-45e9-825f-4ce51dbb9866\") " pod="openstack/placement-d489-account-create-update-7mn8q"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.783883 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f626-account-create-update-pclhq"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.802981 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lgch\" (UniqueName: \"kubernetes.io/projected/79168133-5b77-4689-89b3-5f15fa765750-kube-api-access-5lgch\") pod \"glance-f626-account-create-update-pclhq\" (UID: \"79168133-5b77-4689-89b3-5f15fa765750\") " pod="openstack/glance-f626-account-create-update-pclhq"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.803045 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2llgw\" (UniqueName: \"kubernetes.io/projected/e1df3c12-83d7-4cf9-8f00-8cbd54be355f-kube-api-access-2llgw\") pod \"cinder-570b-account-create-update-h7vw7\" (UID: \"e1df3c12-83d7-4cf9-8f00-8cbd54be355f\") " pod="openstack/cinder-570b-account-create-update-h7vw7"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.803090 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj6tx\" (UniqueName: \"kubernetes.io/projected/51a5d629-ac3a-4046-be99-01b665dce3ac-kube-api-access-dj6tx\") pod \"neutron-cde0-account-create-update-b8qsr\" (UID: \"51a5d629-ac3a-4046-be99-01b665dce3ac\") " pod="openstack/neutron-cde0-account-create-update-b8qsr"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.803181 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51a5d629-ac3a-4046-be99-01b665dce3ac-operator-scripts\") pod \"neutron-cde0-account-create-update-b8qsr\" (UID: \"51a5d629-ac3a-4046-be99-01b665dce3ac\") " pod="openstack/neutron-cde0-account-create-update-b8qsr"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.803214 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79168133-5b77-4689-89b3-5f15fa765750-operator-scripts\") pod \"glance-f626-account-create-update-pclhq\" (UID: \"79168133-5b77-4689-89b3-5f15fa765750\") " pod="openstack/glance-f626-account-create-update-pclhq"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.803254 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1df3c12-83d7-4cf9-8f00-8cbd54be355f-operator-scripts\") pod \"cinder-570b-account-create-update-h7vw7\" (UID: \"e1df3c12-83d7-4cf9-8f00-8cbd54be355f\") " pod="openstack/cinder-570b-account-create-update-h7vw7"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.804122 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1df3c12-83d7-4cf9-8f00-8cbd54be355f-operator-scripts\") pod \"cinder-570b-account-create-update-h7vw7\" (UID: \"e1df3c12-83d7-4cf9-8f00-8cbd54be355f\") " pod="openstack/cinder-570b-account-create-update-h7vw7"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.809446 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d489-account-create-update-7mn8q"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.830978 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d489-account-create-update-xvmmd"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.861304 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d489-account-create-update-xvmmd"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.877010 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-c594-account-create-update-xlflc"]
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.878582 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c594-account-create-update-xlflc"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.950203 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.957525 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51a5d629-ac3a-4046-be99-01b665dce3ac-operator-scripts\") pod \"neutron-cde0-account-create-update-b8qsr\" (UID: \"51a5d629-ac3a-4046-be99-01b665dce3ac\") " pod="openstack/neutron-cde0-account-create-update-b8qsr"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.957587 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-558sp\" (UniqueName: \"kubernetes.io/projected/acbc878e-b3ae-49db-8fca-1300efb20564-kube-api-access-558sp\") pod \"nova-api-345c-account-create-update-wtmbd\" (UID: \"acbc878e-b3ae-49db-8fca-1300efb20564\") " pod="openstack/nova-api-345c-account-create-update-wtmbd"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.957610 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acbc878e-b3ae-49db-8fca-1300efb20564-operator-scripts\") pod \"nova-api-345c-account-create-update-wtmbd\" (UID: \"acbc878e-b3ae-49db-8fca-1300efb20564\") " pod="openstack/nova-api-345c-account-create-update-wtmbd"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.957635 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79168133-5b77-4689-89b3-5f15fa765750-operator-scripts\") pod \"glance-f626-account-create-update-pclhq\" (UID: \"79168133-5b77-4689-89b3-5f15fa765750\") " pod="openstack/glance-f626-account-create-update-pclhq"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.960989 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lgch\" (UniqueName: \"kubernetes.io/projected/79168133-5b77-4689-89b3-5f15fa765750-kube-api-access-5lgch\") pod \"glance-f626-account-create-update-pclhq\" (UID: \"79168133-5b77-4689-89b3-5f15fa765750\") " pod="openstack/glance-f626-account-create-update-pclhq"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.961096 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj6tx\" (UniqueName: \"kubernetes.io/projected/51a5d629-ac3a-4046-be99-01b665dce3ac-kube-api-access-dj6tx\") pod \"neutron-cde0-account-create-update-b8qsr\" (UID: \"51a5d629-ac3a-4046-be99-01b665dce3ac\") " pod="openstack/neutron-cde0-account-create-update-b8qsr"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.962135 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79168133-5b77-4689-89b3-5f15fa765750-operator-scripts\") pod \"glance-f626-account-create-update-pclhq\" (UID: \"79168133-5b77-4689-89b3-5f15fa765750\") " pod="openstack/glance-f626-account-create-update-pclhq"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.966410 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2llgw\" (UniqueName: \"kubernetes.io/projected/e1df3c12-83d7-4cf9-8f00-8cbd54be355f-kube-api-access-2llgw\") pod \"cinder-570b-account-create-update-h7vw7\" (UID: \"e1df3c12-83d7-4cf9-8f00-8cbd54be355f\") " pod="openstack/cinder-570b-account-create-update-h7vw7"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.966797 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51a5d629-ac3a-4046-be99-01b665dce3ac-operator-scripts\") pod \"neutron-cde0-account-create-update-b8qsr\" (UID: \"51a5d629-ac3a-4046-be99-01b665dce3ac\") " pod="openstack/neutron-cde0-account-create-update-b8qsr"
Feb 19 18:52:49 crc kubenswrapper[4813]: I0219 18:52:49.994271 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lgch\" (UniqueName: \"kubernetes.io/projected/79168133-5b77-4689-89b3-5f15fa765750-kube-api-access-5lgch\") pod \"glance-f626-account-create-update-pclhq\" (UID: \"79168133-5b77-4689-89b3-5f15fa765750\") " pod="openstack/glance-f626-account-create-update-pclhq"
Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.034528 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj6tx\" (UniqueName: \"kubernetes.io/projected/51a5d629-ac3a-4046-be99-01b665dce3ac-kube-api-access-dj6tx\") pod \"neutron-cde0-account-create-update-b8qsr\" (UID: \"51a5d629-ac3a-4046-be99-01b665dce3ac\") " pod="openstack/neutron-cde0-account-create-update-b8qsr"
Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.113987 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f626-account-create-update-pclhq" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.115302 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-262z5\" (UniqueName: \"kubernetes.io/projected/c3275e7f-f99c-431a-a777-ea9a62895faa-kube-api-access-262z5\") pod \"nova-cell0-c594-account-create-update-xlflc\" (UID: \"c3275e7f-f99c-431a-a777-ea9a62895faa\") " pod="openstack/nova-cell0-c594-account-create-update-xlflc" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.115421 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-558sp\" (UniqueName: \"kubernetes.io/projected/acbc878e-b3ae-49db-8fca-1300efb20564-kube-api-access-558sp\") pod \"nova-api-345c-account-create-update-wtmbd\" (UID: \"acbc878e-b3ae-49db-8fca-1300efb20564\") " pod="openstack/nova-api-345c-account-create-update-wtmbd" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.115443 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acbc878e-b3ae-49db-8fca-1300efb20564-operator-scripts\") pod \"nova-api-345c-account-create-update-wtmbd\" (UID: \"acbc878e-b3ae-49db-8fca-1300efb20564\") " pod="openstack/nova-api-345c-account-create-update-wtmbd" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.115466 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3275e7f-f99c-431a-a777-ea9a62895faa-operator-scripts\") pod \"nova-cell0-c594-account-create-update-xlflc\" (UID: \"c3275e7f-f99c-431a-a777-ea9a62895faa\") " pod="openstack/nova-cell0-c594-account-create-update-xlflc" Feb 19 18:52:50 crc kubenswrapper[4813]: E0219 18:52:50.115664 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" 
not found Feb 19 18:52:50 crc kubenswrapper[4813]: E0219 18:52:50.115715 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data podName:c69ff3db-8806-451a-9df0-c6289c327579 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:51.115698457 +0000 UTC m=+1390.341138998 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data") pod "rabbitmq-server-0" (UID: "c69ff3db-8806-451a-9df0-c6289c327579") : configmap "rabbitmq-config-data" not found Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.127038 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-345c-account-create-update-wtmbd"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.129218 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acbc878e-b3ae-49db-8fca-1300efb20564-operator-scripts\") pod \"nova-api-345c-account-create-update-wtmbd\" (UID: \"acbc878e-b3ae-49db-8fca-1300efb20564\") " pod="openstack/nova-api-345c-account-create-update-wtmbd" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.154444 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c594-account-create-update-xlflc"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.185419 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-570b-account-create-update-h7vw7" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.217963 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3275e7f-f99c-431a-a777-ea9a62895faa-operator-scripts\") pod \"nova-cell0-c594-account-create-update-xlflc\" (UID: \"c3275e7f-f99c-431a-a777-ea9a62895faa\") " pod="openstack/nova-cell0-c594-account-create-update-xlflc" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.218103 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-262z5\" (UniqueName: \"kubernetes.io/projected/c3275e7f-f99c-431a-a777-ea9a62895faa-kube-api-access-262z5\") pod \"nova-cell0-c594-account-create-update-xlflc\" (UID: \"c3275e7f-f99c-431a-a777-ea9a62895faa\") " pod="openstack/nova-cell0-c594-account-create-update-xlflc" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.219014 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3275e7f-f99c-431a-a777-ea9a62895faa-operator-scripts\") pod \"nova-cell0-c594-account-create-update-xlflc\" (UID: \"c3275e7f-f99c-431a-a777-ea9a62895faa\") " pod="openstack/nova-cell0-c594-account-create-update-xlflc" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.242008 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cde0-account-create-update-pd9m9"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.268022 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cde0-account-create-update-pd9m9"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.281849 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-570b-account-create-update-kslzx"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.324194 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cde0-account-create-update-b8qsr" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.339112 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4d36-account-create-update-tpk8j"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.340784 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-262z5\" (UniqueName: \"kubernetes.io/projected/c3275e7f-f99c-431a-a777-ea9a62895faa-kube-api-access-262z5\") pod \"nova-cell0-c594-account-create-update-xlflc\" (UID: \"c3275e7f-f99c-431a-a777-ea9a62895faa\") " pod="openstack/nova-cell0-c594-account-create-update-xlflc" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.341760 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4d36-account-create-update-tpk8j" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.375513 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.375593 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-570b-account-create-update-kslzx"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.403373 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4d36-account-create-update-tpk8j"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.432880 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.433165 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="25674cd0-03bb-481f-b039-b3b1db4ea1d4" containerName="openstack-network-exporter" containerID="cri-o://0d9e225c56d6e81840554a63919cdbe4f5142d75bbdc351a40de2802bf43b99e" gracePeriod=300 Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.464445 4813 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.464974 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="8c675ac6-fe1c-42a1-b67e-f958aef3c086" containerName="openstack-network-exporter" containerID="cri-o://ef8054e602e4f8d1b1826408ddcea8dc3c850848bc422d974dd7ed560d2484d9" gracePeriod=30 Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.464655 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="8c675ac6-fe1c-42a1-b67e-f958aef3c086" containerName="ovn-northd" containerID="cri-o://9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1" gracePeriod=30 Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.482055 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.482672 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="b3304ea7-bdca-4b4c-b290-07eacbc6a646" containerName="openstack-network-exporter" containerID="cri-o://fdb0030cda574aea48f90a9d627c54cbe19111ce3ca5470320cb6b9f671e12fe" gracePeriod=300 Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.535546 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-558sp\" (UniqueName: \"kubernetes.io/projected/acbc878e-b3ae-49db-8fca-1300efb20564-kube-api-access-558sp\") pod \"nova-api-345c-account-create-update-wtmbd\" (UID: \"acbc878e-b3ae-49db-8fca-1300efb20564\") " pod="openstack/nova-api-345c-account-create-update-wtmbd" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.537225 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx4vd\" (UniqueName: 
\"kubernetes.io/projected/f10d113b-edb7-4a73-ada0-659e82a43e84-kube-api-access-tx4vd\") pod \"nova-cell1-4d36-account-create-update-tpk8j\" (UID: \"f10d113b-edb7-4a73-ada0-659e82a43e84\") " pod="openstack/nova-cell1-4d36-account-create-update-tpk8j" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.537327 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f10d113b-edb7-4a73-ada0-659e82a43e84-operator-scripts\") pod \"nova-cell1-4d36-account-create-update-tpk8j\" (UID: \"f10d113b-edb7-4a73-ada0-659e82a43e84\") " pod="openstack/nova-cell1-4d36-account-create-update-tpk8j" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.569149 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f626-account-create-update-jp84f"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.574431 4813 generic.go:334] "Generic (PLEG): container finished" podID="6ad2b86e-f285-4acc-a87b-18f97baf0294" containerID="4710529a2f6d6a9963aa3e46758c6cb9e333d01ce11646280e8ee29697fa5528" exitCode=143 Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.574466 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6ad2b86e-f285-4acc-a87b-18f97baf0294","Type":"ContainerDied","Data":"4710529a2f6d6a9963aa3e46758c6cb9e333d01ce11646280e8ee29697fa5528"} Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.605070 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-345c-account-create-update-49nsj"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.621381 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c594-account-create-update-xlflc" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.640239 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx4vd\" (UniqueName: \"kubernetes.io/projected/f10d113b-edb7-4a73-ada0-659e82a43e84-kube-api-access-tx4vd\") pod \"nova-cell1-4d36-account-create-update-tpk8j\" (UID: \"f10d113b-edb7-4a73-ada0-659e82a43e84\") " pod="openstack/nova-cell1-4d36-account-create-update-tpk8j" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.640537 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f10d113b-edb7-4a73-ada0-659e82a43e84-operator-scripts\") pod \"nova-cell1-4d36-account-create-update-tpk8j\" (UID: \"f10d113b-edb7-4a73-ada0-659e82a43e84\") " pod="openstack/nova-cell1-4d36-account-create-update-tpk8j" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.641614 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f10d113b-edb7-4a73-ada0-659e82a43e84-operator-scripts\") pod \"nova-cell1-4d36-account-create-update-tpk8j\" (UID: \"f10d113b-edb7-4a73-ada0-659e82a43e84\") " pod="openstack/nova-cell1-4d36-account-create-update-tpk8j" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.676051 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f626-account-create-update-jp84f"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.712758 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx4vd\" (UniqueName: \"kubernetes.io/projected/f10d113b-edb7-4a73-ada0-659e82a43e84-kube-api-access-tx4vd\") pod \"nova-cell1-4d36-account-create-update-tpk8j\" (UID: \"f10d113b-edb7-4a73-ada0-659e82a43e84\") " pod="openstack/nova-cell1-4d36-account-create-update-tpk8j" Feb 19 18:52:50 crc kubenswrapper[4813]: 
I0219 18:52:50.752057 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-345c-account-create-update-49nsj"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.753249 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-345c-account-create-update-wtmbd" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.779117 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-9s5kt"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.795414 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4d36-account-create-update-tpk8j" Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.804751 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-c594-account-create-update-5gz96"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.856082 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4d36-account-create-update-ksf9n"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.878634 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-9s5kt"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.911989 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-c594-account-create-update-5gz96"] Feb 19 18:52:50 crc kubenswrapper[4813]: I0219 18:52:50.925475 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="25674cd0-03bb-481f-b039-b3b1db4ea1d4" containerName="ovsdbserver-sb" containerID="cri-o://8502c4b79b14dd3ab247b7d4c9c00c1dcc9265447761cd275f6e5165983c1b3d" gracePeriod=300 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:50.957831 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4d36-account-create-update-ksf9n"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.003300 4813 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-nnwfh"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.075633 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-nnwfh"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.135097 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vr4rs"] Feb 19 18:52:52 crc kubenswrapper[4813]: E0219 18:52:51.157067 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 19 18:52:52 crc kubenswrapper[4813]: E0219 18:52:51.157119 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data podName:c69ff3db-8806-451a-9df0-c6289c327579 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:53.157104496 +0000 UTC m=+1392.382545037 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data") pod "rabbitmq-server-0" (UID: "c69ff3db-8806-451a-9df0-c6289c327579") : configmap "rabbitmq-config-data" not found Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.232261 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-85btv"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.232466 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-85btv" podUID="c8734aca-5b08-4847-b485-1d31add9fba1" containerName="openstack-network-exporter" containerID="cri-o://3c2c71c20556e819af235dccf56fcb9253ca0ffc4dcbeab4a4b3cac3cd471eda" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.298527 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="b3304ea7-bdca-4b4c-b290-07eacbc6a646" containerName="ovsdbserver-nb" 
containerID="cri-o://d7b8203f3013dcbfda0d8e5f52cf6970deeae02a6a776d49e0ee9660ef66cb59" gracePeriod=300 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.298924 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-w7s6z"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.313793 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-lc2jp"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.352850 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-lc2jp"] Feb 19 18:52:52 crc kubenswrapper[4813]: E0219 18:52:51.420895 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 18:52:52 crc kubenswrapper[4813]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: if [ -n "placement" ]; then Feb 19 18:52:52 crc kubenswrapper[4813]: GRANT_DATABASE="placement" Feb 19 18:52:52 crc kubenswrapper[4813]: else Feb 19 18:52:52 crc kubenswrapper[4813]: GRANT_DATABASE="*" Feb 19 18:52:52 crc kubenswrapper[4813]: fi Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: # going for maximum compatibility here: Feb 19 18:52:52 crc kubenswrapper[4813]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 18:52:52 crc kubenswrapper[4813]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 18:52:52 crc kubenswrapper[4813]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 19 18:52:52 crc kubenswrapper[4813]: # support updates Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: $MYSQL_CMD < logger="UnhandledError" Feb 19 18:52:52 crc kubenswrapper[4813]: E0219 18:52:51.424945 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-d489-account-create-update-7mn8q" podUID="2ed035dd-719a-45e9-825f-4ce51dbb9866" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.480141 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-fbp7p"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.658811 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03736951-c024-46d1-90b1-dea0d3f528aa" path="/var/lib/kubelet/pods/03736951-c024-46d1-90b1-dea0d3f528aa/volumes" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.660480 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27d0219c-fd97-4271-9059-0903aae52f65" path="/var/lib/kubelet/pods/27d0219c-fd97-4271-9059-0903aae52f65/volumes" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.663239 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6899daa4-6104-4900-ab52-6ffaeff57788" path="/var/lib/kubelet/pods/6899daa4-6104-4900-ab52-6ffaeff57788/volumes" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.663870 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a87080d3-007c-48e0-aa89-b82c5d9dafab" path="/var/lib/kubelet/pods/a87080d3-007c-48e0-aa89-b82c5d9dafab/volumes" Feb 
19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.664730 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af0adc50-6632-4b6f-a01f-56314538b07b" path="/var/lib/kubelet/pods/af0adc50-6632-4b6f-a01f-56314538b07b/volumes" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.665242 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b43b69b2-f014-47e0-a8a7-acb5445dff51" path="/var/lib/kubelet/pods/b43b69b2-f014-47e0-a8a7-acb5445dff51/volumes" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.666164 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be7c6283-3a04-4b6b-9419-82b4e91909bb" path="/var/lib/kubelet/pods/be7c6283-3a04-4b6b-9419-82b4e91909bb/volumes" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.666670 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3296d69-fa27-44f5-89a4-5122c3662dc5" path="/var/lib/kubelet/pods/d3296d69-fa27-44f5-89a4-5122c3662dc5/volumes" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.667656 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d622858a-0915-43b1-9169-8f176f0b16f0" path="/var/lib/kubelet/pods/d622858a-0915-43b1-9169-8f176f0b16f0/volumes" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.675909 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd93a436-429b-4ec3-8c52-f5a50c1d1ae2" path="/var/lib/kubelet/pods/fd93a436-429b-4ec3-8c52-f5a50c1d1ae2/volumes" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.688163 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-fbp7p"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.689435 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2kbrh"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.719015 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-xg95m"] Feb 19 18:52:52 crc 
kubenswrapper[4813]: I0219 18:52:51.723480 4813 generic.go:334] "Generic (PLEG): container finished" podID="786c324f-42b0-4099-adf7-3926fae87308" containerID="dcb3929d2e5414c8c6857fa34323fe5cbaa913507bb9f15d4791c4c3a55cd8b7" exitCode=137 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.727784 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.745454 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-xg95m"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.770101 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-n9xc4"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.779751 4813 generic.go:334] "Generic (PLEG): container finished" podID="8c675ac6-fe1c-42a1-b67e-f958aef3c086" containerID="ef8054e602e4f8d1b1826408ddcea8dc3c850848bc422d974dd7ed560d2484d9" exitCode=2 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.779848 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8c675ac6-fe1c-42a1-b67e-f958aef3c086","Type":"ContainerDied","Data":"ef8054e602e4f8d1b1826408ddcea8dc3c850848bc422d974dd7ed560d2484d9"} Feb 19 18:52:52 crc kubenswrapper[4813]: E0219 18:52:51.782256 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 19 18:52:52 crc kubenswrapper[4813]: E0219 18:52:51.782298 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data podName:db22a584-f05a-41ba-ad23-387b4100a9e1 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:52.282282948 +0000 UTC m=+1391.507723479 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data") pod "rabbitmq-cell1-server-0" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1") : configmap "rabbitmq-cell1-config-data" not found Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.788129 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_25674cd0-03bb-481f-b039-b3b1db4ea1d4/ovsdbserver-sb/0.log" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.788165 4813 generic.go:334] "Generic (PLEG): container finished" podID="25674cd0-03bb-481f-b039-b3b1db4ea1d4" containerID="0d9e225c56d6e81840554a63919cdbe4f5142d75bbdc351a40de2802bf43b99e" exitCode=2 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.788181 4813 generic.go:334] "Generic (PLEG): container finished" podID="25674cd0-03bb-481f-b039-b3b1db4ea1d4" containerID="8502c4b79b14dd3ab247b7d4c9c00c1dcc9265447761cd275f6e5165983c1b3d" exitCode=143 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.788248 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"25674cd0-03bb-481f-b039-b3b1db4ea1d4","Type":"ContainerDied","Data":"0d9e225c56d6e81840554a63919cdbe4f5142d75bbdc351a40de2802bf43b99e"} Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.788299 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"25674cd0-03bb-481f-b039-b3b1db4ea1d4","Type":"ContainerDied","Data":"8502c4b79b14dd3ab247b7d4c9c00c1dcc9265447761cd275f6e5165983c1b3d"} Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.800713 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-n9xc4"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.816712 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d489-account-create-update-7mn8q" 
event={"ID":"2ed035dd-719a-45e9-825f-4ce51dbb9866","Type":"ContainerStarted","Data":"d93e6656c6369a1270347b793c8072d0a5bc7cbb8fe07f23a7b32e75ac306789"} Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.832441 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b3304ea7-bdca-4b4c-b290-07eacbc6a646/ovsdbserver-nb/0.log" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.832671 4813 generic.go:334] "Generic (PLEG): container finished" podID="b3304ea7-bdca-4b4c-b290-07eacbc6a646" containerID="fdb0030cda574aea48f90a9d627c54cbe19111ce3ca5470320cb6b9f671e12fe" exitCode=2 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.832688 4813 generic.go:334] "Generic (PLEG): container finished" podID="b3304ea7-bdca-4b4c-b290-07eacbc6a646" containerID="d7b8203f3013dcbfda0d8e5f52cf6970deeae02a6a776d49e0ee9660ef66cb59" exitCode=143 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.832742 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b3304ea7-bdca-4b4c-b290-07eacbc6a646","Type":"ContainerDied","Data":"fdb0030cda574aea48f90a9d627c54cbe19111ce3ca5470320cb6b9f671e12fe"} Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.832766 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b3304ea7-bdca-4b4c-b290-07eacbc6a646","Type":"ContainerDied","Data":"d7b8203f3013dcbfda0d8e5f52cf6970deeae02a6a776d49e0ee9660ef66cb59"} Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.844139 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.844896 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-server" containerID="cri-o://bbf92d00811dd2fb35c3895f4c863c12eb1772bf8d221d1a27c08caefbd5bf0f" gracePeriod=30 Feb 19 18:52:52 crc 
kubenswrapper[4813]: I0219 18:52:51.845026 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="swift-recon-cron" containerID="cri-o://7fd9247f2c24fe230ceea2d4153e592d608389bd4aeb769e1c50f8d5b7ce6c1f" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.845063 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="rsync" containerID="cri-o://c1d32779936dc64abfb1211d92a86919ed65d162961f758373172e5d8896b226" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.845094 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-expirer" containerID="cri-o://3463fd0ce80861bf450484b04bd0d12e36c7e89f7201fbf18492f7f3c1961d7e" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.845128 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-updater" containerID="cri-o://63ed63fc4060e5e77bda2fcedba7a9038be456080560f0972c1162295667a995" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.845166 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-auditor" containerID="cri-o://ecdd61b7c76fea90d5030cd83e27eede68a2613e82c87d74bf1b52eae40ede4c" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.845196 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-replicator" 
containerID="cri-o://b247b4f4fb5c13c744d6518cac9721377f2279093c96dcef4c1fb84f402a96fe" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.845228 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-server" containerID="cri-o://fe2ab5db7a8282b411ff6c2df1477d67ccefc71efdf7f49af3550ca28e6ca30f" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.845257 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-updater" containerID="cri-o://5e213ff63ffa13d3db8cd64214ffc25c184324e7bcb422854cb7d5544e995e3a" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.845291 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-auditor" containerID="cri-o://3f73192612ae69b540b4da233b53fc0e0246e8770b1168eaa09e2818d0475b0b" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.845317 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-replicator" containerID="cri-o://8a3df88ea660009faf21ceee32e4bf9966f8ba5d8d06da52cc8eed2e8281585b" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.845343 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-server" containerID="cri-o://8e7a00f1985307d87ed43318441bfb1dd483cd2506f6ccb527a5a1f992665d68" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.845370 4813 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/swift-storage-0" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-reaper" containerID="cri-o://c810d20c3f622957076f0b197785575897ec5b3aab666c3929ba959f136bbac7" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.845409 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-auditor" containerID="cri-o://173ce911f070f154fad4553f907b72434be2bbf4d9cc33dc8c7d8f6176342729" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.845435 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-replicator" containerID="cri-o://733548779f14379f65bf0cb205274b2eac4a5ab44281cf9b72a9a0a27ccb488b" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.862910 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-85btv_c8734aca-5b08-4847-b485-1d31add9fba1/openstack-network-exporter/0.log" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.862945 4813 generic.go:334] "Generic (PLEG): container finished" podID="c8734aca-5b08-4847-b485-1d31add9fba1" containerID="3c2c71c20556e819af235dccf56fcb9253ca0ffc4dcbeab4a4b3cac3cd471eda" exitCode=2 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.863021 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-85btv" event={"ID":"c8734aca-5b08-4847-b485-1d31add9fba1","Type":"ContainerDied","Data":"3c2c71c20556e819af235dccf56fcb9253ca0ffc4dcbeab4a4b3cac3cd471eda"} Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.880300 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2kbrh" 
event={"ID":"0055f8bf-7085-42c5-86d4-9cee7033a7d1","Type":"ContainerStarted","Data":"3a43e478c1966d3727ef30c1ebfeb4bfd4d6ad484032c1319713b57514a654f4"} Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.884788 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-5knnd"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.917131 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-5knnd"] Feb 19 18:52:52 crc kubenswrapper[4813]: E0219 18:52:51.926930 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 18:52:52 crc kubenswrapper[4813]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: if [ -n "placement" ]; then Feb 19 18:52:52 crc kubenswrapper[4813]: GRANT_DATABASE="placement" Feb 19 18:52:52 crc kubenswrapper[4813]: else Feb 19 18:52:52 crc kubenswrapper[4813]: GRANT_DATABASE="*" Feb 19 18:52:52 crc kubenswrapper[4813]: fi Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: # going for maximum compatibility here: Feb 19 18:52:52 crc kubenswrapper[4813]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 18:52:52 crc kubenswrapper[4813]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 18:52:52 crc kubenswrapper[4813]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 19 18:52:52 crc kubenswrapper[4813]: # support updates Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: $MYSQL_CMD < logger="UnhandledError" Feb 19 18:52:52 crc kubenswrapper[4813]: E0219 18:52:51.929605 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-d489-account-create-update-7mn8q" podUID="2ed035dd-719a-45e9-825f-4ce51dbb9866" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.955717 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-crwbn"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.967035 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-wjwfp"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.977028 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-wjwfp"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.990827 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7677694455-nj2vp"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:51.991086 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7677694455-nj2vp" podUID="279e5b26-1a20-4809-8dfe-ff290d191f38" containerName="dnsmasq-dns" containerID="cri-o://57b25170e965640c68550b7bface39f788f95c0a649a1c484dd11cc853485b0e" gracePeriod=10 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.004058 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-crwbn"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.026167 4813 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/barbican-2083-account-create-update-w69p8"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.057121 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2083-account-create-update-w69p8"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.093039 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d474bcd44-n9tsd"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.093281 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-d474bcd44-n9tsd" podUID="6f4b651a-00cc-4ca7-b49e-713eed4968b9" containerName="placement-log" containerID="cri-o://bf8c06468da19346eec65138cd7874af7a212840ee1fcdf4cb2cd44182970cc3" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.093645 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-d474bcd44-n9tsd" podUID="6f4b651a-00cc-4ca7-b49e-713eed4968b9" containerName="placement-api" containerID="cri-o://61fdb19e85db0c41232232581262b2f03bee939d644f27002a6fbcc6eee839c7" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.099289 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-79884964f7-nvxp2"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.099667 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-79884964f7-nvxp2" podUID="6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" containerName="neutron-api" containerID="cri-o://8d8d3b2b19279b349f178ba59ce9a2b30004d64b0f858bebde8ea20429c2ad81" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.100034 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-79884964f7-nvxp2" podUID="6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" containerName="neutron-httpd" containerID="cri-o://798087b6274eb1a02113a606fd85310be018115d9f4e8a89578ca60d75da110c" gracePeriod=30 
Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.120100 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-tmnlr"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.149061 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-tmnlr"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.160871 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-570b-account-create-update-h7vw7"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.169467 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-pf8pv"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.178098 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mlk69"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.185848 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-pf8pv"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.200327 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mlk69"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.243062 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d489-account-create-update-7mn8q"] Feb 19 18:52:52 crc kubenswrapper[4813]: E0219 18:52:52.306434 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 19 18:52:52 crc kubenswrapper[4813]: E0219 18:52:52.306506 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data podName:db22a584-f05a-41ba-ad23-387b4100a9e1 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:53.306488406 +0000 UTC m=+1392.531928947 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data") pod "rabbitmq-cell1-server-0" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1") : configmap "rabbitmq-cell1-config-data" not found Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.326311 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.326564 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2ea3ff24-c40b-432b-a2f8-522284d17ff0" containerName="glance-log" containerID="cri-o://c739347572fde9810eb2e984054e31d8f98d420e8deb6f11d5a7c741a7bd9e0a" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.326974 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2ea3ff24-c40b-432b-a2f8-522284d17ff0" containerName="glance-httpd" containerID="cri-o://cdcf02dfe81d5acf0aa0015ceb87d0dc57996ba7b45c419fa6770df217892dee" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.365051 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-w7s6z" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovs-vswitchd" containerID="cri-o://e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" gracePeriod=29 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.377503 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d489-account-create-update-7mn8q"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.406744 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.407109 4813 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" containerName="glance-log" containerID="cri-o://912bf4bcdf44e6c87d2ddd9d02005a5efe71719e483128b7f0504e540c94af8f" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.407687 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" containerName="glance-httpd" containerID="cri-o://24d702d7195d76e4a45075dc68672b623d20a1f8acf2e94fc16e64fdd1806db3" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.415907 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cde0-account-create-update-b8qsr"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.428298 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-bf98f678b-j6t6g"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.428514 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" podUID="81790f67-278c-4b6a-82e5-ec5bb521c6ac" containerName="barbican-keystone-listener-log" containerID="cri-o://479709b4af77750e6b92b1b1ddf45a2dfef9f8a888e9bcb59242645580afb4a9" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.428852 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" podUID="81790f67-278c-4b6a-82e5-ec5bb521c6ac" containerName="barbican-keystone-listener" containerID="cri-o://d4ea4dbc1137b15bb3aa4bc83a3b37e503f5c54a481f8b73a7cfd507673900a8" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.438003 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.542675 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-api-7c88676b6d-zlhlk"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.542991 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7c88676b6d-zlhlk" podUID="6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" containerName="barbican-api-log" containerID="cri-o://c497a249844ccadf079da17947184a1a812d81325a2ff39036321f46b9c5c309" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.543527 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7c88676b6d-zlhlk" podUID="6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" containerName="barbican-api" containerID="cri-o://fe9cae1f29fb502eab2ed61c37be245fbecda7cbaa6a4d4b23a769893fe52d66" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.705868 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-9f6f7ccc7-dwqzl"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.706313 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" podUID="5aeb8bcb-4373-48d4-9ac6-e6472189e440" containerName="barbican-worker-log" containerID="cri-o://52e141647daf76f468f278365830d8a4de021205dfe34eb192a73cab648f4e9a" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.706625 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" podUID="5aeb8bcb-4373-48d4-9ac6-e6472189e440" containerName="barbican-worker" containerID="cri-o://7154bb37e5f6fe174542d1e2d97dda065217b924fa83469cc6cbb289e826d07a" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: E0219 18:52:52.722242 4813 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Feb 19 18:52:52 crc kubenswrapper[4813]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname 
/usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 19 18:52:52 crc kubenswrapper[4813]: + source /usr/local/bin/container-scripts/functions Feb 19 18:52:52 crc kubenswrapper[4813]: ++ OVNBridge=br-int Feb 19 18:52:52 crc kubenswrapper[4813]: ++ OVNRemote=tcp:localhost:6642 Feb 19 18:52:52 crc kubenswrapper[4813]: ++ OVNEncapType=geneve Feb 19 18:52:52 crc kubenswrapper[4813]: ++ OVNAvailabilityZones= Feb 19 18:52:52 crc kubenswrapper[4813]: ++ EnableChassisAsGateway=true Feb 19 18:52:52 crc kubenswrapper[4813]: ++ PhysicalNetworks= Feb 19 18:52:52 crc kubenswrapper[4813]: ++ OVNHostName= Feb 19 18:52:52 crc kubenswrapper[4813]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 19 18:52:52 crc kubenswrapper[4813]: ++ ovs_dir=/var/lib/openvswitch Feb 19 18:52:52 crc kubenswrapper[4813]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 19 18:52:52 crc kubenswrapper[4813]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 19 18:52:52 crc kubenswrapper[4813]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 19 18:52:52 crc kubenswrapper[4813]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 19 18:52:52 crc kubenswrapper[4813]: + sleep 0.5 Feb 19 18:52:52 crc kubenswrapper[4813]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 19 18:52:52 crc kubenswrapper[4813]: + sleep 0.5 Feb 19 18:52:52 crc kubenswrapper[4813]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 19 18:52:52 crc kubenswrapper[4813]: + cleanup_ovsdb_server_semaphore Feb 19 18:52:52 crc kubenswrapper[4813]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 19 18:52:52 crc kubenswrapper[4813]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 19 18:52:52 crc kubenswrapper[4813]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-w7s6z" message=< Feb 19 18:52:52 crc kubenswrapper[4813]: Exiting ovsdb-server (5) ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 19 18:52:52 crc kubenswrapper[4813]: + source /usr/local/bin/container-scripts/functions Feb 19 18:52:52 crc kubenswrapper[4813]: ++ OVNBridge=br-int Feb 19 18:52:52 crc kubenswrapper[4813]: ++ OVNRemote=tcp:localhost:6642 Feb 19 18:52:52 crc kubenswrapper[4813]: ++ OVNEncapType=geneve Feb 19 18:52:52 crc kubenswrapper[4813]: ++ OVNAvailabilityZones= Feb 19 18:52:52 crc kubenswrapper[4813]: ++ EnableChassisAsGateway=true Feb 19 18:52:52 crc kubenswrapper[4813]: ++ PhysicalNetworks= Feb 19 18:52:52 crc kubenswrapper[4813]: ++ OVNHostName= Feb 19 18:52:52 crc kubenswrapper[4813]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 19 18:52:52 crc kubenswrapper[4813]: ++ ovs_dir=/var/lib/openvswitch Feb 19 18:52:52 crc kubenswrapper[4813]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 19 18:52:52 crc kubenswrapper[4813]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 19 18:52:52 crc kubenswrapper[4813]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 19 18:52:52 crc kubenswrapper[4813]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 19 18:52:52 crc kubenswrapper[4813]: + sleep 0.5 Feb 19 18:52:52 crc kubenswrapper[4813]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 19 18:52:52 crc kubenswrapper[4813]: + sleep 0.5 Feb 19 18:52:52 crc kubenswrapper[4813]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 19 18:52:52 crc kubenswrapper[4813]: + cleanup_ovsdb_server_semaphore Feb 19 18:52:52 crc kubenswrapper[4813]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 19 18:52:52 crc kubenswrapper[4813]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 19 18:52:52 crc kubenswrapper[4813]: > Feb 19 18:52:52 crc kubenswrapper[4813]: E0219 18:52:52.722278 4813 kuberuntime_container.go:691] "PreStop hook failed" err=< Feb 19 18:52:52 crc kubenswrapper[4813]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 19 18:52:52 crc kubenswrapper[4813]: + source /usr/local/bin/container-scripts/functions Feb 19 18:52:52 crc kubenswrapper[4813]: ++ OVNBridge=br-int Feb 19 18:52:52 crc kubenswrapper[4813]: ++ OVNRemote=tcp:localhost:6642 Feb 19 18:52:52 crc kubenswrapper[4813]: ++ OVNEncapType=geneve Feb 19 18:52:52 crc kubenswrapper[4813]: ++ OVNAvailabilityZones= Feb 19 18:52:52 crc kubenswrapper[4813]: ++ EnableChassisAsGateway=true Feb 19 18:52:52 crc kubenswrapper[4813]: ++ PhysicalNetworks= Feb 19 18:52:52 crc kubenswrapper[4813]: ++ OVNHostName= Feb 19 18:52:52 crc kubenswrapper[4813]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 19 18:52:52 crc kubenswrapper[4813]: ++ ovs_dir=/var/lib/openvswitch Feb 19 18:52:52 crc kubenswrapper[4813]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 19 18:52:52 crc kubenswrapper[4813]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 19 18:52:52 crc kubenswrapper[4813]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 19 18:52:52 crc kubenswrapper[4813]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 19 18:52:52 crc kubenswrapper[4813]: + sleep 0.5 Feb 19 18:52:52 crc kubenswrapper[4813]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 19 18:52:52 crc kubenswrapper[4813]: + sleep 0.5 Feb 19 18:52:52 crc kubenswrapper[4813]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 19 18:52:52 crc kubenswrapper[4813]: + cleanup_ovsdb_server_semaphore Feb 19 18:52:52 crc kubenswrapper[4813]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 19 18:52:52 crc kubenswrapper[4813]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 19 18:52:52 crc kubenswrapper[4813]: > pod="openstack/ovn-controller-ovs-w7s6z" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovsdb-server" containerID="cri-o://fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.722309 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-w7s6z" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovsdb-server" containerID="cri-o://fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" gracePeriod=29 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.767038 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.767414 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c389f482-3000-4f94-924d-158c9d51a2e9" containerName="nova-metadata-log" containerID="cri-o://42f9e9f8d59b24e0b6b81176e439582dcdd7e50952fc2dd61a52acdcf4367e0e" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.767798 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c389f482-3000-4f94-924d-158c9d51a2e9" 
containerName="nova-metadata-metadata" containerID="cri-o://610a526fd13a6efb0d4ac221965cec53053e05e9e010cd31550a6cc426a151df" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.777100 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="a7385c55-b36b-486d-add0-958b8cece7de" containerName="galera" containerID="cri-o://009f0408ab4ffcc4868aed437d6e30405baf42c34d352c5760db13ab83d1b55f" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.777260 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f626-account-create-update-pclhq"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.783436 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.792377 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-lm976"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.801854 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-lm976"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.808909 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.809203 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="153c22ed-7e2e-496e-9a13-9ef0ce79efd8" containerName="nova-api-log" containerID="cri-o://84023d51754b9bdea418d6af28a79e9ce79c0d010f1e0a43ee78d69255fd77b9" gracePeriod=30 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.809256 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="153c22ed-7e2e-496e-9a13-9ef0ce79efd8" containerName="nova-api-api" containerID="cri-o://80732d7919ccb302a5ba0c1ac5c2fcd0229a26a0e569dafeadc7852c9e14fefa" gracePeriod=30 Feb 19 18:52:52 crc 
kubenswrapper[4813]: I0219 18:52:52.823130 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-345c-account-create-update-wtmbd"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.851829 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-d44kt"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.864498 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="c69ff3db-8806-451a-9df0-c6289c327579" containerName="rabbitmq" containerID="cri-o://902b709f26604393c782b1c130285fdc4bd898bf2ec34607dbfa328616279266" gracePeriod=604800 Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.866428 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_25674cd0-03bb-481f-b039-b3b1db4ea1d4/ovsdbserver-sb/0.log" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.866515 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.867481 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.869645 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.889279 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-dxwdc"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.945575 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-d44kt"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.945973 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-85btv_c8734aca-5b08-4847-b485-1d31add9fba1/openstack-network-exporter/0.log" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.946022 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.956626 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-dxwdc"] Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.960723 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b3304ea7-bdca-4b4c-b290-07eacbc6a646/ovsdbserver-nb/0.log" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.960799 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.962381 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8734aca-5b08-4847-b485-1d31add9fba1-combined-ca-bundle\") pod \"c8734aca-5b08-4847-b485-1d31add9fba1\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.962503 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/25674cd0-03bb-481f-b039-b3b1db4ea1d4-ovsdb-rundir\") pod \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.962587 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8734aca-5b08-4847-b485-1d31add9fba1-metrics-certs-tls-certs\") pod \"c8734aca-5b08-4847-b485-1d31add9fba1\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.962661 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-dns-svc\") pod \"279e5b26-1a20-4809-8dfe-ff290d191f38\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.962764 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c8734aca-5b08-4847-b485-1d31add9fba1-ovn-rundir\") pod \"c8734aca-5b08-4847-b485-1d31add9fba1\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.962837 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkczs\" 
(UniqueName: \"kubernetes.io/projected/279e5b26-1a20-4809-8dfe-ff290d191f38-kube-api-access-nkczs\") pod \"279e5b26-1a20-4809-8dfe-ff290d191f38\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.962897 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnhrq\" (UniqueName: \"kubernetes.io/projected/786c324f-42b0-4099-adf7-3926fae87308-kube-api-access-qnhrq\") pod \"786c324f-42b0-4099-adf7-3926fae87308\" (UID: \"786c324f-42b0-4099-adf7-3926fae87308\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.963003 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-ovsdbserver-nb\") pod \"279e5b26-1a20-4809-8dfe-ff290d191f38\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.963085 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25674cd0-03bb-481f-b039-b3b1db4ea1d4-config\") pod \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.963301 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c8734aca-5b08-4847-b485-1d31add9fba1-ovs-rundir\") pod \"c8734aca-5b08-4847-b485-1d31add9fba1\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.963374 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786c324f-42b0-4099-adf7-3926fae87308-combined-ca-bundle\") pod \"786c324f-42b0-4099-adf7-3926fae87308\" (UID: \"786c324f-42b0-4099-adf7-3926fae87308\") " Feb 19 18:52:52 crc 
kubenswrapper[4813]: I0219 18:52:52.966400 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/786c324f-42b0-4099-adf7-3926fae87308-openstack-config\") pod \"786c324f-42b0-4099-adf7-3926fae87308\" (UID: \"786c324f-42b0-4099-adf7-3926fae87308\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.966843 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/786c324f-42b0-4099-adf7-3926fae87308-openstack-config-secret\") pod \"786c324f-42b0-4099-adf7-3926fae87308\" (UID: \"786c324f-42b0-4099-adf7-3926fae87308\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.966874 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-config\") pod \"279e5b26-1a20-4809-8dfe-ff290d191f38\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.966921 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25674cd0-03bb-481f-b039-b3b1db4ea1d4-scripts\") pod \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.966982 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-metrics-certs-tls-certs\") pod \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.967006 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmnsh\" (UniqueName: 
\"kubernetes.io/projected/25674cd0-03bb-481f-b039-b3b1db4ea1d4-kube-api-access-xmnsh\") pod \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.967025 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.967069 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-combined-ca-bundle\") pod \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.967257 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-ovsdbserver-sb-tls-certs\") pod \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\" (UID: \"25674cd0-03bb-481f-b039-b3b1db4ea1d4\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.967321 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dktsq\" (UniqueName: \"kubernetes.io/projected/c8734aca-5b08-4847-b485-1d31add9fba1-kube-api-access-dktsq\") pod \"c8734aca-5b08-4847-b485-1d31add9fba1\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.967365 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-dns-swift-storage-0\") pod \"279e5b26-1a20-4809-8dfe-ff290d191f38\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " Feb 19 18:52:52 crc 
kubenswrapper[4813]: I0219 18:52:52.967402 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8734aca-5b08-4847-b485-1d31add9fba1-config\") pod \"c8734aca-5b08-4847-b485-1d31add9fba1\" (UID: \"c8734aca-5b08-4847-b485-1d31add9fba1\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.967562 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-ovsdbserver-sb\") pod \"279e5b26-1a20-4809-8dfe-ff290d191f38\" (UID: \"279e5b26-1a20-4809-8dfe-ff290d191f38\") " Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.968013 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/786c324f-42b0-4099-adf7-3926fae87308-kube-api-access-qnhrq" (OuterVolumeSpecName: "kube-api-access-qnhrq") pod "786c324f-42b0-4099-adf7-3926fae87308" (UID: "786c324f-42b0-4099-adf7-3926fae87308"). InnerVolumeSpecName "kube-api-access-qnhrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.968729 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnhrq\" (UniqueName: \"kubernetes.io/projected/786c324f-42b0-4099-adf7-3926fae87308-kube-api-access-qnhrq\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.982007 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8734aca-5b08-4847-b485-1d31add9fba1-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "c8734aca-5b08-4847-b485-1d31add9fba1" (UID: "c8734aca-5b08-4847-b485-1d31add9fba1"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:52:52 crc kubenswrapper[4813]: E0219 18:52:52.982356 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 18:52:52 crc kubenswrapper[4813]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: if [ -n "glance" ]; then Feb 19 18:52:52 crc kubenswrapper[4813]: GRANT_DATABASE="glance" Feb 19 18:52:52 crc kubenswrapper[4813]: else Feb 19 18:52:52 crc kubenswrapper[4813]: GRANT_DATABASE="*" Feb 19 18:52:52 crc kubenswrapper[4813]: fi Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: # going for maximum compatibility here: Feb 19 18:52:52 crc kubenswrapper[4813]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 18:52:52 crc kubenswrapper[4813]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 18:52:52 crc kubenswrapper[4813]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 19 18:52:52 crc kubenswrapper[4813]: # support updates Feb 19 18:52:52 crc kubenswrapper[4813]: Feb 19 18:52:52 crc kubenswrapper[4813]: $MYSQL_CMD < logger="UnhandledError" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.989662 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "25674cd0-03bb-481f-b039-b3b1db4ea1d4" (UID: "25674cd0-03bb-481f-b039-b3b1db4ea1d4"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.990011 4813 scope.go:117] "RemoveContainer" containerID="dcb3929d2e5414c8c6857fa34323fe5cbaa913507bb9f15d4791c4c3a55cd8b7" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.990136 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.994543 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8734aca-5b08-4847-b485-1d31add9fba1-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "c8734aca-5b08-4847-b485-1d31add9fba1" (UID: "c8734aca-5b08-4847-b485-1d31add9fba1"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.994554 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25674cd0-03bb-481f-b039-b3b1db4ea1d4-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "25674cd0-03bb-481f-b039-b3b1db4ea1d4" (UID: "25674cd0-03bb-481f-b039-b3b1db4ea1d4"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.995216 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25674cd0-03bb-481f-b039-b3b1db4ea1d4-scripts" (OuterVolumeSpecName: "scripts") pod "25674cd0-03bb-481f-b039-b3b1db4ea1d4" (UID: "25674cd0-03bb-481f-b039-b3b1db4ea1d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.995230 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25674cd0-03bb-481f-b039-b3b1db4ea1d4-config" (OuterVolumeSpecName: "config") pod "25674cd0-03bb-481f-b039-b3b1db4ea1d4" (UID: "25674cd0-03bb-481f-b039-b3b1db4ea1d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.995615 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8734aca-5b08-4847-b485-1d31add9fba1-config" (OuterVolumeSpecName: "config") pod "c8734aca-5b08-4847-b485-1d31add9fba1" (UID: "c8734aca-5b08-4847-b485-1d31add9fba1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:52 crc kubenswrapper[4813]: I0219 18:52:52.997968 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25674cd0-03bb-481f-b039-b3b1db4ea1d4-kube-api-access-xmnsh" (OuterVolumeSpecName: "kube-api-access-xmnsh") pod "25674cd0-03bb-481f-b039-b3b1db4ea1d4" (UID: "25674cd0-03bb-481f-b039-b3b1db4ea1d4"). InnerVolumeSpecName "kube-api-access-xmnsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.000012 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-f626-account-create-update-pclhq" podUID="79168133-5b77-4689-89b3-5f15fa765750" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.001544 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b3304ea7-bdca-4b4c-b290-07eacbc6a646/ovsdbserver-nb/0.log" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.001633 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b3304ea7-bdca-4b4c-b290-07eacbc6a646","Type":"ContainerDied","Data":"dfa2737b381569a8492c0f0e2b897493498f4eb64e965ab3b0460263bc7ae632"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.001679 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.013492 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4d36-account-create-update-tpk8j"] Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.015096 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 18:52:53 crc kubenswrapper[4813]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: if [ -n "cinder" ]; then Feb 19 18:52:53 crc kubenswrapper[4813]: GRANT_DATABASE="cinder" Feb 19 18:52:53 crc kubenswrapper[4813]: else Feb 19 18:52:53 crc kubenswrapper[4813]: GRANT_DATABASE="*" Feb 19 18:52:53 crc kubenswrapper[4813]: fi Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: # going for maximum compatibility here: Feb 19 18:52:53 crc kubenswrapper[4813]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 18:52:53 crc kubenswrapper[4813]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 18:52:53 crc kubenswrapper[4813]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 19 18:52:53 crc kubenswrapper[4813]: # support updates Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: $MYSQL_CMD < logger="UnhandledError" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.016189 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-570b-account-create-update-h7vw7" podUID="e1df3c12-83d7-4cf9-8f00-8cbd54be355f" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.017095 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8734aca-5b08-4847-b485-1d31add9fba1-kube-api-access-dktsq" (OuterVolumeSpecName: "kube-api-access-dktsq") pod "c8734aca-5b08-4847-b485-1d31add9fba1" (UID: "c8734aca-5b08-4847-b485-1d31add9fba1"). InnerVolumeSpecName "kube-api-access-dktsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.017164 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/279e5b26-1a20-4809-8dfe-ff290d191f38-kube-api-access-nkczs" (OuterVolumeSpecName: "kube-api-access-nkczs") pod "279e5b26-1a20-4809-8dfe-ff290d191f38" (UID: "279e5b26-1a20-4809-8dfe-ff290d191f38"). InnerVolumeSpecName "kube-api-access-nkczs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.034001 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.034228 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="4dac18d7-a3cc-46ce-98be-bf34e69398d7" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://52e092f91f830324e6e20c3dd977a8a1a5e42acbc30389990cc49f2b8250eba6" gracePeriod=30 Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.035860 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 18:52:53 crc kubenswrapper[4813]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: if [ -n "neutron" ]; then Feb 19 18:52:53 crc kubenswrapper[4813]: GRANT_DATABASE="neutron" Feb 19 18:52:53 crc kubenswrapper[4813]: else Feb 19 18:52:53 crc kubenswrapper[4813]: GRANT_DATABASE="*" Feb 19 18:52:53 crc kubenswrapper[4813]: fi Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: # going for maximum compatibility here: Feb 19 18:52:53 crc kubenswrapper[4813]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 18:52:53 crc kubenswrapper[4813]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 18:52:53 crc kubenswrapper[4813]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 19 18:52:53 crc kubenswrapper[4813]: # support updates Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: $MYSQL_CMD < logger="UnhandledError" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.035978 4813 scope.go:117] "RemoveContainer" containerID="fdb0030cda574aea48f90a9d627c54cbe19111ce3ca5470320cb6b9f671e12fe" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.037201 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-cde0-account-create-update-b8qsr" podUID="51a5d629-ac3a-4046-be99-01b665dce3ac" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037614 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerID="c1d32779936dc64abfb1211d92a86919ed65d162961f758373172e5d8896b226" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037644 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerID="3463fd0ce80861bf450484b04bd0d12e36c7e89f7201fbf18492f7f3c1961d7e" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037652 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerID="63ed63fc4060e5e77bda2fcedba7a9038be456080560f0972c1162295667a995" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037660 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cb5586f-0789-4095-84e2-32c8c41984c1" 
containerID="ecdd61b7c76fea90d5030cd83e27eede68a2613e82c87d74bf1b52eae40ede4c" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037667 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerID="b247b4f4fb5c13c744d6518cac9721377f2279093c96dcef4c1fb84f402a96fe" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037674 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerID="fe2ab5db7a8282b411ff6c2df1477d67ccefc71efdf7f49af3550ca28e6ca30f" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037680 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerID="5e213ff63ffa13d3db8cd64214ffc25c184324e7bcb422854cb7d5544e995e3a" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037689 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerID="3f73192612ae69b540b4da233b53fc0e0246e8770b1168eaa09e2818d0475b0b" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037696 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerID="8a3df88ea660009faf21ceee32e4bf9966f8ba5d8d06da52cc8eed2e8281585b" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037704 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerID="8e7a00f1985307d87ed43318441bfb1dd483cd2506f6ccb527a5a1f992665d68" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037711 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerID="c810d20c3f622957076f0b197785575897ec5b3aab666c3929ba959f136bbac7" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037717 4813 generic.go:334] "Generic (PLEG): container 
finished" podID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerID="173ce911f070f154fad4553f907b72434be2bbf4d9cc33dc8c7d8f6176342729" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037724 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerID="733548779f14379f65bf0cb205274b2eac4a5ab44281cf9b72a9a0a27ccb488b" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037730 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerID="bbf92d00811dd2fb35c3895f4c863c12eb1772bf8d221d1a27c08caefbd5bf0f" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037681 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"c1d32779936dc64abfb1211d92a86919ed65d162961f758373172e5d8896b226"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037819 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"3463fd0ce80861bf450484b04bd0d12e36c7e89f7201fbf18492f7f3c1961d7e"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037833 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"63ed63fc4060e5e77bda2fcedba7a9038be456080560f0972c1162295667a995"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037844 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"ecdd61b7c76fea90d5030cd83e27eede68a2613e82c87d74bf1b52eae40ede4c"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037854 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"b247b4f4fb5c13c744d6518cac9721377f2279093c96dcef4c1fb84f402a96fe"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037864 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"fe2ab5db7a8282b411ff6c2df1477d67ccefc71efdf7f49af3550ca28e6ca30f"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037874 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"5e213ff63ffa13d3db8cd64214ffc25c184324e7bcb422854cb7d5544e995e3a"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037883 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"3f73192612ae69b540b4da233b53fc0e0246e8770b1168eaa09e2818d0475b0b"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037892 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"8a3df88ea660009faf21ceee32e4bf9966f8ba5d8d06da52cc8eed2e8281585b"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037900 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"8e7a00f1985307d87ed43318441bfb1dd483cd2506f6ccb527a5a1f992665d68"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037909 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"c810d20c3f622957076f0b197785575897ec5b3aab666c3929ba959f136bbac7"} Feb 19 18:52:53 crc 
kubenswrapper[4813]: I0219 18:52:53.037917 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"173ce911f070f154fad4553f907b72434be2bbf4d9cc33dc8c7d8f6176342729"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037925 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"733548779f14379f65bf0cb205274b2eac4a5ab44281cf9b72a9a0a27ccb488b"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.037935 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"bbf92d00811dd2fb35c3895f4c863c12eb1772bf8d221d1a27c08caefbd5bf0f"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.043199 4813 generic.go:334] "Generic (PLEG): container finished" podID="2ea3ff24-c40b-432b-a2f8-522284d17ff0" containerID="c739347572fde9810eb2e984054e31d8f98d420e8deb6f11d5a7c741a7bd9e0a" exitCode=143 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.043362 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ea3ff24-c40b-432b-a2f8-522284d17ff0","Type":"ContainerDied","Data":"c739347572fde9810eb2e984054e31d8f98d420e8deb6f11d5a7c741a7bd9e0a"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.057419 4813 generic.go:334] "Generic (PLEG): container finished" podID="279e5b26-1a20-4809-8dfe-ff290d191f38" containerID="57b25170e965640c68550b7bface39f788f95c0a649a1c484dd11cc853485b0e" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.057512 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7677694455-nj2vp" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.057522 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7677694455-nj2vp" event={"ID":"279e5b26-1a20-4809-8dfe-ff290d191f38","Type":"ContainerDied","Data":"57b25170e965640c68550b7bface39f788f95c0a649a1c484dd11cc853485b0e"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.057549 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7677694455-nj2vp" event={"ID":"279e5b26-1a20-4809-8dfe-ff290d191f38","Type":"ContainerDied","Data":"009455cd49df735df98dc2d03dafff77a9308fd2412de086b44ffb008b58643e"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.064547 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8734aca-5b08-4847-b485-1d31add9fba1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8734aca-5b08-4847-b485-1d31add9fba1" (UID: "c8734aca-5b08-4847-b485-1d31add9fba1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.069938 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3304ea7-bdca-4b4c-b290-07eacbc6a646-config\") pod \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.070922 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-metrics-certs-tls-certs\") pod \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.070867 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3304ea7-bdca-4b4c-b290-07eacbc6a646-config" (OuterVolumeSpecName: "config") pod "b3304ea7-bdca-4b4c-b290-07eacbc6a646" (UID: "b3304ea7-bdca-4b4c-b290-07eacbc6a646"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.071024 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttzcq\" (UniqueName: \"kubernetes.io/projected/b3304ea7-bdca-4b4c-b290-07eacbc6a646-kube-api-access-ttzcq\") pod \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.071084 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-combined-ca-bundle\") pod \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.071839 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-ovsdbserver-nb-tls-certs\") pod \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.071868 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.073273 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3304ea7-bdca-4b4c-b290-07eacbc6a646-ovsdb-rundir\") pod \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.077044 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/b3304ea7-bdca-4b4c-b290-07eacbc6a646-scripts\") pod \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\" (UID: \"b3304ea7-bdca-4b4c-b290-07eacbc6a646\") " Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.078079 4813 generic.go:334] "Generic (PLEG): container finished" podID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.078166 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w7s6z" event={"ID":"096a88db-91ee-4ac3-b5ae-ba4bca838436","Type":"ContainerDied","Data":"fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.081828 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3304ea7-bdca-4b4c-b290-07eacbc6a646-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "b3304ea7-bdca-4b4c-b290-07eacbc6a646" (UID: "b3304ea7-bdca-4b4c-b290-07eacbc6a646"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.081885 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3304ea7-bdca-4b4c-b290-07eacbc6a646-scripts" (OuterVolumeSpecName: "scripts") pod "b3304ea7-bdca-4b4c-b290-07eacbc6a646" (UID: "b3304ea7-bdca-4b4c-b290-07eacbc6a646"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.083441 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3304ea7-bdca-4b4c-b290-07eacbc6a646-kube-api-access-ttzcq" (OuterVolumeSpecName: "kube-api-access-ttzcq") pod "b3304ea7-bdca-4b4c-b290-07eacbc6a646" (UID: "b3304ea7-bdca-4b4c-b290-07eacbc6a646"). 
InnerVolumeSpecName "kube-api-access-ttzcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.084736 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8734aca-5b08-4847-b485-1d31add9fba1-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.084760 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3304ea7-bdca-4b4c-b290-07eacbc6a646-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.084771 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8734aca-5b08-4847-b485-1d31add9fba1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.084781 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/25674cd0-03bb-481f-b039-b3b1db4ea1d4-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.084789 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3304ea7-bdca-4b4c-b290-07eacbc6a646-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.084798 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c8734aca-5b08-4847-b485-1d31add9fba1-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.084806 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkczs\" (UniqueName: \"kubernetes.io/projected/279e5b26-1a20-4809-8dfe-ff290d191f38-kube-api-access-nkczs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 
18:52:53.084815 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttzcq\" (UniqueName: \"kubernetes.io/projected/b3304ea7-bdca-4b4c-b290-07eacbc6a646-kube-api-access-ttzcq\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.084824 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25674cd0-03bb-481f-b039-b3b1db4ea1d4-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.084832 4813 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c8734aca-5b08-4847-b485-1d31add9fba1-ovs-rundir\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.084840 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25674cd0-03bb-481f-b039-b3b1db4ea1d4-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.084849 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmnsh\" (UniqueName: \"kubernetes.io/projected/25674cd0-03bb-481f-b039-b3b1db4ea1d4-kube-api-access-xmnsh\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.085294 4813 scope.go:117] "RemoveContainer" containerID="d7b8203f3013dcbfda0d8e5f52cf6970deeae02a6a776d49e0ee9660ef66cb59" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.088156 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-rnnlc"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.091634 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.091676 4813 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-dktsq\" (UniqueName: \"kubernetes.io/projected/c8734aca-5b08-4847-b485-1d31add9fba1-kube-api-access-dktsq\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.091692 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3304ea7-bdca-4b4c-b290-07eacbc6a646-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.094734 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2kbrh"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.096112 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "b3304ea7-bdca-4b4c-b290-07eacbc6a646" (UID: "b3304ea7-bdca-4b4c-b290-07eacbc6a646"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.099496 4813 generic.go:334] "Generic (PLEG): container finished" podID="c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" containerID="912bf4bcdf44e6c87d2ddd9d02005a5efe71719e483128b7f0504e540c94af8f" exitCode=143 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.099565 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394","Type":"ContainerDied","Data":"912bf4bcdf44e6c87d2ddd9d02005a5efe71719e483128b7f0504e540c94af8f"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.100271 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/786c324f-42b0-4099-adf7-3926fae87308-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "786c324f-42b0-4099-adf7-3926fae87308" (UID: "786c324f-42b0-4099-adf7-3926fae87308"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.103180 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-rnnlc"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.103278 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-85btv_c8734aca-5b08-4847-b485-1d31add9fba1/openstack-network-exporter/0.log" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.103368 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-85btv" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.103410 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-85btv" event={"ID":"c8734aca-5b08-4847-b485-1d31add9fba1","Type":"ContainerDied","Data":"bdb717649bdc194b1d9468b82ca9065a9baa5f497b97b5425c0cc6433105ac8f"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.114628 4813 generic.go:334] "Generic (PLEG): container finished" podID="6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" containerID="798087b6274eb1a02113a606fd85310be018115d9f4e8a89578ca60d75da110c" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.114704 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79884964f7-nvxp2" event={"ID":"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd","Type":"ContainerDied","Data":"798087b6274eb1a02113a606fd85310be018115d9f4e8a89578ca60d75da110c"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.119811 4813 generic.go:334] "Generic (PLEG): container finished" podID="6f4b651a-00cc-4ca7-b49e-713eed4968b9" containerID="bf8c06468da19346eec65138cd7874af7a212840ee1fcdf4cb2cd44182970cc3" exitCode=143 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.119877 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d474bcd44-n9tsd" 
event={"ID":"6f4b651a-00cc-4ca7-b49e-713eed4968b9","Type":"ContainerDied","Data":"bf8c06468da19346eec65138cd7874af7a212840ee1fcdf4cb2cd44182970cc3"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.124473 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-c594-account-create-update-xlflc"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.124760 4813 generic.go:334] "Generic (PLEG): container finished" podID="04885075-4d84-445b-b7c8-b6afaeb71600" containerID="a75fc405c7da94e09b4fc05ed5bb23e7a20d02a1e8bcaeab522029453f4a2393" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.124794 4813 generic.go:334] "Generic (PLEG): container finished" podID="04885075-4d84-445b-b7c8-b6afaeb71600" containerID="13cc33bfb33bb924ac9f9d0035a948ba6c69e8d3c3ca76d6f358973425452794" exitCode=0 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.124833 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04885075-4d84-445b-b7c8-b6afaeb71600","Type":"ContainerDied","Data":"a75fc405c7da94e09b4fc05ed5bb23e7a20d02a1e8bcaeab522029453f4a2393"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.124864 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04885075-4d84-445b-b7c8-b6afaeb71600","Type":"ContainerDied","Data":"13cc33bfb33bb924ac9f9d0035a948ba6c69e8d3c3ca76d6f358973425452794"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.129189 4813 generic.go:334] "Generic (PLEG): container finished" podID="81790f67-278c-4b6a-82e5-ec5bb521c6ac" containerID="479709b4af77750e6b92b1b1ddf45a2dfef9f8a888e9bcb59242645580afb4a9" exitCode=143 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.129248 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" 
event={"ID":"81790f67-278c-4b6a-82e5-ec5bb521c6ac","Type":"ContainerDied","Data":"479709b4af77750e6b92b1b1ddf45a2dfef9f8a888e9bcb59242645580afb4a9"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.133349 4813 generic.go:334] "Generic (PLEG): container finished" podID="0055f8bf-7085-42c5-86d4-9cee7033a7d1" containerID="252479d2c0b49a05c96f151d10c63bcd71853dfc94f50270cae0aa5023c1cc5c" exitCode=1 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.133402 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2kbrh" event={"ID":"0055f8bf-7085-42c5-86d4-9cee7033a7d1","Type":"ContainerDied","Data":"252479d2c0b49a05c96f151d10c63bcd71853dfc94f50270cae0aa5023c1cc5c"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.133848 4813 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-2kbrh" secret="" err="secret \"galera-openstack-cell1-dockercfg-ljklq\" not found" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.133885 4813 scope.go:117] "RemoveContainer" containerID="252479d2c0b49a05c96f151d10c63bcd71853dfc94f50270cae0aa5023c1cc5c" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.138631 4813 generic.go:334] "Generic (PLEG): container finished" podID="6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" containerID="c497a249844ccadf079da17947184a1a812d81325a2ff39036321f46b9c5c309" exitCode=143 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.138686 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c88676b6d-zlhlk" event={"ID":"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30","Type":"ContainerDied","Data":"c497a249844ccadf079da17947184a1a812d81325a2ff39036321f46b9c5c309"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.140654 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 19 18:52:53 
crc kubenswrapper[4813]: I0219 18:52:53.141123 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_25674cd0-03bb-481f-b039-b3b1db4ea1d4/ovsdbserver-sb/0.log" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.141252 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.141420 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"25674cd0-03bb-481f-b039-b3b1db4ea1d4","Type":"ContainerDied","Data":"6a39da59ed0e23a81845636fb58a1ca21c1684421cacbd9b7f1c2dd2547c7f37"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.141913 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.142135 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b69a5561-31f2-4e4f-96d3-d0db19a6a51f" containerName="nova-scheduler-scheduler" containerID="cri-o://4e6ba2ed7bbec736dcf293ba52811dd6e91ad2796a685f3c0760ed0b7c04cd3a" gracePeriod=30 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.148849 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-spl4z"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.159834 4813 scope.go:117] "RemoveContainer" containerID="57b25170e965640c68550b7bface39f788f95c0a649a1c484dd11cc853485b0e" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.160056 4813 generic.go:334] "Generic (PLEG): container finished" podID="5aeb8bcb-4373-48d4-9ac6-e6472189e440" containerID="52e141647daf76f468f278365830d8a4de021205dfe34eb192a73cab648f4e9a" exitCode=143 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.160839 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" 
event={"ID":"5aeb8bcb-4373-48d4-9ac6-e6472189e440","Type":"ContainerDied","Data":"52e141647daf76f468f278365830d8a4de021205dfe34eb192a73cab648f4e9a"} Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.161758 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3304ea7-bdca-4b4c-b290-07eacbc6a646" (UID: "b3304ea7-bdca-4b4c-b290-07eacbc6a646"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.166360 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 18:52:53 crc kubenswrapper[4813]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: if [ -n "placement" ]; then Feb 19 18:52:53 crc kubenswrapper[4813]: GRANT_DATABASE="placement" Feb 19 18:52:53 crc kubenswrapper[4813]: else Feb 19 18:52:53 crc kubenswrapper[4813]: GRANT_DATABASE="*" Feb 19 18:52:53 crc kubenswrapper[4813]: fi Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: # going for maximum compatibility here: Feb 19 18:52:53 crc kubenswrapper[4813]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 18:52:53 crc kubenswrapper[4813]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 18:52:53 crc kubenswrapper[4813]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 19 18:52:53 crc kubenswrapper[4813]: # support updates Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: $MYSQL_CMD < logger="UnhandledError" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.169248 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-d489-account-create-update-7mn8q" podUID="2ed035dd-719a-45e9-825f-4ce51dbb9866" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.176006 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "279e5b26-1a20-4809-8dfe-ff290d191f38" (UID: "279e5b26-1a20-4809-8dfe-ff290d191f38"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.194196 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.194221 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.194232 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/786c324f-42b0-4099-adf7-3926fae87308-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.194241 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.194260 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.194342 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.194412 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data podName:c69ff3db-8806-451a-9df0-c6289c327579 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:57.194394388 +0000 UTC m=+1396.419834929 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data") pod "rabbitmq-server-0" (UID: "c69ff3db-8806-451a-9df0-c6289c327579") : configmap "rabbitmq-config-data" not found Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.194746 4813 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.195222 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0055f8bf-7085-42c5-86d4-9cee7033a7d1-operator-scripts podName:0055f8bf-7085-42c5-86d4-9cee7033a7d1 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:53.695173722 +0000 UTC m=+1392.920614263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/0055f8bf-7085-42c5-86d4-9cee7033a7d1-operator-scripts") pod "root-account-create-update-2kbrh" (UID: "0055f8bf-7085-42c5-86d4-9cee7033a7d1") : configmap "openstack-cell1-scripts" not found Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.196258 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/786c324f-42b0-4099-adf7-3926fae87308-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "786c324f-42b0-4099-adf7-3926fae87308" (UID: "786c324f-42b0-4099-adf7-3926fae87308"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.196360 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25674cd0-03bb-481f-b039-b3b1db4ea1d4" (UID: "25674cd0-03bb-481f-b039-b3b1db4ea1d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.200034 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-spl4z"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.208365 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.208584 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="973faa07-fbab-4a50-ac4a-c62302e9f9c1" containerName="nova-cell1-conductor-conductor" containerID="cri-o://120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530" gracePeriod=30 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.221007 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fvfzp"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.221093 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.221311 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="3422b8bd-2817-4e8f-8a5a-731c773b73a4" containerName="nova-cell0-conductor-conductor" containerID="cri-o://e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872" gracePeriod=30 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.227411 4813 scope.go:117] "RemoveContainer" containerID="bc5da64f47f86536bb11a20a54f840d076fee7c08731ee597fbd33d1e0a94d46" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.235767 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fvfzp"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.235906 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.256891 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "279e5b26-1a20-4809-8dfe-ff290d191f38" (UID: "279e5b26-1a20-4809-8dfe-ff290d191f38"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.263684 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.274060 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f626-account-create-update-pclhq"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.318941 4813 scope.go:117] "RemoveContainer" containerID="57b25170e965640c68550b7bface39f788f95c0a649a1c484dd11cc853485b0e" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.319631 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b25170e965640c68550b7bface39f788f95c0a649a1c484dd11cc853485b0e\": container with ID starting with 57b25170e965640c68550b7bface39f788f95c0a649a1c484dd11cc853485b0e not found: ID does not exist" containerID="57b25170e965640c68550b7bface39f788f95c0a649a1c484dd11cc853485b0e" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.319718 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b25170e965640c68550b7bface39f788f95c0a649a1c484dd11cc853485b0e"} err="failed to get container status \"57b25170e965640c68550b7bface39f788f95c0a649a1c484dd11cc853485b0e\": rpc error: code = NotFound desc = could not find container \"57b25170e965640c68550b7bface39f788f95c0a649a1c484dd11cc853485b0e\": container with ID starting with 
57b25170e965640c68550b7bface39f788f95c0a649a1c484dd11cc853485b0e not found: ID does not exist" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.319749 4813 scope.go:117] "RemoveContainer" containerID="bc5da64f47f86536bb11a20a54f840d076fee7c08731ee597fbd33d1e0a94d46" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.324516 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc5da64f47f86536bb11a20a54f840d076fee7c08731ee597fbd33d1e0a94d46\": container with ID starting with bc5da64f47f86536bb11a20a54f840d076fee7c08731ee597fbd33d1e0a94d46 not found: ID does not exist" containerID="bc5da64f47f86536bb11a20a54f840d076fee7c08731ee597fbd33d1e0a94d46" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.324561 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc5da64f47f86536bb11a20a54f840d076fee7c08731ee597fbd33d1e0a94d46"} err="failed to get container status \"bc5da64f47f86536bb11a20a54f840d076fee7c08731ee597fbd33d1e0a94d46\": rpc error: code = NotFound desc = could not find container \"bc5da64f47f86536bb11a20a54f840d076fee7c08731ee597fbd33d1e0a94d46\": container with ID starting with bc5da64f47f86536bb11a20a54f840d076fee7c08731ee597fbd33d1e0a94d46 not found: ID does not exist" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.324598 4813 scope.go:117] "RemoveContainer" containerID="3c2c71c20556e819af235dccf56fcb9253ca0ffc4dcbeab4a4b3cac3cd471eda" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.327244 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.327370 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/786c324f-42b0-4099-adf7-3926fae87308-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.327388 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.327397 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.327573 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.327815 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data podName:db22a584-f05a-41ba-ad23-387b4100a9e1 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:55.327800355 +0000 UTC m=+1394.553240896 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data") pod "rabbitmq-cell1-server-0" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1") : configmap "rabbitmq-cell1-config-data" not found Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.336520 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "279e5b26-1a20-4809-8dfe-ff290d191f38" (UID: "279e5b26-1a20-4809-8dfe-ff290d191f38"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.357881 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-570b-account-create-update-h7vw7"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.370289 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "25674cd0-03bb-481f-b039-b3b1db4ea1d4" (UID: "25674cd0-03bb-481f-b039-b3b1db4ea1d4"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.370342 4813 scope.go:117] "RemoveContainer" containerID="0d9e225c56d6e81840554a63919cdbe4f5142d75bbdc351a40de2802bf43b99e" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.370621 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="db22a584-f05a-41ba-ad23-387b4100a9e1" containerName="rabbitmq" containerID="cri-o://e519f5b9e793340baf7974d5e67220195aa91581ee6dc67bcc7ab9451042be70" gracePeriod=604800 Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.370921 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cde0-account-create-update-b8qsr"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.380901 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/786c324f-42b0-4099-adf7-3926fae87308-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "786c324f-42b0-4099-adf7-3926fae87308" (UID: "786c324f-42b0-4099-adf7-3926fae87308"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.385109 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "25674cd0-03bb-481f-b039-b3b1db4ea1d4" (UID: "25674cd0-03bb-481f-b039-b3b1db4ea1d4"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.409485 4813 scope.go:117] "RemoveContainer" containerID="8502c4b79b14dd3ab247b7d4c9c00c1dcc9265447761cd275f6e5165983c1b3d" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.429879 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.429906 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/786c324f-42b0-4099-adf7-3926fae87308-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.429917 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.429925 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/25674cd0-03bb-481f-b039-b3b1db4ea1d4-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.476251 4813 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/swift-proxy-5bfc47d69f-qrwdk" secret="" err="secret \"swift-swift-dockercfg-bhhf2\" not found" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.498756 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "279e5b26-1a20-4809-8dfe-ff290d191f38" (UID: "279e5b26-1a20-4809-8dfe-ff290d191f38"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.537052 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.583467 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195ab5e4-12b0-4c82-bc80-b109afd5898f" path="/var/lib/kubelet/pods/195ab5e4-12b0-4c82-bc80-b109afd5898f/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.584434 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24e9d07f-0344-41f5-817b-c03c6516ae85" path="/var/lib/kubelet/pods/24e9d07f-0344-41f5-817b-c03c6516ae85/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.592571 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8734aca-5b08-4847-b485-1d31add9fba1-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c8734aca-5b08-4847-b485-1d31add9fba1" (UID: "c8734aca-5b08-4847-b485-1d31add9fba1"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.596703 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cc92073-5619-4686-b083-0a824d82934f" path="/var/lib/kubelet/pods/2cc92073-5619-4686-b083-0a824d82934f/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.597533 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="319f74d2-fc89-40c3-98a6-43f9c3ec542e" path="/var/lib/kubelet/pods/319f74d2-fc89-40c3-98a6-43f9c3ec542e/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.598474 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-config" (OuterVolumeSpecName: "config") pod "279e5b26-1a20-4809-8dfe-ff290d191f38" (UID: "279e5b26-1a20-4809-8dfe-ff290d191f38"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.600383 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "b3304ea7-bdca-4b4c-b290-07eacbc6a646" (UID: "b3304ea7-bdca-4b4c-b290-07eacbc6a646"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.613064 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b" path="/var/lib/kubelet/pods/33b0ba56-83a8-4c5c-a8f4-91c0e7249a7b/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.619281 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="361c9f8a-61c4-4781-9394-933bc962a0b4" path="/var/lib/kubelet/pods/361c9f8a-61c4-4781-9394-933bc962a0b4/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.640925 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/279e5b26-1a20-4809-8dfe-ff290d191f38-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.642926 4813 projected.go:263] Couldn't get secret openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.649436 4813 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.649536 4813 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.649606 4813 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-5bfc47d69f-qrwdk: [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.643014 4813 secret.go:188] Couldn't get secret openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.650022 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6da42c-1604-467c-b9ef-dde47711b95a" 
path="/var/lib/kubelet/pods/3d6da42c-1604-467c-b9ef-dde47711b95a/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.651018 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.651081 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-config-data podName:c239fd72-88d6-4394-bf24-be4fb0b3e579 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:54.151058091 +0000 UTC m=+1393.376498622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-config-data") pod "swift-proxy-5bfc47d69f-qrwdk" (UID: "c239fd72-88d6-4394-bf24-be4fb0b3e579") : secret "swift-proxy-config-data" not found Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.652589 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "b3304ea7-bdca-4b4c-b290-07eacbc6a646" (UID: "b3304ea7-bdca-4b4c-b290-07eacbc6a646"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.655858 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-etc-swift podName:c239fd72-88d6-4394-bf24-be4fb0b3e579 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:54.155826178 +0000 UTC m=+1393.381266719 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-etc-swift") pod "swift-proxy-5bfc47d69f-qrwdk" (UID: "c239fd72-88d6-4394-bf24-be4fb0b3e579") : [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.656546 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="487f7774-103e-44b9-a773-e34f77657d2b" path="/var/lib/kubelet/pods/487f7774-103e-44b9-a773-e34f77657d2b/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.656893 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8734aca-5b08-4847-b485-1d31add9fba1-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.657217 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f306bed-c42d-4853-b2ce-929c6356929d" path="/var/lib/kubelet/pods/6f306bed-c42d-4853-b2ce-929c6356929d/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.657879 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fa56113-499a-490c-bba9-8676d4312e4e" path="/var/lib/kubelet/pods/6fa56113-499a-490c-bba9-8676d4312e4e/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.659020 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="786c324f-42b0-4099-adf7-3926fae87308" path="/var/lib/kubelet/pods/786c324f-42b0-4099-adf7-3926fae87308/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.666997 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f5d86d9-0531-43ec-ad26-29f99daf42cb" path="/var/lib/kubelet/pods/7f5d86d9-0531-43ec-ad26-29f99daf42cb/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.667629 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="95494006-9962-4c6c-b3f6-0637d97734a5" path="/var/lib/kubelet/pods/95494006-9962-4c6c-b3f6-0637d97734a5/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.668228 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab" path="/var/lib/kubelet/pods/b8e68eb2-b8d9-4d8a-85b0-25fe7038a6ab/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.669335 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b98f2a59-47e4-46aa-acbe-74250ed631ca" path="/var/lib/kubelet/pods/b98f2a59-47e4-46aa-acbe-74250ed631ca/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.670066 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0aa6e15-2818-4ba2-9cfc-001324222fa7" path="/var/lib/kubelet/pods/c0aa6e15-2818-4ba2-9cfc-001324222fa7/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.671504 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2a6f952-c093-4d22-9452-3043f5f26472" path="/var/lib/kubelet/pods/d2a6f952-c093-4d22-9452-3043f5f26472/volumes" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.674625 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-5bfc47d69f-qrwdk"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.675038 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-345c-account-create-update-wtmbd"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.675103 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-c594-account-create-update-xlflc"] Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.675156 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4d36-account-create-update-tpk8j"] Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.736557 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 18:52:53 crc kubenswrapper[4813]: container 
&Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: if [ -n "nova_cell1" ]; then Feb 19 18:52:53 crc kubenswrapper[4813]: GRANT_DATABASE="nova_cell1" Feb 19 18:52:53 crc kubenswrapper[4813]: else Feb 19 18:52:53 crc kubenswrapper[4813]: GRANT_DATABASE="*" Feb 19 18:52:53 crc kubenswrapper[4813]: fi Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: # going for maximum compatibility here: Feb 19 18:52:53 crc kubenswrapper[4813]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 18:52:53 crc kubenswrapper[4813]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 18:52:53 crc kubenswrapper[4813]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 19 18:52:53 crc kubenswrapper[4813]: # support updates Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: $MYSQL_CMD < logger="UnhandledError" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.737178 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 18:52:53 crc kubenswrapper[4813]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: if [ -n "nova_cell0" ]; then Feb 19 18:52:53 crc kubenswrapper[4813]: GRANT_DATABASE="nova_cell0" Feb 19 18:52:53 crc kubenswrapper[4813]: else Feb 19 18:52:53 crc kubenswrapper[4813]: GRANT_DATABASE="*" Feb 19 18:52:53 crc kubenswrapper[4813]: fi Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: # going for maximum compatibility here: Feb 19 18:52:53 crc kubenswrapper[4813]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 18:52:53 crc kubenswrapper[4813]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 18:52:53 crc kubenswrapper[4813]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 19 18:52:53 crc kubenswrapper[4813]: # support updates Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: $MYSQL_CMD < logger="UnhandledError" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.738694 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 18:52:53 crc kubenswrapper[4813]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed,Command:[/bin/sh -c #!/bin/bash Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: if [ -n "nova_api" ]; then Feb 19 18:52:53 crc kubenswrapper[4813]: GRANT_DATABASE="nova_api" Feb 19 18:52:53 crc kubenswrapper[4813]: else Feb 19 18:52:53 crc kubenswrapper[4813]: GRANT_DATABASE="*" Feb 19 18:52:53 crc kubenswrapper[4813]: fi Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: # going for maximum compatibility here: Feb 19 18:52:53 crc kubenswrapper[4813]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 19 18:52:53 crc kubenswrapper[4813]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 19 18:52:53 crc kubenswrapper[4813]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 19 18:52:53 crc kubenswrapper[4813]: # support updates Feb 19 18:52:53 crc kubenswrapper[4813]: Feb 19 18:52:53 crc kubenswrapper[4813]: $MYSQL_CMD < logger="UnhandledError" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.738744 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-c594-account-create-update-xlflc" podUID="c3275e7f-f99c-431a-a777-ea9a62895faa" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.738781 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-4d36-account-create-update-tpk8j" podUID="f10d113b-edb7-4a73-ada0-659e82a43e84" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.745381 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-345c-account-create-update-wtmbd" podUID="acbc878e-b3ae-49db-8fca-1300efb20564" Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.759638 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3304ea7-bdca-4b4c-b290-07eacbc6a646-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.759719 4813 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 19 18:52:53 crc kubenswrapper[4813]: E0219 18:52:53.759764 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/0055f8bf-7085-42c5-86d4-9cee7033a7d1-operator-scripts podName:0055f8bf-7085-42c5-86d4-9cee7033a7d1 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:54.759751605 +0000 UTC m=+1393.985192146 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/0055f8bf-7085-42c5-86d4-9cee7033a7d1-operator-scripts") pod "root-account-create-update-2kbrh" (UID: "0055f8bf-7085-42c5-86d4-9cee7033a7d1") : configmap "openstack-cell1-scripts" not found Feb 19 18:52:53 crc kubenswrapper[4813]: I0219 18:52:53.896642 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.004735 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-85btv"] Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.034576 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-85btv"] Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.069262 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.071377 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-combined-ca-bundle\") pod \"04885075-4d84-445b-b7c8-b6afaeb71600\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.071437 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04885075-4d84-445b-b7c8-b6afaeb71600-etc-machine-id\") pod \"04885075-4d84-445b-b7c8-b6afaeb71600\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.071557 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-config-data\") pod \"04885075-4d84-445b-b7c8-b6afaeb71600\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.071599 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-config-data-custom\") pod \"04885075-4d84-445b-b7c8-b6afaeb71600\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.071629 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-scripts\") pod \"04885075-4d84-445b-b7c8-b6afaeb71600\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.071669 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxwt9\" (UniqueName: \"kubernetes.io/projected/04885075-4d84-445b-b7c8-b6afaeb71600-kube-api-access-vxwt9\") pod \"04885075-4d84-445b-b7c8-b6afaeb71600\" (UID: \"04885075-4d84-445b-b7c8-b6afaeb71600\") " Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.087452 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04885075-4d84-445b-b7c8-b6afaeb71600-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "04885075-4d84-445b-b7c8-b6afaeb71600" (UID: "04885075-4d84-445b-b7c8-b6afaeb71600"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.089357 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04885075-4d84-445b-b7c8-b6afaeb71600-kube-api-access-vxwt9" (OuterVolumeSpecName: "kube-api-access-vxwt9") pod "04885075-4d84-445b-b7c8-b6afaeb71600" (UID: "04885075-4d84-445b-b7c8-b6afaeb71600"). InnerVolumeSpecName "kube-api-access-vxwt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.119328 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "04885075-4d84-445b-b7c8-b6afaeb71600" (UID: "04885075-4d84-445b-b7c8-b6afaeb71600"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.119617 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-scripts" (OuterVolumeSpecName: "scripts") pod "04885075-4d84-445b-b7c8-b6afaeb71600" (UID: "04885075-4d84-445b-b7c8-b6afaeb71600"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.123178 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.175627 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.175653 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.175662 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxwt9\" (UniqueName: \"kubernetes.io/projected/04885075-4d84-445b-b7c8-b6afaeb71600-kube-api-access-vxwt9\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.175670 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04885075-4d84-445b-b7c8-b6afaeb71600-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.175743 4813 secret.go:188] Couldn't get secret openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.175784 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-config-data podName:c239fd72-88d6-4394-bf24-be4fb0b3e579 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:55.175770684 +0000 UTC m=+1394.401211225 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-config-data") pod "swift-proxy-5bfc47d69f-qrwdk" (UID: "c239fd72-88d6-4394-bf24-be4fb0b3e579") : secret "swift-proxy-config-data" not found Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.176004 4813 projected.go:263] Couldn't get secret openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.176015 4813 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.176025 4813 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.176035 4813 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-5bfc47d69f-qrwdk: [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.176057 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-etc-swift podName:c239fd72-88d6-4394-bf24-be4fb0b3e579 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:55.176050392 +0000 UTC m=+1394.401490923 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-etc-swift") pod "swift-proxy-5bfc47d69f-qrwdk" (UID: "c239fd72-88d6-4394-bf24-be4fb0b3e579") : [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.192203 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7677694455-nj2vp"] Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.199801 4813 generic.go:334] "Generic (PLEG): container finished" podID="0055f8bf-7085-42c5-86d4-9cee7033a7d1" containerID="8245f3047ff057018667950e19c6b9f5ff80069a6b7c3615c7b2cf96bcd2f2ae" exitCode=1 Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.199851 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2kbrh" event={"ID":"0055f8bf-7085-42c5-86d4-9cee7033a7d1","Type":"ContainerDied","Data":"8245f3047ff057018667950e19c6b9f5ff80069a6b7c3615c7b2cf96bcd2f2ae"} Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.199877 4813 scope.go:117] "RemoveContainer" containerID="252479d2c0b49a05c96f151d10c63bcd71853dfc94f50270cae0aa5023c1cc5c" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.203037 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04885075-4d84-445b-b7c8-b6afaeb71600" (UID: "04885075-4d84-445b-b7c8-b6afaeb71600"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.208218 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-570b-account-create-update-h7vw7" event={"ID":"e1df3c12-83d7-4cf9-8f00-8cbd54be355f","Type":"ContainerStarted","Data":"d98535ab9e93ecd71847a9c38758a825824a1437602664cf194073ac94f7cc01"} Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.215258 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-345c-account-create-update-wtmbd" event={"ID":"acbc878e-b3ae-49db-8fca-1300efb20564","Type":"ContainerStarted","Data":"578ef21254199d92980dbe566c9c064d7ccb1d84b1a0073aab5cae187a169ab8"} Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.227722 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c594-account-create-update-xlflc" event={"ID":"c3275e7f-f99c-431a-a777-ea9a62895faa","Type":"ContainerStarted","Data":"9f10dce0112b6d7935171c9dcb4ec77377fa495f4671e3622adcded492b9636f"} Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.238755 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7677694455-nj2vp"] Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.263303 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4d36-account-create-update-tpk8j" event={"ID":"f10d113b-edb7-4a73-ada0-659e82a43e84","Type":"ContainerStarted","Data":"e6422d20e30e7640f4d35e0699fa4c4c5bddd68a9027f77b0179dec3d005ec35"} Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.268024 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-config-data" (OuterVolumeSpecName: "config-data") pod "04885075-4d84-445b-b7c8-b6afaeb71600" (UID: "04885075-4d84-445b-b7c8-b6afaeb71600"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.275121 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.276692 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.276711 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04885075-4d84-445b-b7c8-b6afaeb71600-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.292740 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cde0-account-create-update-b8qsr" event={"ID":"51a5d629-ac3a-4046-be99-01b665dce3ac","Type":"ContainerStarted","Data":"52a812711a85227bba1f14019d8dcee7289de2858808238f7d590ed1043d8f3e"} Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.303818 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.304068 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a7385c55-b36b-486d-add0-958b8cece7de","Type":"ContainerDied","Data":"009f0408ab4ffcc4868aed437d6e30405baf42c34d352c5760db13ab83d1b55f"} Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.304023 4813 generic.go:334] "Generic (PLEG): container finished" podID="a7385c55-b36b-486d-add0-958b8cece7de" containerID="009f0408ab4ffcc4868aed437d6e30405baf42c34d352c5760db13ab83d1b55f" exitCode=0 Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.315023 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"c389f482-3000-4f94-924d-158c9d51a2e9","Type":"ContainerDied","Data":"42f9e9f8d59b24e0b6b81176e439582dcdd7e50952fc2dd61a52acdcf4367e0e"} Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.314552 4813 generic.go:334] "Generic (PLEG): container finished" podID="c389f482-3000-4f94-924d-158c9d51a2e9" containerID="42f9e9f8d59b24e0b6b81176e439582dcdd7e50952fc2dd61a52acdcf4367e0e" exitCode=143 Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.320501 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04885075-4d84-445b-b7c8-b6afaeb71600","Type":"ContainerDied","Data":"bf20266e4cfe55b7c0fb276116a695f1394fff8979737a421fee8025a90a67f5"} Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.320543 4813 scope.go:117] "RemoveContainer" containerID="a75fc405c7da94e09b4fc05ed5bb23e7a20d02a1e8bcaeab522029453f4a2393" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.320637 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.328094 4813 generic.go:334] "Generic (PLEG): container finished" podID="153c22ed-7e2e-496e-9a13-9ef0ce79efd8" containerID="84023d51754b9bdea418d6af28a79e9ce79c0d010f1e0a43ee78d69255fd77b9" exitCode=143 Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.328200 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"153c22ed-7e2e-496e-9a13-9ef0ce79efd8","Type":"ContainerDied","Data":"84023d51754b9bdea418d6af28a79e9ce79c0d010f1e0a43ee78d69255fd77b9"} Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.329303 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f626-account-create-update-pclhq" event={"ID":"79168133-5b77-4689-89b3-5f15fa765750","Type":"ContainerStarted","Data":"2787dd4164bbb405aa41a9c594e82c8ea35a9af6fcdfd1501f313a8da5f33c86"} Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.359248 4813 scope.go:117] "RemoveContainer" containerID="13cc33bfb33bb924ac9f9d0035a948ba6c69e8d3c3ca76d6f358973425452794" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.366775 4813 generic.go:334] "Generic (PLEG): container finished" podID="4dac18d7-a3cc-46ce-98be-bf34e69398d7" containerID="52e092f91f830324e6e20c3dd977a8a1a5e42acbc30389990cc49f2b8250eba6" exitCode=0 Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.366881 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4dac18d7-a3cc-46ce-98be-bf34e69398d7","Type":"ContainerDied","Data":"52e092f91f830324e6e20c3dd977a8a1a5e42acbc30389990cc49f2b8250eba6"} Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.367048 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.368307 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" podUID="c239fd72-88d6-4394-bf24-be4fb0b3e579" containerName="proxy-httpd" containerID="cri-o://452e2a9d879b10f739d3b19cd7e40ef58643848e29fa2a75758bdac304dc3d57" gracePeriod=30 Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.368549 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" podUID="c239fd72-88d6-4394-bf24-be4fb0b3e579" containerName="proxy-server" containerID="cri-o://fa176bf7b54c8db7a012a6c8fd130237f020b6ebfd7bd99836698ae1cab7d252" gracePeriod=30 Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.369022 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.378135 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-vencrypt-tls-certs\") pod \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.378188 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8667\" (UniqueName: \"kubernetes.io/projected/4dac18d7-a3cc-46ce-98be-bf34e69398d7-kube-api-access-q8667\") pod \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.378279 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-config-data\") pod \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\" (UID: 
\"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.378313 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-combined-ca-bundle\") pod \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.378353 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-nova-novncproxy-tls-certs\") pod \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\" (UID: \"4dac18d7-a3cc-46ce-98be-bf34e69398d7\") " Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.389123 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dac18d7-a3cc-46ce-98be-bf34e69398d7-kube-api-access-q8667" (OuterVolumeSpecName: "kube-api-access-q8667") pod "4dac18d7-a3cc-46ce-98be-bf34e69398d7" (UID: "4dac18d7-a3cc-46ce-98be-bf34e69398d7"). InnerVolumeSpecName "kube-api-access-q8667". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.392305 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vfdhs"] Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.392710 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3304ea7-bdca-4b4c-b290-07eacbc6a646" containerName="ovsdbserver-nb" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.392722 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3304ea7-bdca-4b4c-b290-07eacbc6a646" containerName="ovsdbserver-nb" Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.392736 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279e5b26-1a20-4809-8dfe-ff290d191f38" containerName="dnsmasq-dns" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.392742 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="279e5b26-1a20-4809-8dfe-ff290d191f38" containerName="dnsmasq-dns" Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.392754 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279e5b26-1a20-4809-8dfe-ff290d191f38" containerName="init" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.392761 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="279e5b26-1a20-4809-8dfe-ff290d191f38" containerName="init" Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.392774 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25674cd0-03bb-481f-b039-b3b1db4ea1d4" containerName="openstack-network-exporter" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.392780 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="25674cd0-03bb-481f-b039-b3b1db4ea1d4" containerName="openstack-network-exporter" Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.392790 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3304ea7-bdca-4b4c-b290-07eacbc6a646" 
containerName="openstack-network-exporter" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.392795 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3304ea7-bdca-4b4c-b290-07eacbc6a646" containerName="openstack-network-exporter" Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.392807 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8734aca-5b08-4847-b485-1d31add9fba1" containerName="openstack-network-exporter" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.392813 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8734aca-5b08-4847-b485-1d31add9fba1" containerName="openstack-network-exporter" Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.392822 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04885075-4d84-445b-b7c8-b6afaeb71600" containerName="cinder-scheduler" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.392828 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="04885075-4d84-445b-b7c8-b6afaeb71600" containerName="cinder-scheduler" Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.392835 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25674cd0-03bb-481f-b039-b3b1db4ea1d4" containerName="ovsdbserver-sb" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.392841 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="25674cd0-03bb-481f-b039-b3b1db4ea1d4" containerName="ovsdbserver-sb" Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.392852 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dac18d7-a3cc-46ce-98be-bf34e69398d7" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.392858 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dac18d7-a3cc-46ce-98be-bf34e69398d7" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.392870 4813 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="04885075-4d84-445b-b7c8-b6afaeb71600" containerName="probe" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.392875 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="04885075-4d84-445b-b7c8-b6afaeb71600" containerName="probe" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.393054 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="04885075-4d84-445b-b7c8-b6afaeb71600" containerName="cinder-scheduler" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.393062 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3304ea7-bdca-4b4c-b290-07eacbc6a646" containerName="ovsdbserver-nb" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.393093 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="25674cd0-03bb-481f-b039-b3b1db4ea1d4" containerName="openstack-network-exporter" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.393104 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="279e5b26-1a20-4809-8dfe-ff290d191f38" containerName="dnsmasq-dns" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.393113 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8734aca-5b08-4847-b485-1d31add9fba1" containerName="openstack-network-exporter" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.393122 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dac18d7-a3cc-46ce-98be-bf34e69398d7" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.393130 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="25674cd0-03bb-481f-b039-b3b1db4ea1d4" containerName="ovsdbserver-sb" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.393146 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3304ea7-bdca-4b4c-b290-07eacbc6a646" containerName="openstack-network-exporter" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 
18:52:54.393155 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="04885075-4d84-445b-b7c8-b6afaeb71600" containerName="probe" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.393701 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vfdhs" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.400457 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.400629 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vfdhs"] Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.447275 4813 scope.go:117] "RemoveContainer" containerID="52e092f91f830324e6e20c3dd977a8a1a5e42acbc30389990cc49f2b8250eba6" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.486456 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdjqd\" (UniqueName: \"kubernetes.io/projected/21e36d83-8928-4c58-8432-eb51809336a7-kube-api-access-zdjqd\") pod \"root-account-create-update-vfdhs\" (UID: \"21e36d83-8928-4c58-8432-eb51809336a7\") " pod="openstack/root-account-create-update-vfdhs" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.488208 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21e36d83-8928-4c58-8432-eb51809336a7-operator-scripts\") pod \"root-account-create-update-vfdhs\" (UID: \"21e36d83-8928-4c58-8432-eb51809336a7\") " pod="openstack/root-account-create-update-vfdhs" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.488586 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8667\" (UniqueName: \"kubernetes.io/projected/4dac18d7-a3cc-46ce-98be-bf34e69398d7-kube-api-access-q8667\") on node \"crc\" DevicePath \"\"" Feb 19 
18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.576257 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-config-data" (OuterVolumeSpecName: "config-data") pod "4dac18d7-a3cc-46ce-98be-bf34e69398d7" (UID: "4dac18d7-a3cc-46ce-98be-bf34e69398d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.577879 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4dac18d7-a3cc-46ce-98be-bf34e69398d7" (UID: "4dac18d7-a3cc-46ce-98be-bf34e69398d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.592825 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21e36d83-8928-4c58-8432-eb51809336a7-operator-scripts\") pod \"root-account-create-update-vfdhs\" (UID: \"21e36d83-8928-4c58-8432-eb51809336a7\") " pod="openstack/root-account-create-update-vfdhs" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.593039 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdjqd\" (UniqueName: \"kubernetes.io/projected/21e36d83-8928-4c58-8432-eb51809336a7-kube-api-access-zdjqd\") pod \"root-account-create-update-vfdhs\" (UID: \"21e36d83-8928-4c58-8432-eb51809336a7\") " pod="openstack/root-account-create-update-vfdhs" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.593228 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 
18:52:54.593240 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.603276 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21e36d83-8928-4c58-8432-eb51809336a7-operator-scripts\") pod \"root-account-create-update-vfdhs\" (UID: \"21e36d83-8928-4c58-8432-eb51809336a7\") " pod="openstack/root-account-create-update-vfdhs" Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.605687 4813 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Feb 19 18:52:54 crc kubenswrapper[4813]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-02-19T18:52:52Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 19 18:52:54 crc kubenswrapper[4813]: /etc/init.d/functions: line 589: 386 Alarm clock "$@" Feb 19 18:52:54 crc kubenswrapper[4813]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-vr4rs" message=< Feb 19 18:52:54 crc kubenswrapper[4813]: Exiting ovn-controller (1) [FAILED] Feb 19 18:52:54 crc kubenswrapper[4813]: Killing ovn-controller (1) [ OK ] Feb 19 18:52:54 crc kubenswrapper[4813]: 2026-02-19T18:52:52Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 19 18:52:54 crc kubenswrapper[4813]: /etc/init.d/functions: line 589: 386 Alarm clock "$@" Feb 19 18:52:54 crc kubenswrapper[4813]: > Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.605739 4813 kuberuntime_container.go:691] "PreStop hook failed" err=< Feb 19 18:52:54 crc kubenswrapper[4813]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-02-19T18:52:52Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Feb 19 
18:52:54 crc kubenswrapper[4813]: /etc/init.d/functions: line 589: 386 Alarm clock "$@" Feb 19 18:52:54 crc kubenswrapper[4813]: > pod="openstack/ovn-controller-vr4rs" podUID="abaee778-ea35-4887-90c8-2834d3eef00d" containerName="ovn-controller" containerID="cri-o://0f7e482e975d5a408c6ad81d42d1de1741994d6d368e1feef74405613c2006a7" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.605773 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-vr4rs" podUID="abaee778-ea35-4887-90c8-2834d3eef00d" containerName="ovn-controller" containerID="cri-o://0f7e482e975d5a408c6ad81d42d1de1741994d6d368e1feef74405613c2006a7" gracePeriod=27 Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.624525 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="6ad2b86e-f285-4acc-a87b-18f97baf0294" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.164:8776/healthcheck\": read tcp 10.217.0.2:44802->10.217.0.164:8776: read: connection reset by peer" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.628768 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.632488 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdjqd\" (UniqueName: \"kubernetes.io/projected/21e36d83-8928-4c58-8432-eb51809336a7-kube-api-access-zdjqd\") pod \"root-account-create-update-vfdhs\" (UID: \"21e36d83-8928-4c58-8432-eb51809336a7\") " pod="openstack/root-account-create-update-vfdhs" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.636489 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "4dac18d7-a3cc-46ce-98be-bf34e69398d7" (UID: "4dac18d7-a3cc-46ce-98be-bf34e69398d7"). 
InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.636789 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.680375 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "4dac18d7-a3cc-46ce-98be-bf34e69398d7" (UID: "4dac18d7-a3cc-46ce-98be-bf34e69398d7"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.694353 4813 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.694381 4813 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dac18d7-a3cc-46ce-98be-bf34e69398d7-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.702081 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vfdhs" Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.797771 4813 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.797850 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0055f8bf-7085-42c5-86d4-9cee7033a7d1-operator-scripts podName:0055f8bf-7085-42c5-86d4-9cee7033a7d1 nodeName:}" failed. 
No retries permitted until 2026-02-19 18:52:56.797835991 +0000 UTC m=+1396.023276532 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/0055f8bf-7085-42c5-86d4-9cee7033a7d1-operator-scripts") pod "root-account-create-update-2kbrh" (UID: "0055f8bf-7085-42c5-86d4-9cee7033a7d1") : configmap "openstack-cell1-scripts" not found Feb 19 18:52:54 crc kubenswrapper[4813]: I0219 18:52:54.850432 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.940085 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0f7e482e975d5a408c6ad81d42d1de1741994d6d368e1feef74405613c2006a7 is running failed: container process not found" containerID="0f7e482e975d5a408c6ad81d42d1de1741994d6d368e1feef74405613c2006a7" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.940395 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0f7e482e975d5a408c6ad81d42d1de1741994d6d368e1feef74405613c2006a7 is running failed: container process not found" containerID="0f7e482e975d5a408c6ad81d42d1de1741994d6d368e1feef74405613c2006a7" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.940563 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0f7e482e975d5a408c6ad81d42d1de1741994d6d368e1feef74405613c2006a7 is running failed: container process not found" containerID="0f7e482e975d5a408c6ad81d42d1de1741994d6d368e1feef74405613c2006a7" 
cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.940591 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0f7e482e975d5a408c6ad81d42d1de1741994d6d368e1feef74405613c2006a7 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-vr4rs" podUID="abaee778-ea35-4887-90c8-2834d3eef00d" containerName="ovn-controller" Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.964454 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.964493 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.965763 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.966015 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.967782 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.967832 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-w7s6z" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovs-vswitchd" Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.968499 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 18:52:54 crc kubenswrapper[4813]: E0219 18:52:54.968522 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-w7s6z" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovsdb-server" Feb 19 18:52:54 crc 
kubenswrapper[4813]: I0219 18:52:54.972306 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-570b-account-create-update-h7vw7" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.011573 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-kolla-config\") pod \"a7385c55-b36b-486d-add0-958b8cece7de\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.011614 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7385c55-b36b-486d-add0-958b8cece7de-galera-tls-certs\") pod \"a7385c55-b36b-486d-add0-958b8cece7de\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.020238 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"a7385c55-b36b-486d-add0-958b8cece7de\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.020279 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6b86\" (UniqueName: \"kubernetes.io/projected/a7385c55-b36b-486d-add0-958b8cece7de-kube-api-access-w6b86\") pod \"a7385c55-b36b-486d-add0-958b8cece7de\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.020349 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-config-data-default\") pod \"a7385c55-b36b-486d-add0-958b8cece7de\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.020402 
4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-operator-scripts\") pod \"a7385c55-b36b-486d-add0-958b8cece7de\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.020486 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7385c55-b36b-486d-add0-958b8cece7de-combined-ca-bundle\") pod \"a7385c55-b36b-486d-add0-958b8cece7de\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.012379 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "a7385c55-b36b-486d-add0-958b8cece7de" (UID: "a7385c55-b36b-486d-add0-958b8cece7de"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.021831 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7385c55-b36b-486d-add0-958b8cece7de" (UID: "a7385c55-b36b-486d-add0-958b8cece7de"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.022180 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a7385c55-b36b-486d-add0-958b8cece7de-config-data-generated\") pod \"a7385c55-b36b-486d-add0-958b8cece7de\" (UID: \"a7385c55-b36b-486d-add0-958b8cece7de\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.022256 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7385c55-b36b-486d-add0-958b8cece7de-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "a7385c55-b36b-486d-add0-958b8cece7de" (UID: "a7385c55-b36b-486d-add0-958b8cece7de"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.022461 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "a7385c55-b36b-486d-add0-958b8cece7de" (UID: "a7385c55-b36b-486d-add0-958b8cece7de"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.027053 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a7385c55-b36b-486d-add0-958b8cece7de-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.027083 4813 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.027092 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.027101 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7385c55-b36b-486d-add0-958b8cece7de-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.036207 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7385c55-b36b-486d-add0-958b8cece7de-kube-api-access-w6b86" (OuterVolumeSpecName: "kube-api-access-w6b86") pod "a7385c55-b36b-486d-add0-958b8cece7de" (UID: "a7385c55-b36b-486d-add0-958b8cece7de"). InnerVolumeSpecName "kube-api-access-w6b86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.037847 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.045808 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.060238 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "a7385c55-b36b-486d-add0-958b8cece7de" (UID: "a7385c55-b36b-486d-add0-958b8cece7de"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.066680 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4d36-account-create-update-tpk8j" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.081795 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-345c-account-create-update-wtmbd" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.087615 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2kbrh" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.093666 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cde0-account-create-update-b8qsr" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.098916 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f626-account-create-update-pclhq" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.100043 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7385c55-b36b-486d-add0-958b8cece7de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7385c55-b36b-486d-add0-958b8cece7de" (UID: "a7385c55-b36b-486d-add0-958b8cece7de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.127150 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7385c55-b36b-486d-add0-958b8cece7de-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "a7385c55-b36b-486d-add0-958b8cece7de" (UID: "a7385c55-b36b-486d-add0-958b8cece7de"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.127656 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79168133-5b77-4689-89b3-5f15fa765750-operator-scripts\") pod \"79168133-5b77-4689-89b3-5f15fa765750\" (UID: \"79168133-5b77-4689-89b3-5f15fa765750\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.127715 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51a5d629-ac3a-4046-be99-01b665dce3ac-operator-scripts\") pod \"51a5d629-ac3a-4046-be99-01b665dce3ac\" (UID: \"51a5d629-ac3a-4046-be99-01b665dce3ac\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.127757 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-558sp\" (UniqueName: \"kubernetes.io/projected/acbc878e-b3ae-49db-8fca-1300efb20564-kube-api-access-558sp\") pod 
\"acbc878e-b3ae-49db-8fca-1300efb20564\" (UID: \"acbc878e-b3ae-49db-8fca-1300efb20564\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.127784 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1df3c12-83d7-4cf9-8f00-8cbd54be355f-operator-scripts\") pod \"e1df3c12-83d7-4cf9-8f00-8cbd54be355f\" (UID: \"e1df3c12-83d7-4cf9-8f00-8cbd54be355f\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.127819 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx4vd\" (UniqueName: \"kubernetes.io/projected/f10d113b-edb7-4a73-ada0-659e82a43e84-kube-api-access-tx4vd\") pod \"f10d113b-edb7-4a73-ada0-659e82a43e84\" (UID: \"f10d113b-edb7-4a73-ada0-659e82a43e84\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.127836 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2llgw\" (UniqueName: \"kubernetes.io/projected/e1df3c12-83d7-4cf9-8f00-8cbd54be355f-kube-api-access-2llgw\") pod \"e1df3c12-83d7-4cf9-8f00-8cbd54be355f\" (UID: \"e1df3c12-83d7-4cf9-8f00-8cbd54be355f\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.127869 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0055f8bf-7085-42c5-86d4-9cee7033a7d1-operator-scripts\") pod \"0055f8bf-7085-42c5-86d4-9cee7033a7d1\" (UID: \"0055f8bf-7085-42c5-86d4-9cee7033a7d1\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.127917 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj6tx\" (UniqueName: \"kubernetes.io/projected/51a5d629-ac3a-4046-be99-01b665dce3ac-kube-api-access-dj6tx\") pod \"51a5d629-ac3a-4046-be99-01b665dce3ac\" (UID: \"51a5d629-ac3a-4046-be99-01b665dce3ac\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.127943 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f10d113b-edb7-4a73-ada0-659e82a43e84-operator-scripts\") pod \"f10d113b-edb7-4a73-ada0-659e82a43e84\" (UID: \"f10d113b-edb7-4a73-ada0-659e82a43e84\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.127986 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kcrh\" (UniqueName: \"kubernetes.io/projected/0055f8bf-7085-42c5-86d4-9cee7033a7d1-kube-api-access-8kcrh\") pod \"0055f8bf-7085-42c5-86d4-9cee7033a7d1\" (UID: \"0055f8bf-7085-42c5-86d4-9cee7033a7d1\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.128007 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lgch\" (UniqueName: \"kubernetes.io/projected/79168133-5b77-4689-89b3-5f15fa765750-kube-api-access-5lgch\") pod \"79168133-5b77-4689-89b3-5f15fa765750\" (UID: \"79168133-5b77-4689-89b3-5f15fa765750\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.128101 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acbc878e-b3ae-49db-8fca-1300efb20564-operator-scripts\") pod \"acbc878e-b3ae-49db-8fca-1300efb20564\" (UID: \"acbc878e-b3ae-49db-8fca-1300efb20564\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.128853 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7385c55-b36b-486d-add0-958b8cece7de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.128874 4813 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7385c55-b36b-486d-add0-958b8cece7de-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.128892 4813 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.128904 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6b86\" (UniqueName: \"kubernetes.io/projected/a7385c55-b36b-486d-add0-958b8cece7de-kube-api-access-w6b86\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.135690 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acbc878e-b3ae-49db-8fca-1300efb20564-kube-api-access-558sp" (OuterVolumeSpecName: "kube-api-access-558sp") pod "acbc878e-b3ae-49db-8fca-1300efb20564" (UID: "acbc878e-b3ae-49db-8fca-1300efb20564"). InnerVolumeSpecName "kube-api-access-558sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.136123 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79168133-5b77-4689-89b3-5f15fa765750-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79168133-5b77-4689-89b3-5f15fa765750" (UID: "79168133-5b77-4689-89b3-5f15fa765750"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.136506 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51a5d629-ac3a-4046-be99-01b665dce3ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51a5d629-ac3a-4046-be99-01b665dce3ac" (UID: "51a5d629-ac3a-4046-be99-01b665dce3ac"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.136820 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1df3c12-83d7-4cf9-8f00-8cbd54be355f-kube-api-access-2llgw" (OuterVolumeSpecName: "kube-api-access-2llgw") pod "e1df3c12-83d7-4cf9-8f00-8cbd54be355f" (UID: "e1df3c12-83d7-4cf9-8f00-8cbd54be355f"). InnerVolumeSpecName "kube-api-access-2llgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.137433 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1df3c12-83d7-4cf9-8f00-8cbd54be355f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1df3c12-83d7-4cf9-8f00-8cbd54be355f" (UID: "e1df3c12-83d7-4cf9-8f00-8cbd54be355f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.138107 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f10d113b-edb7-4a73-ada0-659e82a43e84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f10d113b-edb7-4a73-ada0-659e82a43e84" (UID: "f10d113b-edb7-4a73-ada0-659e82a43e84"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.138513 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0055f8bf-7085-42c5-86d4-9cee7033a7d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0055f8bf-7085-42c5-86d4-9cee7033a7d1" (UID: "0055f8bf-7085-42c5-86d4-9cee7033a7d1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.139758 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f10d113b-edb7-4a73-ada0-659e82a43e84-kube-api-access-tx4vd" (OuterVolumeSpecName: "kube-api-access-tx4vd") pod "f10d113b-edb7-4a73-ada0-659e82a43e84" (UID: "f10d113b-edb7-4a73-ada0-659e82a43e84"). InnerVolumeSpecName "kube-api-access-tx4vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.140083 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acbc878e-b3ae-49db-8fca-1300efb20564-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "acbc878e-b3ae-49db-8fca-1300efb20564" (UID: "acbc878e-b3ae-49db-8fca-1300efb20564"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.149091 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0055f8bf-7085-42c5-86d4-9cee7033a7d1-kube-api-access-8kcrh" (OuterVolumeSpecName: "kube-api-access-8kcrh") pod "0055f8bf-7085-42c5-86d4-9cee7033a7d1" (UID: "0055f8bf-7085-42c5-86d4-9cee7033a7d1"). InnerVolumeSpecName "kube-api-access-8kcrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.149189 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79168133-5b77-4689-89b3-5f15fa765750-kube-api-access-5lgch" (OuterVolumeSpecName: "kube-api-access-5lgch") pod "79168133-5b77-4689-89b3-5f15fa765750" (UID: "79168133-5b77-4689-89b3-5f15fa765750"). InnerVolumeSpecName "kube-api-access-5lgch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.149244 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51a5d629-ac3a-4046-be99-01b665dce3ac-kube-api-access-dj6tx" (OuterVolumeSpecName: "kube-api-access-dj6tx") pod "51a5d629-ac3a-4046-be99-01b665dce3ac" (UID: "51a5d629-ac3a-4046-be99-01b665dce3ac"). InnerVolumeSpecName "kube-api-access-dj6tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.164410 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.221429 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.221688 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="ceilometer-central-agent" containerID="cri-o://b40ac96b08f97e5858b87830a251d48e4c373ca1dd61bf0a07e6db7e47f55c35" gracePeriod=30 Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.221755 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="sg-core" containerID="cri-o://579e9c4c32174ff288dea84c76634ebfebcf521ba3330c8727fcb1173b1e3d77" gracePeriod=30 Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.221847 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="proxy-httpd" containerID="cri-o://56865a353e832ea3d2554a9134268053435401ed9d60bb86dd7266e7a226e156" gracePeriod=30 Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.221927 
4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="ceilometer-notification-agent" containerID="cri-o://7b5552d482368d69a980b6f9acc9f03b5afc2c61db4aac5df25c2dd3bc9b75f1" gracePeriod=30 Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.233823 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx4vd\" (UniqueName: \"kubernetes.io/projected/f10d113b-edb7-4a73-ada0-659e82a43e84-kube-api-access-tx4vd\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.233851 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2llgw\" (UniqueName: \"kubernetes.io/projected/e1df3c12-83d7-4cf9-8f00-8cbd54be355f-kube-api-access-2llgw\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.233861 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0055f8bf-7085-42c5-86d4-9cee7033a7d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.233869 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj6tx\" (UniqueName: \"kubernetes.io/projected/51a5d629-ac3a-4046-be99-01b665dce3ac-kube-api-access-dj6tx\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.233879 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f10d113b-edb7-4a73-ada0-659e82a43e84-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.233887 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kcrh\" (UniqueName: \"kubernetes.io/projected/0055f8bf-7085-42c5-86d4-9cee7033a7d1-kube-api-access-8kcrh\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc 
kubenswrapper[4813]: I0219 18:52:55.233898 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lgch\" (UniqueName: \"kubernetes.io/projected/79168133-5b77-4689-89b3-5f15fa765750-kube-api-access-5lgch\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.233907 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.233915 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acbc878e-b3ae-49db-8fca-1300efb20564-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.233924 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79168133-5b77-4689-89b3-5f15fa765750-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.233931 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51a5d629-ac3a-4046-be99-01b665dce3ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.233939 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-558sp\" (UniqueName: \"kubernetes.io/projected/acbc878e-b3ae-49db-8fca-1300efb20564-kube-api-access-558sp\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.234011 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1df3c12-83d7-4cf9-8f00-8cbd54be355f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.234089 4813 secret.go:188] Couldn't get secret 
openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.234127 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-config-data podName:c239fd72-88d6-4394-bf24-be4fb0b3e579 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:57.234114836 +0000 UTC m=+1396.459555377 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-config-data") pod "swift-proxy-5bfc47d69f-qrwdk" (UID: "c239fd72-88d6-4394-bf24-be4fb0b3e579") : secret "swift-proxy-config-data" not found Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.234167 4813 projected.go:263] Couldn't get secret openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.234176 4813 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.234187 4813 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.234197 4813 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-5bfc47d69f-qrwdk: [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.234218 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-etc-swift podName:c239fd72-88d6-4394-bf24-be4fb0b3e579 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:57.234212618 +0000 UTC m=+1396.459653159 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-etc-swift") pod "swift-proxy-5bfc47d69f-qrwdk" (UID: "c239fd72-88d6-4394-bf24-be4fb0b3e579") : [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.240492 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d489-account-create-update-7mn8q" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.247644 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.247847 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2ca7379e-357b-4246-a820-c1aed48b722e" containerName="kube-state-metrics" containerID="cri-o://a32a6d2b95db3921a0795c8dfb6c3f6ac4703950bf4f5218f53c1129cb7f36d4" gracePeriod=30 Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.287221 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c594-account-create-update-xlflc" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.298477 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="c69ff3db-8806-451a-9df0-c6289c327579" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.337057 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.337123 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data podName:db22a584-f05a-41ba-ad23-387b4100a9e1 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:59.337108714 +0000 UTC m=+1398.562549255 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data") pod "rabbitmq-cell1-server-0" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1") : configmap "rabbitmq-cell1-config-data" not found Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.380436 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-345c-account-create-update-wtmbd" event={"ID":"acbc878e-b3ae-49db-8fca-1300efb20564","Type":"ContainerDied","Data":"578ef21254199d92980dbe566c9c064d7ccb1d84b1a0073aab5cae187a169ab8"} Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.380534 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-345c-account-create-update-wtmbd" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.393444 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cde0-account-create-update-b8qsr" event={"ID":"51a5d629-ac3a-4046-be99-01b665dce3ac","Type":"ContainerDied","Data":"52a812711a85227bba1f14019d8dcee7289de2858808238f7d590ed1043d8f3e"} Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.393563 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cde0-account-create-update-b8qsr" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.397071 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d489-account-create-update-7mn8q" event={"ID":"2ed035dd-719a-45e9-825f-4ce51dbb9866","Type":"ContainerDied","Data":"d93e6656c6369a1270347b793c8072d0a5bc7cbb8fe07f23a7b32e75ac306789"} Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.397164 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d489-account-create-update-7mn8q" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.407798 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="6ad2b86e-f285-4acc-a87b-18f97baf0294" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.164:8776/healthcheck\": dial tcp 10.217.0.164:8776: connect: connection refused" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.416123 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-570b-account-create-update-h7vw7" event={"ID":"e1df3c12-83d7-4cf9-8f00-8cbd54be355f","Type":"ContainerDied","Data":"d98535ab9e93ecd71847a9c38758a825824a1437602664cf194073ac94f7cc01"} Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.416219 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-570b-account-create-update-h7vw7" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.449924 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ed035dd-719a-45e9-825f-4ce51dbb9866-operator-scripts\") pod \"2ed035dd-719a-45e9-825f-4ce51dbb9866\" (UID: \"2ed035dd-719a-45e9-825f-4ce51dbb9866\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.449990 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-262z5\" (UniqueName: \"kubernetes.io/projected/c3275e7f-f99c-431a-a777-ea9a62895faa-kube-api-access-262z5\") pod \"c3275e7f-f99c-431a-a777-ea9a62895faa\" (UID: \"c3275e7f-f99c-431a-a777-ea9a62895faa\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.450049 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3275e7f-f99c-431a-a777-ea9a62895faa-operator-scripts\") pod \"c3275e7f-f99c-431a-a777-ea9a62895faa\" (UID: \"c3275e7f-f99c-431a-a777-ea9a62895faa\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.450193 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwv\" (UniqueName: \"kubernetes.io/projected/2ed035dd-719a-45e9-825f-4ce51dbb9866-kube-api-access-xcgwv\") pod \"2ed035dd-719a-45e9-825f-4ce51dbb9866\" (UID: \"2ed035dd-719a-45e9-825f-4ce51dbb9866\") " Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.450493 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed035dd-719a-45e9-825f-4ce51dbb9866-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ed035dd-719a-45e9-825f-4ce51dbb9866" (UID: "2ed035dd-719a-45e9-825f-4ce51dbb9866"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.450945 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3275e7f-f99c-431a-a777-ea9a62895faa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3275e7f-f99c-431a-a777-ea9a62895faa" (UID: "c3275e7f-f99c-431a-a777-ea9a62895faa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.451449 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3275e7f-f99c-431a-a777-ea9a62895faa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.451462 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ed035dd-719a-45e9-825f-4ce51dbb9866-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.453206 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c594-account-create-update-xlflc" event={"ID":"c3275e7f-f99c-431a-a777-ea9a62895faa","Type":"ContainerDied","Data":"9f10dce0112b6d7935171c9dcb4ec77377fa495f4671e3622adcded492b9636f"} Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.453363 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c594-account-create-update-xlflc" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.457177 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3275e7f-f99c-431a-a777-ea9a62895faa-kube-api-access-262z5" (OuterVolumeSpecName: "kube-api-access-262z5") pod "c3275e7f-f99c-431a-a777-ea9a62895faa" (UID: "c3275e7f-f99c-431a-a777-ea9a62895faa"). InnerVolumeSpecName "kube-api-access-262z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.464317 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-345c-account-create-update-wtmbd"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.477362 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed035dd-719a-45e9-825f-4ce51dbb9866-kube-api-access-xcgwv" (OuterVolumeSpecName: "kube-api-access-xcgwv") pod "2ed035dd-719a-45e9-825f-4ce51dbb9866" (UID: "2ed035dd-719a-45e9-825f-4ce51dbb9866"). InnerVolumeSpecName "kube-api-access-xcgwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.501467 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e6ba2ed7bbec736dcf293ba52811dd6e91ad2796a685f3c0760ed0b7c04cd3a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.503478 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04885075-4d84-445b-b7c8-b6afaeb71600" path="/var/lib/kubelet/pods/04885075-4d84-445b-b7c8-b6afaeb71600/volumes" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.504279 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25674cd0-03bb-481f-b039-b3b1db4ea1d4" path="/var/lib/kubelet/pods/25674cd0-03bb-481f-b039-b3b1db4ea1d4/volumes" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.505225 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="279e5b26-1a20-4809-8dfe-ff290d191f38" path="/var/lib/kubelet/pods/279e5b26-1a20-4809-8dfe-ff290d191f38/volumes" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.506015 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4dac18d7-a3cc-46ce-98be-bf34e69398d7" path="/var/lib/kubelet/pods/4dac18d7-a3cc-46ce-98be-bf34e69398d7/volumes" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.507120 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3304ea7-bdca-4b4c-b290-07eacbc6a646" path="/var/lib/kubelet/pods/b3304ea7-bdca-4b4c-b290-07eacbc6a646/volumes" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.507742 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8734aca-5b08-4847-b485-1d31add9fba1" path="/var/lib/kubelet/pods/c8734aca-5b08-4847-b485-1d31add9fba1/volumes" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.509271 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-345c-account-create-update-wtmbd"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.513768 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f626-account-create-update-pclhq" event={"ID":"79168133-5b77-4689-89b3-5f15fa765750","Type":"ContainerDied","Data":"2787dd4164bbb405aa41a9c594e82c8ea35a9af6fcdfd1501f313a8da5f33c86"} Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.510607 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f626-account-create-update-pclhq" Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.544396 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e6ba2ed7bbec736dcf293ba52811dd6e91ad2796a685f3c0760ed0b7c04cd3a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.554250 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwv\" (UniqueName: \"kubernetes.io/projected/2ed035dd-719a-45e9-825f-4ce51dbb9866-kube-api-access-xcgwv\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.554275 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-262z5\" (UniqueName: \"kubernetes.io/projected/c3275e7f-f99c-431a-a777-ea9a62895faa-kube-api-access-262z5\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.574237 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e6ba2ed7bbec736dcf293ba52811dd6e91ad2796a685f3c0760ed0b7c04cd3a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.574303 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b69a5561-31f2-4e4f-96d3-d0db19a6a51f" containerName="nova-scheduler-scheduler" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.574670 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cde0-account-create-update-b8qsr"] Feb 19 
18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.579188 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2kbrh" event={"ID":"0055f8bf-7085-42c5-86d4-9cee7033a7d1","Type":"ContainerDied","Data":"3a43e478c1966d3727ef30c1ebfeb4bfd4d6ad484032c1319713b57514a654f4"} Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.579406 4813 scope.go:117] "RemoveContainer" containerID="8245f3047ff057018667950e19c6b9f5ff80069a6b7c3615c7b2cf96bcd2f2ae" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.579494 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2kbrh" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.598809 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cde0-account-create-update-b8qsr"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.611946 4813 generic.go:334] "Generic (PLEG): container finished" podID="c239fd72-88d6-4394-bf24-be4fb0b3e579" containerID="fa176bf7b54c8db7a012a6c8fd130237f020b6ebfd7bd99836698ae1cab7d252" exitCode=0 Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.612236 4813 generic.go:334] "Generic (PLEG): container finished" podID="c239fd72-88d6-4394-bf24-be4fb0b3e579" containerID="452e2a9d879b10f739d3b19cd7e40ef58643848e29fa2a75758bdac304dc3d57" exitCode=0 Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.612346 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" event={"ID":"c239fd72-88d6-4394-bf24-be4fb0b3e579","Type":"ContainerDied","Data":"fa176bf7b54c8db7a012a6c8fd130237f020b6ebfd7bd99836698ae1cab7d252"} Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.612417 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" event={"ID":"c239fd72-88d6-4394-bf24-be4fb0b3e579","Type":"ContainerDied","Data":"452e2a9d879b10f739d3b19cd7e40ef58643848e29fa2a75758bdac304dc3d57"} Feb 
19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.613821 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vfdhs"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.615837 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.615877 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a7385c55-b36b-486d-add0-958b8cece7de","Type":"ContainerDied","Data":"cda3228c661fa9cd58b1d7183c55a935f35b40a8be1e3bf115d78858f34014ab"} Feb 19 18:52:55 crc kubenswrapper[4813]: W0219 18:52:55.663216 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21e36d83_8928_4c58_8432_eb51809336a7.slice/crio-086214f0ec11794202792b627646bcda28eda4ea3fcb41e6a8c32ec393918aae WatchSource:0}: Error finding container 086214f0ec11794202792b627646bcda28eda4ea3fcb41e6a8c32ec393918aae: Status 404 returned error can't find the container with id 086214f0ec11794202792b627646bcda28eda4ea3fcb41e6a8c32ec393918aae Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.683334 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vr4rs_abaee778-ea35-4887-90c8-2834d3eef00d/ovn-controller/0.log" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.683398 4813 generic.go:334] "Generic (PLEG): container finished" podID="abaee778-ea35-4887-90c8-2834d3eef00d" containerID="0f7e482e975d5a408c6ad81d42d1de1741994d6d368e1feef74405613c2006a7" exitCode=143 Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.683487 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vr4rs" event={"ID":"abaee778-ea35-4887-90c8-2834d3eef00d","Type":"ContainerDied","Data":"0f7e482e975d5a408c6ad81d42d1de1741994d6d368e1feef74405613c2006a7"} Feb 19 18:52:55 crc 
kubenswrapper[4813]: I0219 18:52:55.698558 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4d36-account-create-update-tpk8j" event={"ID":"f10d113b-edb7-4a73-ada0-659e82a43e84","Type":"ContainerDied","Data":"e6422d20e30e7640f4d35e0699fa4c4c5bddd68a9027f77b0179dec3d005ec35"} Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.698822 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4d36-account-create-update-tpk8j" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.715554 4813 generic.go:334] "Generic (PLEG): container finished" podID="6ad2b86e-f285-4acc-a87b-18f97baf0294" containerID="e4b7cafb1bd9bfd44873b9606ca95255b1dc06343b23e514ba497ba165a365d6" exitCode=0 Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.715631 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6ad2b86e-f285-4acc-a87b-18f97baf0294","Type":"ContainerDied","Data":"e4b7cafb1bd9bfd44873b9606ca95255b1dc06343b23e514ba497ba165a365d6"} Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.727186 4813 generic.go:334] "Generic (PLEG): container finished" podID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerID="56865a353e832ea3d2554a9134268053435401ed9d60bb86dd7266e7a226e156" exitCode=0 Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.727221 4813 generic.go:334] "Generic (PLEG): container finished" podID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerID="579e9c4c32174ff288dea84c76634ebfebcf521ba3330c8727fcb1173b1e3d77" exitCode=2 Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.727264 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fbcd22-57c9-4c44-99e8-f9307f26a525","Type":"ContainerDied","Data":"56865a353e832ea3d2554a9134268053435401ed9d60bb86dd7266e7a226e156"} Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.727320 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"f5fbcd22-57c9-4c44-99e8-f9307f26a525","Type":"ContainerDied","Data":"579e9c4c32174ff288dea84c76634ebfebcf521ba3330c8727fcb1173b1e3d77"} Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.770111 4813 scope.go:117] "RemoveContainer" containerID="009f0408ab4ffcc4868aed437d6e30405baf42c34d352c5760db13ab83d1b55f" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.817296 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.817516 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="2d4e6cd2-75bd-43bd-9f0e-fd35002ab607" containerName="memcached" containerID="cri-o://18eaf8dca70d4fb335560e1bef6b8d6431fc9112b942b9be1509d59f63ef5a18" gracePeriod=30 Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.827723 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-734c-account-create-update-wj88l"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.867722 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-734c-account-create-update-wj88l"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.886738 4813 scope.go:117] "RemoveContainer" containerID="35531a89b45b21d436f51039dee2872292441946fc76cbc19142c9b37efc56d4" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.898759 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-734c-account-create-update-46kvp"] Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.899268 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7385c55-b36b-486d-add0-958b8cece7de" containerName="mysql-bootstrap" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.899279 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7385c55-b36b-486d-add0-958b8cece7de" containerName="mysql-bootstrap" Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 
18:52:55.899295 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7385c55-b36b-486d-add0-958b8cece7de" containerName="galera" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.899301 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7385c55-b36b-486d-add0-958b8cece7de" containerName="galera" Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.899312 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0055f8bf-7085-42c5-86d4-9cee7033a7d1" containerName="mariadb-account-create-update" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.899320 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0055f8bf-7085-42c5-86d4-9cee7033a7d1" containerName="mariadb-account-create-update" Feb 19 18:52:55 crc kubenswrapper[4813]: E0219 18:52:55.899333 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0055f8bf-7085-42c5-86d4-9cee7033a7d1" containerName="mariadb-account-create-update" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.899339 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0055f8bf-7085-42c5-86d4-9cee7033a7d1" containerName="mariadb-account-create-update" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.899496 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0055f8bf-7085-42c5-86d4-9cee7033a7d1" containerName="mariadb-account-create-update" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.899504 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7385c55-b36b-486d-add0-958b8cece7de" containerName="galera" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.902468 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-734c-account-create-update-46kvp" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.910548 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.926152 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-734c-account-create-update-46kvp"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.926417 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="db22a584-f05a-41ba-ad23-387b4100a9e1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.944510 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-lnxwr"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.948073 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8h8fd"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.967773 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5899f78d95-lmnxh"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.968036 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-5899f78d95-lmnxh" podUID="79cdc675-a16c-4c18-bcef-d844d7a2f75d" containerName="keystone-api" containerID="cri-o://11ff41f1c3a05377a4d38f769ca6258be7fe43feeb2f1e8eb06714701d85e419" gracePeriod=30 Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.986837 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8h8fd"] Feb 19 18:52:55 crc kubenswrapper[4813]: I0219 18:52:55.991912 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4f4ab094-423e-40ff-bf70-50275c1d41cb-operator-scripts\") pod \"keystone-734c-account-create-update-46kvp\" (UID: \"4f4ab094-423e-40ff-bf70-50275c1d41cb\") " pod="openstack/keystone-734c-account-create-update-46kvp" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.000148 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54vfj\" (UniqueName: \"kubernetes.io/projected/4f4ab094-423e-40ff-bf70-50275c1d41cb-kube-api-access-54vfj\") pod \"keystone-734c-account-create-update-46kvp\" (UID: \"4f4ab094-423e-40ff-bf70-50275c1d41cb\") " pod="openstack/keystone-734c-account-create-update-46kvp" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.011868 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-lnxwr"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.030675 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.047503 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c389f482-3000-4f94-924d-158c9d51a2e9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": read tcp 10.217.0.2:49082->10.217.0.202:8775: read: connection reset by peer" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.047796 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c389f482-3000-4f94-924d-158c9d51a2e9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": read tcp 10.217.0.2:49074->10.217.0.202:8775: read: connection reset by peer" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.048407 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4d36-account-create-update-tpk8j"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.060842 
4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4d36-account-create-update-tpk8j"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.069929 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-5h4p6"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.073119 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-5h4p6"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.078461 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.179:9292/healthcheck\": read tcp 10.217.0.2:35296->10.217.0.179:9292: read: connection reset by peer" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.078638 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.179:9292/healthcheck\": read tcp 10.217.0.2:35312->10.217.0.179:9292: read: connection reset by peer" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.080985 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c88676b6d-zlhlk" podUID="6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:32988->10.217.0.163:9311: read: connection reset by peer" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.081055 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7c88676b6d-zlhlk" podUID="6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:32978->10.217.0.163:9311: read: 
connection reset by peer" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.087543 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-734c-account-create-update-46kvp"] Feb 19 18:52:56 crc kubenswrapper[4813]: E0219 18:52:56.088467 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-54vfj operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-734c-account-create-update-46kvp" podUID="4f4ab094-423e-40ff-bf70-50275c1d41cb" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.099072 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-c594-account-create-update-xlflc"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.102585 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4ab094-423e-40ff-bf70-50275c1d41cb-operator-scripts\") pod \"keystone-734c-account-create-update-46kvp\" (UID: \"4f4ab094-423e-40ff-bf70-50275c1d41cb\") " pod="openstack/keystone-734c-account-create-update-46kvp" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.102668 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54vfj\" (UniqueName: \"kubernetes.io/projected/4f4ab094-423e-40ff-bf70-50275c1d41cb-kube-api-access-54vfj\") pod \"keystone-734c-account-create-update-46kvp\" (UID: \"4f4ab094-423e-40ff-bf70-50275c1d41cb\") " pod="openstack/keystone-734c-account-create-update-46kvp" Feb 19 18:52:56 crc kubenswrapper[4813]: E0219 18:52:56.102936 4813 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 19 18:52:56 crc kubenswrapper[4813]: E0219 18:52:56.102996 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f4ab094-423e-40ff-bf70-50275c1d41cb-operator-scripts 
podName:4f4ab094-423e-40ff-bf70-50275c1d41cb nodeName:}" failed. No retries permitted until 2026-02-19 18:52:56.602983159 +0000 UTC m=+1395.828423700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4f4ab094-423e-40ff-bf70-50275c1d41cb-operator-scripts") pod "keystone-734c-account-create-update-46kvp" (UID: "4f4ab094-423e-40ff-bf70-50275c1d41cb") : configmap "openstack-scripts" not found Feb 19 18:52:56 crc kubenswrapper[4813]: E0219 18:52:56.110800 4813 projected.go:194] Error preparing data for projected volume kube-api-access-54vfj for pod openstack/keystone-734c-account-create-update-46kvp: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 18:52:56 crc kubenswrapper[4813]: E0219 18:52:56.110877 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f4ab094-423e-40ff-bf70-50275c1d41cb-kube-api-access-54vfj podName:4f4ab094-423e-40ff-bf70-50275c1d41cb nodeName:}" failed. No retries permitted until 2026-02-19 18:52:56.610857852 +0000 UTC m=+1395.836298393 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-54vfj" (UniqueName: "kubernetes.io/projected/4f4ab094-423e-40ff-bf70-50275c1d41cb-kube-api-access-54vfj") pod "keystone-734c-account-create-update-46kvp" (UID: "4f4ab094-423e-40ff-bf70-50275c1d41cb") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.139065 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-c594-account-create-update-xlflc"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.156242 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vfdhs"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.191007 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-570b-account-create-update-h7vw7"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.201741 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-570b-account-create-update-h7vw7"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.208861 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.213761 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.225410 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d489-account-create-update-7mn8q"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.241942 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d489-account-create-update-7mn8q"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.250102 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2kbrh"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.259141 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/root-account-create-update-2kbrh"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.291789 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f626-account-create-update-pclhq"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.294883 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f626-account-create-update-pclhq"] Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.329313 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="2f8101a7-841e-4fa7-b98a-030b82e66c94" containerName="galera" containerID="cri-o://bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5" gracePeriod=30 Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.405154 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.428572 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.430504 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vr4rs_abaee778-ea35-4887-90c8-2834d3eef00d/ovn-controller/0.log" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.430593 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vr4rs" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.431537 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.555282 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.565306 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.616993 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-log-ovn\") pod \"abaee778-ea35-4887-90c8-2834d3eef00d\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617043 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-config-data-custom\") pod \"6ad2b86e-f285-4acc-a87b-18f97baf0294\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617100 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-config-data\") pod \"c239fd72-88d6-4394-bf24-be4fb0b3e579\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617119 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/abaee778-ea35-4887-90c8-2834d3eef00d-ovn-controller-tls-certs\") pod \"abaee778-ea35-4887-90c8-2834d3eef00d\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617116 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "abaee778-ea35-4887-90c8-2834d3eef00d" (UID: 
"abaee778-ea35-4887-90c8-2834d3eef00d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617147 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abaee778-ea35-4887-90c8-2834d3eef00d-combined-ca-bundle\") pod \"abaee778-ea35-4887-90c8-2834d3eef00d\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617169 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-public-tls-certs\") pod \"6ad2b86e-f285-4acc-a87b-18f97baf0294\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617204 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abaee778-ea35-4887-90c8-2834d3eef00d-scripts\") pod \"abaee778-ea35-4887-90c8-2834d3eef00d\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617226 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-internal-tls-certs\") pod \"c239fd72-88d6-4394-bf24-be4fb0b3e579\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617242 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-run-ovn\") pod \"abaee778-ea35-4887-90c8-2834d3eef00d\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617284 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c239fd72-88d6-4394-bf24-be4fb0b3e579-run-httpd\") pod \"c239fd72-88d6-4394-bf24-be4fb0b3e579\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617320 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-combined-ca-bundle\") pod \"6ad2b86e-f285-4acc-a87b-18f97baf0294\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617344 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c239fd72-88d6-4394-bf24-be4fb0b3e579-log-httpd\") pod \"c239fd72-88d6-4394-bf24-be4fb0b3e579\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617370 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-etc-swift\") pod \"c239fd72-88d6-4394-bf24-be4fb0b3e579\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617388 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-combined-ca-bundle\") pod \"c239fd72-88d6-4394-bf24-be4fb0b3e579\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617412 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-config-data\") pod \"6ad2b86e-f285-4acc-a87b-18f97baf0294\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") 
" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617462 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzspb\" (UniqueName: \"kubernetes.io/projected/2ca7379e-357b-4246-a820-c1aed48b722e-kube-api-access-nzspb\") pod \"2ca7379e-357b-4246-a820-c1aed48b722e\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617479 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-combined-ca-bundle\") pod \"2ca7379e-357b-4246-a820-c1aed48b722e\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617496 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6ad2b86e-f285-4acc-a87b-18f97baf0294-etc-machine-id\") pod \"6ad2b86e-f285-4acc-a87b-18f97baf0294\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617550 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66vx6\" (UniqueName: \"kubernetes.io/projected/6ad2b86e-f285-4acc-a87b-18f97baf0294-kube-api-access-66vx6\") pod \"6ad2b86e-f285-4acc-a87b-18f97baf0294\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617579 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pgpj\" (UniqueName: \"kubernetes.io/projected/abaee778-ea35-4887-90c8-2834d3eef00d-kube-api-access-8pgpj\") pod \"abaee778-ea35-4887-90c8-2834d3eef00d\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617593 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9lnq\" 
(UniqueName: \"kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-kube-api-access-n9lnq\") pod \"c239fd72-88d6-4394-bf24-be4fb0b3e579\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617618 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ad2b86e-f285-4acc-a87b-18f97baf0294-logs\") pod \"6ad2b86e-f285-4acc-a87b-18f97baf0294\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617634 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-run\") pod \"abaee778-ea35-4887-90c8-2834d3eef00d\" (UID: \"abaee778-ea35-4887-90c8-2834d3eef00d\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617662 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-public-tls-certs\") pod \"c239fd72-88d6-4394-bf24-be4fb0b3e579\" (UID: \"c239fd72-88d6-4394-bf24-be4fb0b3e579\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617690 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-internal-tls-certs\") pod \"6ad2b86e-f285-4acc-a87b-18f97baf0294\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617713 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-kube-state-metrics-tls-certs\") pod \"2ca7379e-357b-4246-a820-c1aed48b722e\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " Feb 19 18:52:56 crc 
kubenswrapper[4813]: I0219 18:52:56.617729 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-scripts\") pod \"6ad2b86e-f285-4acc-a87b-18f97baf0294\" (UID: \"6ad2b86e-f285-4acc-a87b-18f97baf0294\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617747 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-kube-state-metrics-tls-config\") pod \"2ca7379e-357b-4246-a820-c1aed48b722e\" (UID: \"2ca7379e-357b-4246-a820-c1aed48b722e\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.617946 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54vfj\" (UniqueName: \"kubernetes.io/projected/4f4ab094-423e-40ff-bf70-50275c1d41cb-kube-api-access-54vfj\") pod \"keystone-734c-account-create-update-46kvp\" (UID: \"4f4ab094-423e-40ff-bf70-50275c1d41cb\") " pod="openstack/keystone-734c-account-create-update-46kvp" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.618112 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4ab094-423e-40ff-bf70-50275c1d41cb-operator-scripts\") pod \"keystone-734c-account-create-update-46kvp\" (UID: \"4f4ab094-423e-40ff-bf70-50275c1d41cb\") " pod="openstack/keystone-734c-account-create-update-46kvp" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.618172 4813 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: E0219 18:52:56.618233 4813 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 19 
18:52:56 crc kubenswrapper[4813]: E0219 18:52:56.618276 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f4ab094-423e-40ff-bf70-50275c1d41cb-operator-scripts podName:4f4ab094-423e-40ff-bf70-50275c1d41cb nodeName:}" failed. No retries permitted until 2026-02-19 18:52:57.618262301 +0000 UTC m=+1396.843702842 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4f4ab094-423e-40ff-bf70-50275c1d41cb-operator-scripts") pod "keystone-734c-account-create-update-46kvp" (UID: "4f4ab094-423e-40ff-bf70-50275c1d41cb") : configmap "openstack-scripts" not found Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.618519 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ad2b86e-f285-4acc-a87b-18f97baf0294-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6ad2b86e-f285-4acc-a87b-18f97baf0294" (UID: "6ad2b86e-f285-4acc-a87b-18f97baf0294"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.619225 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c239fd72-88d6-4394-bf24-be4fb0b3e579-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c239fd72-88d6-4394-bf24-be4fb0b3e579" (UID: "c239fd72-88d6-4394-bf24-be4fb0b3e579"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.619519 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c239fd72-88d6-4394-bf24-be4fb0b3e579-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c239fd72-88d6-4394-bf24-be4fb0b3e579" (UID: "c239fd72-88d6-4394-bf24-be4fb0b3e579"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.623368 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abaee778-ea35-4887-90c8-2834d3eef00d-scripts" (OuterVolumeSpecName: "scripts") pod "abaee778-ea35-4887-90c8-2834d3eef00d" (UID: "abaee778-ea35-4887-90c8-2834d3eef00d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.623437 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "abaee778-ea35-4887-90c8-2834d3eef00d" (UID: "abaee778-ea35-4887-90c8-2834d3eef00d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.626536 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca7379e-357b-4246-a820-c1aed48b722e-kube-api-access-nzspb" (OuterVolumeSpecName: "kube-api-access-nzspb") pod "2ca7379e-357b-4246-a820-c1aed48b722e" (UID: "2ca7379e-357b-4246-a820-c1aed48b722e"). InnerVolumeSpecName "kube-api-access-nzspb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.626686 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-run" (OuterVolumeSpecName: "var-run") pod "abaee778-ea35-4887-90c8-2834d3eef00d" (UID: "abaee778-ea35-4887-90c8-2834d3eef00d"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.627171 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ad2b86e-f285-4acc-a87b-18f97baf0294-logs" (OuterVolumeSpecName: "logs") pod "6ad2b86e-f285-4acc-a87b-18f97baf0294" (UID: "6ad2b86e-f285-4acc-a87b-18f97baf0294"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.628481 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abaee778-ea35-4887-90c8-2834d3eef00d-kube-api-access-8pgpj" (OuterVolumeSpecName: "kube-api-access-8pgpj") pod "abaee778-ea35-4887-90c8-2834d3eef00d" (UID: "abaee778-ea35-4887-90c8-2834d3eef00d"). InnerVolumeSpecName "kube-api-access-8pgpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.628556 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c239fd72-88d6-4394-bf24-be4fb0b3e579" (UID: "c239fd72-88d6-4394-bf24-be4fb0b3e579"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.628615 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad2b86e-f285-4acc-a87b-18f97baf0294-kube-api-access-66vx6" (OuterVolumeSpecName: "kube-api-access-66vx6") pod "6ad2b86e-f285-4acc-a87b-18f97baf0294" (UID: "6ad2b86e-f285-4acc-a87b-18f97baf0294"). InnerVolumeSpecName "kube-api-access-66vx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.630157 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-kube-api-access-n9lnq" (OuterVolumeSpecName: "kube-api-access-n9lnq") pod "c239fd72-88d6-4394-bf24-be4fb0b3e579" (UID: "c239fd72-88d6-4394-bf24-be4fb0b3e579"). InnerVolumeSpecName "kube-api-access-n9lnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.630467 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-scripts" (OuterVolumeSpecName: "scripts") pod "6ad2b86e-f285-4acc-a87b-18f97baf0294" (UID: "6ad2b86e-f285-4acc-a87b-18f97baf0294"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: E0219 18:52:56.631156 4813 projected.go:194] Error preparing data for projected volume kube-api-access-54vfj for pod openstack/keystone-734c-account-create-update-46kvp: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 18:52:56 crc kubenswrapper[4813]: E0219 18:52:56.631208 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f4ab094-423e-40ff-bf70-50275c1d41cb-kube-api-access-54vfj podName:4f4ab094-423e-40ff-bf70-50275c1d41cb nodeName:}" failed. No retries permitted until 2026-02-19 18:52:57.63119228 +0000 UTC m=+1396.856632821 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-54vfj" (UniqueName: "kubernetes.io/projected/4f4ab094-423e-40ff-bf70-50275c1d41cb-kube-api-access-54vfj") pod "keystone-734c-account-create-update-46kvp" (UID: "4f4ab094-423e-40ff-bf70-50275c1d41cb") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.658919 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6ad2b86e-f285-4acc-a87b-18f97baf0294" (UID: "6ad2b86e-f285-4acc-a87b-18f97baf0294"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.719227 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-combined-ca-bundle\") pod \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.719294 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ea3ff24-c40b-432b-a2f8-522284d17ff0-httpd-run\") pod \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.719312 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-internal-tls-certs\") pod \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.719346 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vp6nc\" (UniqueName: \"kubernetes.io/projected/2ea3ff24-c40b-432b-a2f8-522284d17ff0-kube-api-access-vp6nc\") pod \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.719378 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-scripts\") pod \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.719449 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-scripts\") pod \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.719522 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-public-tls-certs\") pod \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.719547 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-combined-ca-bundle\") pod \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.719575 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f4b651a-00cc-4ca7-b49e-713eed4968b9-logs\") pod \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 
18:52:56.719611 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-config-data\") pod \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.719638 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-config-data\") pod \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.719656 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea3ff24-c40b-432b-a2f8-522284d17ff0-logs\") pod \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.719703 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-public-tls-certs\") pod \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\" (UID: \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.719738 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djqtd\" (UniqueName: \"kubernetes.io/projected/6f4b651a-00cc-4ca7-b49e-713eed4968b9-kube-api-access-djqtd\") pod \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\" (UID: \"6f4b651a-00cc-4ca7-b49e-713eed4968b9\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.719762 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"2ea3ff24-c40b-432b-a2f8-522284d17ff0\" (UID: 
\"2ea3ff24-c40b-432b-a2f8-522284d17ff0\") " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.720143 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66vx6\" (UniqueName: \"kubernetes.io/projected/6ad2b86e-f285-4acc-a87b-18f97baf0294-kube-api-access-66vx6\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.720155 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pgpj\" (UniqueName: \"kubernetes.io/projected/abaee778-ea35-4887-90c8-2834d3eef00d-kube-api-access-8pgpj\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.720164 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9lnq\" (UniqueName: \"kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-kube-api-access-n9lnq\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.720172 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ad2b86e-f285-4acc-a87b-18f97baf0294-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.720180 4813 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.720188 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.720196 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.720205 4813 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abaee778-ea35-4887-90c8-2834d3eef00d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.720213 4813 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abaee778-ea35-4887-90c8-2834d3eef00d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.720221 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c239fd72-88d6-4394-bf24-be4fb0b3e579-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.720229 4813 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c239fd72-88d6-4394-bf24-be4fb0b3e579-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.720237 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c239fd72-88d6-4394-bf24-be4fb0b3e579-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.720244 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzspb\" (UniqueName: \"kubernetes.io/projected/2ca7379e-357b-4246-a820-c1aed48b722e-kube-api-access-nzspb\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.720252 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6ad2b86e-f285-4acc-a87b-18f97baf0294-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.721880 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f4b651a-00cc-4ca7-b49e-713eed4968b9-logs" 
(OuterVolumeSpecName: "logs") pod "6f4b651a-00cc-4ca7-b49e-713eed4968b9" (UID: "6f4b651a-00cc-4ca7-b49e-713eed4968b9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.722218 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea3ff24-c40b-432b-a2f8-522284d17ff0-logs" (OuterVolumeSpecName: "logs") pod "2ea3ff24-c40b-432b-a2f8-522284d17ff0" (UID: "2ea3ff24-c40b-432b-a2f8-522284d17ff0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.724035 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea3ff24-c40b-432b-a2f8-522284d17ff0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2ea3ff24-c40b-432b-a2f8-522284d17ff0" (UID: "2ea3ff24-c40b-432b-a2f8-522284d17ff0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.726447 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abaee778-ea35-4887-90c8-2834d3eef00d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abaee778-ea35-4887-90c8-2834d3eef00d" (UID: "abaee778-ea35-4887-90c8-2834d3eef00d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.739238 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea3ff24-c40b-432b-a2f8-522284d17ff0-kube-api-access-vp6nc" (OuterVolumeSpecName: "kube-api-access-vp6nc") pod "2ea3ff24-c40b-432b-a2f8-522284d17ff0" (UID: "2ea3ff24-c40b-432b-a2f8-522284d17ff0"). InnerVolumeSpecName "kube-api-access-vp6nc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.740583 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "2ea3ff24-c40b-432b-a2f8-522284d17ff0" (UID: "2ea3ff24-c40b-432b-a2f8-522284d17ff0"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.741020 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-scripts" (OuterVolumeSpecName: "scripts") pod "6f4b651a-00cc-4ca7-b49e-713eed4968b9" (UID: "6f4b651a-00cc-4ca7-b49e-713eed4968b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.743772 4813 generic.go:334] "Generic (PLEG): container finished" podID="c389f482-3000-4f94-924d-158c9d51a2e9" containerID="610a526fd13a6efb0d4ac221965cec53053e05e9e010cd31550a6cc426a151df" exitCode=0 Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.743856 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c389f482-3000-4f94-924d-158c9d51a2e9","Type":"ContainerDied","Data":"610a526fd13a6efb0d4ac221965cec53053e05e9e010cd31550a6cc426a151df"} Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.745527 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f4b651a-00cc-4ca7-b49e-713eed4968b9-kube-api-access-djqtd" (OuterVolumeSpecName: "kube-api-access-djqtd") pod "6f4b651a-00cc-4ca7-b49e-713eed4968b9" (UID: "6f4b651a-00cc-4ca7-b49e-713eed4968b9"). InnerVolumeSpecName "kube-api-access-djqtd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.747726 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vr4rs_abaee778-ea35-4887-90c8-2834d3eef00d/ovn-controller/0.log" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.747812 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vr4rs" event={"ID":"abaee778-ea35-4887-90c8-2834d3eef00d","Type":"ContainerDied","Data":"6d4b04f2970a6182aea06067a80275c0f0b2525f2c70ee58181f3c99e9596a40"} Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.748324 4813 scope.go:117] "RemoveContainer" containerID="0f7e482e975d5a408c6ad81d42d1de1741994d6d368e1feef74405613c2006a7" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.748503 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vr4rs" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.752179 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-scripts" (OuterVolumeSpecName: "scripts") pod "2ea3ff24-c40b-432b-a2f8-522284d17ff0" (UID: "2ea3ff24-c40b-432b-a2f8-522284d17ff0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.761408 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6ad2b86e-f285-4acc-a87b-18f97baf0294","Type":"ContainerDied","Data":"f1dc77219d1c1adfda5c2d76cc46b9034c857a0f449e7487a2ba80767355f097"} Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.761462 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.795462 4813 generic.go:334] "Generic (PLEG): container finished" podID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerID="b40ac96b08f97e5858b87830a251d48e4c373ca1dd61bf0a07e6db7e47f55c35" exitCode=0 Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.795535 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fbcd22-57c9-4c44-99e8-f9307f26a525","Type":"ContainerDied","Data":"b40ac96b08f97e5858b87830a251d48e4c373ca1dd61bf0a07e6db7e47f55c35"} Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.798279 4813 generic.go:334] "Generic (PLEG): container finished" podID="6f4b651a-00cc-4ca7-b49e-713eed4968b9" containerID="61fdb19e85db0c41232232581262b2f03bee939d644f27002a6fbcc6eee839c7" exitCode=0 Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.798341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d474bcd44-n9tsd" event={"ID":"6f4b651a-00cc-4ca7-b49e-713eed4968b9","Type":"ContainerDied","Data":"61fdb19e85db0c41232232581262b2f03bee939d644f27002a6fbcc6eee839c7"} Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.798356 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d474bcd44-n9tsd" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.798367 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d474bcd44-n9tsd" event={"ID":"6f4b651a-00cc-4ca7-b49e-713eed4968b9","Type":"ContainerDied","Data":"03b6461108312cfdc52b9cab3920f5a21a37b85e6667057ffdb75a0d01c0018a"} Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.801540 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" event={"ID":"c239fd72-88d6-4394-bf24-be4fb0b3e579","Type":"ContainerDied","Data":"0501528d4dbfd8f2a31374824be67e9775b6b6e6c1fc0f15dcb3c1621312e121"} Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.801640 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5bfc47d69f-qrwdk" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.819339 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ca7379e-357b-4246-a820-c1aed48b722e" (UID: "2ca7379e-357b-4246-a820-c1aed48b722e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.823059 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ea3ff24-c40b-432b-a2f8-522284d17ff0-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.823127 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp6nc\" (UniqueName: \"kubernetes.io/projected/2ea3ff24-c40b-432b-a2f8-522284d17ff0-kube-api-access-vp6nc\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.823141 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.823154 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.823167 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abaee778-ea35-4887-90c8-2834d3eef00d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.823207 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f4b651a-00cc-4ca7-b49e-713eed4968b9-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.823218 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea3ff24-c40b-432b-a2f8-522284d17ff0-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.823231 4813 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.823243 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djqtd\" (UniqueName: \"kubernetes.io/projected/6f4b651a-00cc-4ca7-b49e-713eed4968b9-kube-api-access-djqtd\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.823301 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.835917 4813 generic.go:334] "Generic (PLEG): container finished" podID="2ea3ff24-c40b-432b-a2f8-522284d17ff0" containerID="cdcf02dfe81d5acf0aa0015ceb87d0dc57996ba7b45c419fa6770df217892dee" exitCode=0 Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.836134 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ea3ff24-c40b-432b-a2f8-522284d17ff0","Type":"ContainerDied","Data":"cdcf02dfe81d5acf0aa0015ceb87d0dc57996ba7b45c419fa6770df217892dee"} Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.836169 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2ea3ff24-c40b-432b-a2f8-522284d17ff0","Type":"ContainerDied","Data":"4ad4c2771908f3a1ca1fad2fdf7293a77d8b5127e771d75f49e3fa533ff07b1b"} Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.836247 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.842873 4813 generic.go:334] "Generic (PLEG): container finished" podID="21e36d83-8928-4c58-8432-eb51809336a7" containerID="99350913d64263a23474a084df20dd31692ca7167154de241fcca508aa3dcbe0" exitCode=1 Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.842930 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vfdhs" event={"ID":"21e36d83-8928-4c58-8432-eb51809336a7","Type":"ContainerDied","Data":"99350913d64263a23474a084df20dd31692ca7167154de241fcca508aa3dcbe0"} Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.842998 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vfdhs" event={"ID":"21e36d83-8928-4c58-8432-eb51809336a7","Type":"ContainerStarted","Data":"086214f0ec11794202792b627646bcda28eda4ea3fcb41e6a8c32ec393918aae"} Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.843398 4813 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-vfdhs" secret="" err="secret \"galera-openstack-dockercfg-6zpfq\" not found" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.843427 4813 scope.go:117] "RemoveContainer" containerID="99350913d64263a23474a084df20dd31692ca7167154de241fcca508aa3dcbe0" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.850558 4813 generic.go:334] "Generic (PLEG): container finished" podID="c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" containerID="24d702d7195d76e4a45075dc68672b623d20a1f8acf2e94fc16e64fdd1806db3" exitCode=0 Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.850655 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394","Type":"ContainerDied","Data":"24d702d7195d76e4a45075dc68672b623d20a1f8acf2e94fc16e64fdd1806db3"} Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.854262 4813 generic.go:334] "Generic (PLEG): container finished" podID="153c22ed-7e2e-496e-9a13-9ef0ce79efd8" containerID="80732d7919ccb302a5ba0c1ac5c2fcd0229a26a0e569dafeadc7852c9e14fefa" exitCode=0 Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.854394 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"153c22ed-7e2e-496e-9a13-9ef0ce79efd8","Type":"ContainerDied","Data":"80732d7919ccb302a5ba0c1ac5c2fcd0229a26a0e569dafeadc7852c9e14fefa"} Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.857813 4813 generic.go:334] "Generic (PLEG): container finished" podID="2ca7379e-357b-4246-a820-c1aed48b722e" containerID="a32a6d2b95db3921a0795c8dfb6c3f6ac4703950bf4f5218f53c1129cb7f36d4" exitCode=2 Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.857884 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2ca7379e-357b-4246-a820-c1aed48b722e","Type":"ContainerDied","Data":"a32a6d2b95db3921a0795c8dfb6c3f6ac4703950bf4f5218f53c1129cb7f36d4"} Feb 19 18:52:56 crc 
kubenswrapper[4813]: I0219 18:52:56.857912 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2ca7379e-357b-4246-a820-c1aed48b722e","Type":"ContainerDied","Data":"3f0185052a318f974a5da87e5ebd410f14733d9f0b6ef3231812d44706fbc1e1"} Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.858019 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.864822 4813 generic.go:334] "Generic (PLEG): container finished" podID="6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" containerID="fe9cae1f29fb502eab2ed61c37be245fbecda7cbaa6a4d4b23a769893fe52d66" exitCode=0 Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.864895 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-734c-account-create-update-46kvp" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.865011 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c88676b6d-zlhlk" event={"ID":"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30","Type":"ContainerDied","Data":"fe9cae1f29fb502eab2ed61c37be245fbecda7cbaa6a4d4b23a769893fe52d66"} Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.881537 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "2ca7379e-357b-4246-a820-c1aed48b722e" (UID: "2ca7379e-357b-4246-a820-c1aed48b722e"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.929230 4813 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.940897 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-config-data" (OuterVolumeSpecName: "config-data") pod "c239fd72-88d6-4394-bf24-be4fb0b3e579" (UID: "c239fd72-88d6-4394-bf24-be4fb0b3e579"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.961358 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ea3ff24-c40b-432b-a2f8-522284d17ff0" (UID: "2ea3ff24-c40b-432b-a2f8-522284d17ff0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:56 crc kubenswrapper[4813]: I0219 18:52:56.968000 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6ad2b86e-f285-4acc-a87b-18f97baf0294" (UID: "6ad2b86e-f285-4acc-a87b-18f97baf0294"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.003520 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.017742 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f4b651a-00cc-4ca7-b49e-713eed4968b9" (UID: "6f4b651a-00cc-4ca7-b49e-713eed4968b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.017758 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "2ca7379e-357b-4246-a820-c1aed48b722e" (UID: "2ca7379e-357b-4246-a820-c1aed48b722e"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.018584 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ad2b86e-f285-4acc-a87b-18f97baf0294" (UID: "6ad2b86e-f285-4acc-a87b-18f97baf0294"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.033349 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.033382 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.033396 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.033409 4813 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca7379e-357b-4246-a820-c1aed48b722e-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.033422 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.033433 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.033446 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.033517 
4813 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.033616 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21e36d83-8928-4c58-8432-eb51809336a7-operator-scripts podName:21e36d83-8928-4c58-8432-eb51809336a7 nodeName:}" failed. No retries permitted until 2026-02-19 18:52:57.533598919 +0000 UTC m=+1396.759039460 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/21e36d83-8928-4c58-8432-eb51809336a7-operator-scripts") pod "root-account-create-update-vfdhs" (UID: "21e36d83-8928-4c58-8432-eb51809336a7") : configmap "openstack-scripts" not found Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.038804 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c239fd72-88d6-4394-bf24-be4fb0b3e579" (UID: "c239fd72-88d6-4394-bf24-be4fb0b3e579"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.056002 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2ea3ff24-c40b-432b-a2f8-522284d17ff0" (UID: "2ea3ff24-c40b-432b-a2f8-522284d17ff0"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.062271 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abaee778-ea35-4887-90c8-2834d3eef00d-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "abaee778-ea35-4887-90c8-2834d3eef00d" (UID: "abaee778-ea35-4887-90c8-2834d3eef00d"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.062884 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-config-data" (OuterVolumeSpecName: "config-data") pod "2ea3ff24-c40b-432b-a2f8-522284d17ff0" (UID: "2ea3ff24-c40b-432b-a2f8-522284d17ff0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.067131 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.076075 4813 scope.go:117] "RemoveContainer" containerID="e4b7cafb1bd9bfd44873b9606ca95255b1dc06343b23e514ba497ba165a365d6" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.088018 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6ad2b86e-f285-4acc-a87b-18f97baf0294" (UID: "6ad2b86e-f285-4acc-a87b-18f97baf0294"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.129762 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-config-data" (OuterVolumeSpecName: "config-data") pod "6ad2b86e-f285-4acc-a87b-18f97baf0294" (UID: "6ad2b86e-f285-4acc-a87b-18f97baf0294"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.144885 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.144917 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.144927 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.144936 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea3ff24-c40b-432b-a2f8-522284d17ff0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.144944 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ad2b86e-f285-4acc-a87b-18f97baf0294-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.144966 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/abaee778-ea35-4887-90c8-2834d3eef00d-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.208822 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-config-data" (OuterVolumeSpecName: "config-data") pod "6f4b651a-00cc-4ca7-b49e-713eed4968b9" (UID: "6f4b651a-00cc-4ca7-b49e-713eed4968b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.250635 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-public-tls-certs\") pod \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.250695 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-internal-tls-certs\") pod \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.250723 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-logs\") pod \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.250786 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-combined-ca-bundle\") pod \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " Feb 19 18:52:57 crc kubenswrapper[4813]: 
I0219 18:52:57.250805 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-config-data\") pod \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.250879 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skbh8\" (UniqueName: \"kubernetes.io/projected/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-kube-api-access-skbh8\") pod \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\" (UID: \"153c22ed-7e2e-496e-9a13-9ef0ce79efd8\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.251151 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.251206 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.251248 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data podName:c69ff3db-8806-451a-9df0-c6289c327579 nodeName:}" failed. No retries permitted until 2026-02-19 18:53:05.251236425 +0000 UTC m=+1404.476676966 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data") pod "rabbitmq-server-0" (UID: "c69ff3db-8806-451a-9df0-c6289c327579") : configmap "rabbitmq-config-data" not found Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.256232 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.270240 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.270720 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-logs" (OuterVolumeSpecName: "logs") pod "153c22ed-7e2e-496e-9a13-9ef0ce79efd8" (UID: "153c22ed-7e2e-496e-9a13-9ef0ce79efd8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.276981 4813 scope.go:117] "RemoveContainer" containerID="4710529a2f6d6a9963aa3e46758c6cb9e333d01ce11646280e8ee29697fa5528" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.311106 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c239fd72-88d6-4394-bf24-be4fb0b3e579" (UID: "c239fd72-88d6-4394-bf24-be4fb0b3e579"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.322114 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c239fd72-88d6-4394-bf24-be4fb0b3e579" (UID: "c239fd72-88d6-4394-bf24-be4fb0b3e579"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.322276 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-734c-account-create-update-46kvp" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.322141 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-kube-api-access-skbh8" (OuterVolumeSpecName: "kube-api-access-skbh8") pod "153c22ed-7e2e-496e-9a13-9ef0ce79efd8" (UID: "153c22ed-7e2e-496e-9a13-9ef0ce79efd8"). InnerVolumeSpecName "kube-api-access-skbh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.342697 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.358373 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skbh8\" (UniqueName: \"kubernetes.io/projected/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-kube-api-access-skbh8\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.358403 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.358413 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.358421 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c239fd72-88d6-4394-bf24-be4fb0b3e579-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.361153 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872 is running failed: container process not found" containerID="e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.363722 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872 is running failed: container process not found" containerID="e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.364302 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872 is running failed: container process not found" containerID="e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.364380 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="3422b8bd-2817-4e8f-8a5a-731c773b73a4" containerName="nova-cell0-conductor-conductor" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.372206 4813 scope.go:117] "RemoveContainer" containerID="61fdb19e85db0c41232232581262b2f03bee939d644f27002a6fbcc6eee839c7" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.382371 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 
18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.425630 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.435604 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6f4b651a-00cc-4ca7-b49e-713eed4968b9" (UID: "6f4b651a-00cc-4ca7-b49e-713eed4968b9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.435674 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.437728 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.454529 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vr4rs"] Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.459494 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-config-data\") pod \"c389f482-3000-4f94-924d-158c9d51a2e9\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.460557 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-config-data-custom\") pod \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.460839 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvtx9\" (UniqueName: 
\"kubernetes.io/projected/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-kube-api-access-bvtx9\") pod \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.460905 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-scripts\") pod \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.460997 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-combined-ca-bundle\") pod \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.461107 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-internal-tls-certs\") pod \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.462541 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-logs\") pod \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.462627 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c389f482-3000-4f94-924d-158c9d51a2e9-logs\") pod \"c389f482-3000-4f94-924d-158c9d51a2e9\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.462712 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-internal-tls-certs\") pod \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.466068 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-logs\") pod \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.466111 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5wcw\" (UniqueName: \"kubernetes.io/projected/c389f482-3000-4f94-924d-158c9d51a2e9-kube-api-access-j5wcw\") pod \"c389f482-3000-4f94-924d-158c9d51a2e9\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.466137 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f66ct\" (UniqueName: \"kubernetes.io/projected/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-kube-api-access-f66ct\") pod \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.466166 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-httpd-run\") pod \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.466228 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-config-data\") pod \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\" (UID: 
\"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.466273 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.466299 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-combined-ca-bundle\") pod \"c389f482-3000-4f94-924d-158c9d51a2e9\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.466333 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-combined-ca-bundle\") pod \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.466349 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-config-data\") pod \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\" (UID: \"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.466371 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-public-tls-certs\") pod \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\" (UID: \"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.466415 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-nova-metadata-tls-certs\") pod \"c389f482-3000-4f94-924d-158c9d51a2e9\" (UID: \"c389f482-3000-4f94-924d-158c9d51a2e9\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.466947 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.460040 4813 scope.go:117] "RemoveContainer" containerID="bf8c06468da19346eec65138cd7874af7a212840ee1fcdf4cb2cd44182970cc3" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.463680 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vr4rs"] Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.469685 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.460483 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "153c22ed-7e2e-496e-9a13-9ef0ce79efd8" (UID: "153c22ed-7e2e-496e-9a13-9ef0ce79efd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.465993 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-logs" (OuterVolumeSpecName: "logs") pod "6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" (UID: "6ecb54a2-f23d-45b0-8311-eb5ff83f9f30"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.469243 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" (UID: "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.469643 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-logs" (OuterVolumeSpecName: "logs") pod "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" (UID: "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.472345 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" (UID: "6ecb54a2-f23d-45b0-8311-eb5ff83f9f30"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.479883 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.480381 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-kube-api-access-bvtx9" (OuterVolumeSpecName: "kube-api-access-bvtx9") pod "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" (UID: "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394"). InnerVolumeSpecName "kube-api-access-bvtx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.480886 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c389f482-3000-4f94-924d-158c9d51a2e9-kube-api-access-j5wcw" (OuterVolumeSpecName: "kube-api-access-j5wcw") pod "c389f482-3000-4f94-924d-158c9d51a2e9" (UID: "c389f482-3000-4f94-924d-158c9d51a2e9"). InnerVolumeSpecName "kube-api-access-j5wcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.483321 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c389f482-3000-4f94-924d-158c9d51a2e9-logs" (OuterVolumeSpecName: "logs") pod "c389f482-3000-4f94-924d-158c9d51a2e9" (UID: "c389f482-3000-4f94-924d-158c9d51a2e9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.519724 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-kube-api-access-f66ct" (OuterVolumeSpecName: "kube-api-access-f66ct") pod "6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" (UID: "6ecb54a2-f23d-45b0-8311-eb5ff83f9f30"). InnerVolumeSpecName "kube-api-access-f66ct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.520044 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.532291 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-scripts" (OuterVolumeSpecName: "scripts") pod "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" (UID: "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.532321 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" (UID: "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.533014 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.535177 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0028d082-4f3c-4810-ba82-311c776dc554" path="/var/lib/kubelet/pods/0028d082-4f3c-4810-ba82-311c776dc554/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.535926 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "153c22ed-7e2e-496e-9a13-9ef0ce79efd8" (UID: "153c22ed-7e2e-496e-9a13-9ef0ce79efd8"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.535936 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0055f8bf-7085-42c5-86d4-9cee7033a7d1" path="/var/lib/kubelet/pods/0055f8bf-7085-42c5-86d4-9cee7033a7d1/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.536833 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a45ccb1-adb5-432d-a40b-89c7f8412cfd" path="/var/lib/kubelet/pods/0a45ccb1-adb5-432d-a40b-89c7f8412cfd/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.541298 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.541354 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2f8101a7-841e-4fa7-b98a-030b82e66c94" containerName="galera" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.547863 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ca7379e-357b-4246-a820-c1aed48b722e" path="/var/lib/kubelet/pods/2ca7379e-357b-4246-a820-c1aed48b722e/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.552060 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea3ff24-c40b-432b-a2f8-522284d17ff0" path="/var/lib/kubelet/pods/2ea3ff24-c40b-432b-a2f8-522284d17ff0/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.552739 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2ed035dd-719a-45e9-825f-4ce51dbb9866" path="/var/lib/kubelet/pods/2ed035dd-719a-45e9-825f-4ce51dbb9866/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.553143 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43b357f4-2181-45a7-9d60-1a90f76b1c77" path="/var/lib/kubelet/pods/43b357f4-2181-45a7-9d60-1a90f76b1c77/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.553999 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46cbac0e-c130-4aed-9c70-e4c4d7378092" path="/var/lib/kubelet/pods/46cbac0e-c130-4aed-9c70-e4c4d7378092/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.556416 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51a5d629-ac3a-4046-be99-01b665dce3ac" path="/var/lib/kubelet/pods/51a5d629-ac3a-4046-be99-01b665dce3ac/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.556789 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ad2b86e-f285-4acc-a87b-18f97baf0294" path="/var/lib/kubelet/pods/6ad2b86e-f285-4acc-a87b-18f97baf0294/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.557310 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79168133-5b77-4689-89b3-5f15fa765750" path="/var/lib/kubelet/pods/79168133-5b77-4689-89b3-5f15fa765750/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.558233 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7385c55-b36b-486d-add0-958b8cece7de" path="/var/lib/kubelet/pods/a7385c55-b36b-486d-add0-958b8cece7de/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.559539 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abaee778-ea35-4887-90c8-2834d3eef00d" path="/var/lib/kubelet/pods/abaee778-ea35-4887-90c8-2834d3eef00d/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.560147 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="acbc878e-b3ae-49db-8fca-1300efb20564" path="/var/lib/kubelet/pods/acbc878e-b3ae-49db-8fca-1300efb20564/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.560603 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3275e7f-f99c-431a-a777-ea9a62895faa" path="/var/lib/kubelet/pods/c3275e7f-f99c-431a-a777-ea9a62895faa/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.561668 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1df3c12-83d7-4cf9-8f00-8cbd54be355f" path="/var/lib/kubelet/pods/e1df3c12-83d7-4cf9-8f00-8cbd54be355f/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.562011 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f10d113b-edb7-4a73-ada0-659e82a43e84" path="/var/lib/kubelet/pods/f10d113b-edb7-4a73-ada0-659e82a43e84/volumes" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.568374 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "153c22ed-7e2e-496e-9a13-9ef0ce79efd8" (UID: "153c22ed-7e2e-496e-9a13-9ef0ce79efd8"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.568915 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.568938 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.568961 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.568971 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvtx9\" (UniqueName: \"kubernetes.io/projected/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-kube-api-access-bvtx9\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.568980 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.568988 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.568997 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c389f482-3000-4f94-924d-158c9d51a2e9-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.569005 4813 reconciler_common.go:293] "Volume detached for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.569013 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5wcw\" (UniqueName: \"kubernetes.io/projected/c389f482-3000-4f94-924d-158c9d51a2e9-kube-api-access-j5wcw\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.569023 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f66ct\" (UniqueName: \"kubernetes.io/projected/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-kube-api-access-f66ct\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.569031 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.569039 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.569149 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.569609 4813 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.569670 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21e36d83-8928-4c58-8432-eb51809336a7-operator-scripts podName:21e36d83-8928-4c58-8432-eb51809336a7 nodeName:}" failed. 
No retries permitted until 2026-02-19 18:52:58.569655661 +0000 UTC m=+1397.795096192 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/21e36d83-8928-4c58-8432-eb51809336a7-operator-scripts") pod "root-account-create-update-vfdhs" (UID: "21e36d83-8928-4c58-8432-eb51809336a7") : configmap "openstack-scripts" not found Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.575897 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-config-data" (OuterVolumeSpecName: "config-data") pod "153c22ed-7e2e-496e-9a13-9ef0ce79efd8" (UID: "153c22ed-7e2e-496e-9a13-9ef0ce79efd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.606088 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6f4b651a-00cc-4ca7-b49e-713eed4968b9" (UID: "6f4b651a-00cc-4ca7-b49e-713eed4968b9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.621633 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-config-data" (OuterVolumeSpecName: "config-data") pod "c389f482-3000-4f94-924d-158c9d51a2e9" (UID: "c389f482-3000-4f94-924d-158c9d51a2e9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.628464 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.651572 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" (UID: "6ecb54a2-f23d-45b0-8311-eb5ff83f9f30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.653782 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c389f482-3000-4f94-924d-158c9d51a2e9" (UID: "c389f482-3000-4f94-924d-158c9d51a2e9"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.654429 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c389f482-3000-4f94-924d-158c9d51a2e9" (UID: "c389f482-3000-4f94-924d-158c9d51a2e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.660759 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" (UID: "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.661355 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" (UID: "6ecb54a2-f23d-45b0-8311-eb5ff83f9f30"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.670046 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54vfj\" (UniqueName: \"kubernetes.io/projected/4f4ab094-423e-40ff-bf70-50275c1d41cb-kube-api-access-54vfj\") pod \"keystone-734c-account-create-update-46kvp\" (UID: \"4f4ab094-423e-40ff-bf70-50275c1d41cb\") " pod="openstack/keystone-734c-account-create-update-46kvp" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.670169 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4ab094-423e-40ff-bf70-50275c1d41cb-operator-scripts\") pod \"keystone-734c-account-create-update-46kvp\" (UID: \"4f4ab094-423e-40ff-bf70-50275c1d41cb\") " pod="openstack/keystone-734c-account-create-update-46kvp" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.670241 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.670256 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.670268 4813 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.670277 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/153c22ed-7e2e-496e-9a13-9ef0ce79efd8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.670286 4813 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.670295 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c389f482-3000-4f94-924d-158c9d51a2e9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.670304 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f4b651a-00cc-4ca7-b49e-713eed4968b9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.670314 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.670323 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.670377 4813 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found 
Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.670420 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f4ab094-423e-40ff-bf70-50275c1d41cb-operator-scripts podName:4f4ab094-423e-40ff-bf70-50275c1d41cb nodeName:}" failed. No retries permitted until 2026-02-19 18:52:59.670406491 +0000 UTC m=+1398.895847022 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4f4ab094-423e-40ff-bf70-50275c1d41cb-operator-scripts") pod "keystone-734c-account-create-update-46kvp" (UID: "4f4ab094-423e-40ff-bf70-50275c1d41cb") : configmap "openstack-scripts" not found Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.672979 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-config-data" (OuterVolumeSpecName: "config-data") pod "6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" (UID: "6ecb54a2-f23d-45b0-8311-eb5ff83f9f30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.678021 4813 projected.go:194] Error preparing data for projected volume kube-api-access-54vfj for pod openstack/keystone-734c-account-create-update-46kvp: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.678082 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f4ab094-423e-40ff-bf70-50275c1d41cb-kube-api-access-54vfj podName:4f4ab094-423e-40ff-bf70-50275c1d41cb nodeName:}" failed. No retries permitted until 2026-02-19 18:52:59.678064498 +0000 UTC m=+1398.903505039 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-54vfj" (UniqueName: "kubernetes.io/projected/4f4ab094-423e-40ff-bf70-50275c1d41cb-kube-api-access-54vfj") pod "keystone-734c-account-create-update-46kvp" (UID: "4f4ab094-423e-40ff-bf70-50275c1d41cb") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.681880 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-config-data" (OuterVolumeSpecName: "config-data") pod "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" (UID: "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.689162 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" (UID: "c9c7a4f0-3d5e-4e7a-8958-8a2022de3394"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.707135 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" (UID: "6ecb54a2-f23d-45b0-8311-eb5ff83f9f30"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.762966 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530 is running failed: container process not found" containerID="120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.763260 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530 is running failed: container process not found" containerID="120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.763457 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530 is running failed: container process not found" containerID="120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.763482 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="973faa07-fbab-4a50-ac4a-c62302e9f9c1" containerName="nova-cell1-conductor-conductor" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.771474 4813 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.771500 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.771510 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.771518 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.779564 4813 scope.go:117] "RemoveContainer" containerID="61fdb19e85db0c41232232581262b2f03bee939d644f27002a6fbcc6eee839c7" Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.780029 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61fdb19e85db0c41232232581262b2f03bee939d644f27002a6fbcc6eee839c7\": container with ID starting with 61fdb19e85db0c41232232581262b2f03bee939d644f27002a6fbcc6eee839c7 not found: ID does not exist" containerID="61fdb19e85db0c41232232581262b2f03bee939d644f27002a6fbcc6eee839c7" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.780065 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61fdb19e85db0c41232232581262b2f03bee939d644f27002a6fbcc6eee839c7"} err="failed to get container status \"61fdb19e85db0c41232232581262b2f03bee939d644f27002a6fbcc6eee839c7\": rpc error: code = NotFound desc = could not find container 
\"61fdb19e85db0c41232232581262b2f03bee939d644f27002a6fbcc6eee839c7\": container with ID starting with 61fdb19e85db0c41232232581262b2f03bee939d644f27002a6fbcc6eee839c7 not found: ID does not exist" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.780089 4813 scope.go:117] "RemoveContainer" containerID="bf8c06468da19346eec65138cd7874af7a212840ee1fcdf4cb2cd44182970cc3" Feb 19 18:52:57 crc kubenswrapper[4813]: E0219 18:52:57.780367 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8c06468da19346eec65138cd7874af7a212840ee1fcdf4cb2cd44182970cc3\": container with ID starting with bf8c06468da19346eec65138cd7874af7a212840ee1fcdf4cb2cd44182970cc3 not found: ID does not exist" containerID="bf8c06468da19346eec65138cd7874af7a212840ee1fcdf4cb2cd44182970cc3" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.780408 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8c06468da19346eec65138cd7874af7a212840ee1fcdf4cb2cd44182970cc3"} err="failed to get container status \"bf8c06468da19346eec65138cd7874af7a212840ee1fcdf4cb2cd44182970cc3\": rpc error: code = NotFound desc = could not find container \"bf8c06468da19346eec65138cd7874af7a212840ee1fcdf4cb2cd44182970cc3\": container with ID starting with bf8c06468da19346eec65138cd7874af7a212840ee1fcdf4cb2cd44182970cc3 not found: ID does not exist" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.780432 4813 scope.go:117] "RemoveContainer" containerID="fa176bf7b54c8db7a012a6c8fd130237f020b6ebfd7bd99836698ae1cab7d252" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.843106 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-5bfc47d69f-qrwdk"] Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.843159 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-5bfc47d69f-qrwdk"] Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 
18:52:57.867157 4813 scope.go:117] "RemoveContainer" containerID="452e2a9d879b10f739d3b19cd7e40ef58643848e29fa2a75758bdac304dc3d57" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.872387 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.893504 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.894341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c9c7a4f0-3d5e-4e7a-8958-8a2022de3394","Type":"ContainerDied","Data":"7b6c78b69661dd4aa07ea77dc1e4e080726c89d13470e2a0867e6cb432f6e447"} Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.904162 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"153c22ed-7e2e-496e-9a13-9ef0ce79efd8","Type":"ContainerDied","Data":"1cafd68c3c22c47bf0012d5f19c420b6a8694b41e6f1af8da2bf387612c8e351"} Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.904395 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.909412 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.909897 4813 generic.go:334] "Generic (PLEG): container finished" podID="2d4e6cd2-75bd-43bd-9f0e-fd35002ab607" containerID="18eaf8dca70d4fb335560e1bef6b8d6431fc9112b942b9be1509d59f63ef5a18" exitCode=0 Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.909940 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607","Type":"ContainerDied","Data":"18eaf8dca70d4fb335560e1bef6b8d6431fc9112b942b9be1509d59f63ef5a18"} Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.909973 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607","Type":"ContainerDied","Data":"2be29964735caaad96a2560105738c00636594a7e4862836c9069c549854bb92"} Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.935090 4813 scope.go:117] "RemoveContainer" containerID="cdcf02dfe81d5acf0aa0015ceb87d0dc57996ba7b45c419fa6770df217892dee" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.935477 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7c88676b6d-zlhlk" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.936629 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c88676b6d-zlhlk" event={"ID":"6ecb54a2-f23d-45b0-8311-eb5ff83f9f30","Type":"ContainerDied","Data":"5f9428e0811337010d65b13f1773b5a437c08b26ecadb9a88e8fd6cbf13fc6ed"} Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.938403 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d474bcd44-n9tsd"] Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.938490 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.944706 4813 generic.go:334] "Generic (PLEG): container finished" podID="973faa07-fbab-4a50-ac4a-c62302e9f9c1" containerID="120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530" exitCode=0 Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.945092 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"973faa07-fbab-4a50-ac4a-c62302e9f9c1","Type":"ContainerDied","Data":"120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530"} Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.945200 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"973faa07-fbab-4a50-ac4a-c62302e9f9c1","Type":"ContainerDied","Data":"f1526a1adea998ab2b5e1300ebf3c3cdf8a9e760185edc27a1a4789e079ed460"} Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.953204 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d474bcd44-n9tsd"] Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.976882 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3422b8bd-2817-4e8f-8a5a-731c773b73a4-combined-ca-bundle\") pod \"3422b8bd-2817-4e8f-8a5a-731c773b73a4\" (UID: \"3422b8bd-2817-4e8f-8a5a-731c773b73a4\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.977375 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3422b8bd-2817-4e8f-8a5a-731c773b73a4-config-data\") pod \"3422b8bd-2817-4e8f-8a5a-731c773b73a4\" (UID: \"3422b8bd-2817-4e8f-8a5a-731c773b73a4\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.977450 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxtbh\" (UniqueName: 
\"kubernetes.io/projected/3422b8bd-2817-4e8f-8a5a-731c773b73a4-kube-api-access-wxtbh\") pod \"3422b8bd-2817-4e8f-8a5a-731c773b73a4\" (UID: \"3422b8bd-2817-4e8f-8a5a-731c773b73a4\") " Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.978770 4813 scope.go:117] "RemoveContainer" containerID="c739347572fde9810eb2e984054e31d8f98d420e8deb6f11d5a7c741a7bd9e0a" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.983361 4813 generic.go:334] "Generic (PLEG): container finished" podID="21e36d83-8928-4c58-8432-eb51809336a7" containerID="c110076167c797991fa5c3e36a8495fafd043d5dd824a10ef14375c14c671fe1" exitCode=1 Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.983433 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vfdhs" event={"ID":"21e36d83-8928-4c58-8432-eb51809336a7","Type":"ContainerDied","Data":"c110076167c797991fa5c3e36a8495fafd043d5dd824a10ef14375c14c671fe1"} Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.983673 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3422b8bd-2817-4e8f-8a5a-731c773b73a4-kube-api-access-wxtbh" (OuterVolumeSpecName: "kube-api-access-wxtbh") pod "3422b8bd-2817-4e8f-8a5a-731c773b73a4" (UID: "3422b8bd-2817-4e8f-8a5a-731c773b73a4"). InnerVolumeSpecName "kube-api-access-wxtbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.992130 4813 generic.go:334] "Generic (PLEG): container finished" podID="3422b8bd-2817-4e8f-8a5a-731c773b73a4" containerID="e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872" exitCode=0 Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.992205 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3422b8bd-2817-4e8f-8a5a-731c773b73a4","Type":"ContainerDied","Data":"e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872"} Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.992229 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3422b8bd-2817-4e8f-8a5a-731c773b73a4","Type":"ContainerDied","Data":"7acc57d450d03d47043a9b50283daa8549df90f583b57d245b091e5f193a1b62"} Feb 19 18:52:57 crc kubenswrapper[4813]: I0219 18:52:57.992284 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.006189 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-734c-account-create-update-46kvp" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.008249 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.008913 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c389f482-3000-4f94-924d-158c9d51a2e9","Type":"ContainerDied","Data":"9f772f680748d60af5a6560ada0a69e5535e128aaa419c6744f56b1f8c246a23"} Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.017734 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.032225 4813 scope.go:117] "RemoveContainer" containerID="cdcf02dfe81d5acf0aa0015ceb87d0dc57996ba7b45c419fa6770df217892dee" Feb 19 18:52:58 crc kubenswrapper[4813]: E0219 18:52:58.032772 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdcf02dfe81d5acf0aa0015ceb87d0dc57996ba7b45c419fa6770df217892dee\": container with ID starting with cdcf02dfe81d5acf0aa0015ceb87d0dc57996ba7b45c419fa6770df217892dee not found: ID does not exist" containerID="cdcf02dfe81d5acf0aa0015ceb87d0dc57996ba7b45c419fa6770df217892dee" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.032805 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdcf02dfe81d5acf0aa0015ceb87d0dc57996ba7b45c419fa6770df217892dee"} err="failed to get container status \"cdcf02dfe81d5acf0aa0015ceb87d0dc57996ba7b45c419fa6770df217892dee\": rpc error: code = NotFound desc = could not find container \"cdcf02dfe81d5acf0aa0015ceb87d0dc57996ba7b45c419fa6770df217892dee\": container with ID starting with cdcf02dfe81d5acf0aa0015ceb87d0dc57996ba7b45c419fa6770df217892dee not found: ID does not exist" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.032825 4813 scope.go:117] "RemoveContainer" containerID="c739347572fde9810eb2e984054e31d8f98d420e8deb6f11d5a7c741a7bd9e0a" Feb 19 18:52:58 crc kubenswrapper[4813]: E0219 18:52:58.033090 
4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c739347572fde9810eb2e984054e31d8f98d420e8deb6f11d5a7c741a7bd9e0a\": container with ID starting with c739347572fde9810eb2e984054e31d8f98d420e8deb6f11d5a7c741a7bd9e0a not found: ID does not exist" containerID="c739347572fde9810eb2e984054e31d8f98d420e8deb6f11d5a7c741a7bd9e0a" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.033116 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c739347572fde9810eb2e984054e31d8f98d420e8deb6f11d5a7c741a7bd9e0a"} err="failed to get container status \"c739347572fde9810eb2e984054e31d8f98d420e8deb6f11d5a7c741a7bd9e0a\": rpc error: code = NotFound desc = could not find container \"c739347572fde9810eb2e984054e31d8f98d420e8deb6f11d5a7c741a7bd9e0a\": container with ID starting with c739347572fde9810eb2e984054e31d8f98d420e8deb6f11d5a7c741a7bd9e0a not found: ID does not exist" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.033141 4813 scope.go:117] "RemoveContainer" containerID="a32a6d2b95db3921a0795c8dfb6c3f6ac4703950bf4f5218f53c1129cb7f36d4" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.033182 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3422b8bd-2817-4e8f-8a5a-731c773b73a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3422b8bd-2817-4e8f-8a5a-731c773b73a4" (UID: "3422b8bd-2817-4e8f-8a5a-731c773b73a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.033067 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3422b8bd-2817-4e8f-8a5a-731c773b73a4-config-data" (OuterVolumeSpecName: "config-data") pod "3422b8bd-2817-4e8f-8a5a-731c773b73a4" (UID: "3422b8bd-2817-4e8f-8a5a-731c773b73a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.037593 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.063023 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.064178 4813 scope.go:117] "RemoveContainer" containerID="a32a6d2b95db3921a0795c8dfb6c3f6ac4703950bf4f5218f53c1129cb7f36d4" Feb 19 18:52:58 crc kubenswrapper[4813]: E0219 18:52:58.064812 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a32a6d2b95db3921a0795c8dfb6c3f6ac4703950bf4f5218f53c1129cb7f36d4\": container with ID starting with a32a6d2b95db3921a0795c8dfb6c3f6ac4703950bf4f5218f53c1129cb7f36d4 not found: ID does not exist" containerID="a32a6d2b95db3921a0795c8dfb6c3f6ac4703950bf4f5218f53c1129cb7f36d4" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.064843 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a32a6d2b95db3921a0795c8dfb6c3f6ac4703950bf4f5218f53c1129cb7f36d4"} err="failed to get container status \"a32a6d2b95db3921a0795c8dfb6c3f6ac4703950bf4f5218f53c1129cb7f36d4\": rpc error: code = NotFound desc = could not find container \"a32a6d2b95db3921a0795c8dfb6c3f6ac4703950bf4f5218f53c1129cb7f36d4\": container with ID starting with a32a6d2b95db3921a0795c8dfb6c3f6ac4703950bf4f5218f53c1129cb7f36d4 not found: ID does not exist" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.064862 4813 scope.go:117] "RemoveContainer" containerID="24d702d7195d76e4a45075dc68672b623d20a1f8acf2e94fc16e64fdd1806db3" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.068968 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.078106 4813 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7c88676b6d-zlhlk"] Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.078706 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-memcached-tls-certs\") pod \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.078731 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/973faa07-fbab-4a50-ac4a-c62302e9f9c1-config-data\") pod \"973faa07-fbab-4a50-ac4a-c62302e9f9c1\" (UID: \"973faa07-fbab-4a50-ac4a-c62302e9f9c1\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.078785 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-986ln\" (UniqueName: \"kubernetes.io/projected/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-kube-api-access-986ln\") pod \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.078817 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvwmf\" (UniqueName: \"kubernetes.io/projected/973faa07-fbab-4a50-ac4a-c62302e9f9c1-kube-api-access-pvwmf\") pod \"973faa07-fbab-4a50-ac4a-c62302e9f9c1\" (UID: \"973faa07-fbab-4a50-ac4a-c62302e9f9c1\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.078863 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973faa07-fbab-4a50-ac4a-c62302e9f9c1-combined-ca-bundle\") pod \"973faa07-fbab-4a50-ac4a-c62302e9f9c1\" (UID: \"973faa07-fbab-4a50-ac4a-c62302e9f9c1\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.078898 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-kolla-config\") pod \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.078933 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-combined-ca-bundle\") pod \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.078986 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-config-data\") pod \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\" (UID: \"2d4e6cd2-75bd-43bd-9f0e-fd35002ab607\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.079497 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxtbh\" (UniqueName: \"kubernetes.io/projected/3422b8bd-2817-4e8f-8a5a-731c773b73a4-kube-api-access-wxtbh\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.079510 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3422b8bd-2817-4e8f-8a5a-731c773b73a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.079519 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3422b8bd-2817-4e8f-8a5a-731c773b73a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.079829 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-kolla-config" 
(OuterVolumeSpecName: "kolla-config") pod "2d4e6cd2-75bd-43bd-9f0e-fd35002ab607" (UID: "2d4e6cd2-75bd-43bd-9f0e-fd35002ab607"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.079854 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-config-data" (OuterVolumeSpecName: "config-data") pod "2d4e6cd2-75bd-43bd-9f0e-fd35002ab607" (UID: "2d4e6cd2-75bd-43bd-9f0e-fd35002ab607"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.081930 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973faa07-fbab-4a50-ac4a-c62302e9f9c1-kube-api-access-pvwmf" (OuterVolumeSpecName: "kube-api-access-pvwmf") pod "973faa07-fbab-4a50-ac4a-c62302e9f9c1" (UID: "973faa07-fbab-4a50-ac4a-c62302e9f9c1"). InnerVolumeSpecName "kube-api-access-pvwmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.100021 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/973faa07-fbab-4a50-ac4a-c62302e9f9c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "973faa07-fbab-4a50-ac4a-c62302e9f9c1" (UID: "973faa07-fbab-4a50-ac4a-c62302e9f9c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.099945 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-kube-api-access-986ln" (OuterVolumeSpecName: "kube-api-access-986ln") pod "2d4e6cd2-75bd-43bd-9f0e-fd35002ab607" (UID: "2d4e6cd2-75bd-43bd-9f0e-fd35002ab607"). InnerVolumeSpecName "kube-api-access-986ln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.100362 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7c88676b6d-zlhlk"] Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.106306 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d4e6cd2-75bd-43bd-9f0e-fd35002ab607" (UID: "2d4e6cd2-75bd-43bd-9f0e-fd35002ab607"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.110525 4813 scope.go:117] "RemoveContainer" containerID="912bf4bcdf44e6c87d2ddd9d02005a5efe71719e483128b7f0504e540c94af8f" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.130676 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "2d4e6cd2-75bd-43bd-9f0e-fd35002ab607" (UID: "2d4e6cd2-75bd-43bd-9f0e-fd35002ab607"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.131924 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.132595 4813 scope.go:117] "RemoveContainer" containerID="80732d7919ccb302a5ba0c1ac5c2fcd0229a26a0e569dafeadc7852c9e14fefa" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.138198 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/973faa07-fbab-4a50-ac4a-c62302e9f9c1-config-data" (OuterVolumeSpecName: "config-data") pod "973faa07-fbab-4a50-ac4a-c62302e9f9c1" (UID: "973faa07-fbab-4a50-ac4a-c62302e9f9c1"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.144881 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.158747 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-734c-account-create-update-46kvp"] Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.163473 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-734c-account-create-update-46kvp"] Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.164939 4813 scope.go:117] "RemoveContainer" containerID="84023d51754b9bdea418d6af28a79e9ce79c0d010f1e0a43ee78d69255fd77b9" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.181169 4813 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.181220 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/973faa07-fbab-4a50-ac4a-c62302e9f9c1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.181307 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-986ln\" (UniqueName: \"kubernetes.io/projected/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-kube-api-access-986ln\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.181462 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvwmf\" (UniqueName: \"kubernetes.io/projected/973faa07-fbab-4a50-ac4a-c62302e9f9c1-kube-api-access-pvwmf\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.181471 4813 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973faa07-fbab-4a50-ac4a-c62302e9f9c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.181529 4813 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.181540 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.181548 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.221846 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.198:3000/\": dial tcp 10.217.0.198:3000: connect: connection refused" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.243886 4813 scope.go:117] "RemoveContainer" containerID="18eaf8dca70d4fb335560e1bef6b8d6431fc9112b942b9be1509d59f63ef5a18" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.283635 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54vfj\" (UniqueName: \"kubernetes.io/projected/4f4ab094-423e-40ff-bf70-50275c1d41cb-kube-api-access-54vfj\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.284271 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4ab094-423e-40ff-bf70-50275c1d41cb-operator-scripts\") on 
node \"crc\" DevicePath \"\"" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.296200 4813 scope.go:117] "RemoveContainer" containerID="18eaf8dca70d4fb335560e1bef6b8d6431fc9112b942b9be1509d59f63ef5a18" Feb 19 18:52:58 crc kubenswrapper[4813]: E0219 18:52:58.301025 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18eaf8dca70d4fb335560e1bef6b8d6431fc9112b942b9be1509d59f63ef5a18\": container with ID starting with 18eaf8dca70d4fb335560e1bef6b8d6431fc9112b942b9be1509d59f63ef5a18 not found: ID does not exist" containerID="18eaf8dca70d4fb335560e1bef6b8d6431fc9112b942b9be1509d59f63ef5a18" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.301065 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18eaf8dca70d4fb335560e1bef6b8d6431fc9112b942b9be1509d59f63ef5a18"} err="failed to get container status \"18eaf8dca70d4fb335560e1bef6b8d6431fc9112b942b9be1509d59f63ef5a18\": rpc error: code = NotFound desc = could not find container \"18eaf8dca70d4fb335560e1bef6b8d6431fc9112b942b9be1509d59f63ef5a18\": container with ID starting with 18eaf8dca70d4fb335560e1bef6b8d6431fc9112b942b9be1509d59f63ef5a18 not found: ID does not exist" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.301094 4813 scope.go:117] "RemoveContainer" containerID="fe9cae1f29fb502eab2ed61c37be245fbecda7cbaa6a4d4b23a769893fe52d66" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.326833 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vfdhs" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.338394 4813 scope.go:117] "RemoveContainer" containerID="c497a249844ccadf079da17947184a1a812d81325a2ff39036321f46b9c5c309" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.342449 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.358776 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.365748 4813 scope.go:117] "RemoveContainer" containerID="120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.385030 4813 scope.go:117] "RemoveContainer" containerID="120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530" Feb 19 18:52:58 crc kubenswrapper[4813]: E0219 18:52:58.385451 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530\": container with ID starting with 120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530 not found: ID does not exist" containerID="120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.385473 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530"} err="failed to get container status \"120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530\": rpc error: code = NotFound desc = could not find container \"120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530\": container with ID starting with 120d2d23bdd5ab00b179e24b7405a4df74b8b142005b13e2f6722351e1038530 not found: ID does not exist" Feb 19 
18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.385492 4813 scope.go:117] "RemoveContainer" containerID="99350913d64263a23474a084df20dd31692ca7167154de241fcca508aa3dcbe0" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.403572 4813 scope.go:117] "RemoveContainer" containerID="e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.421545 4813 scope.go:117] "RemoveContainer" containerID="e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872" Feb 19 18:52:58 crc kubenswrapper[4813]: E0219 18:52:58.422111 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872\": container with ID starting with e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872 not found: ID does not exist" containerID="e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.422152 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872"} err="failed to get container status \"e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872\": rpc error: code = NotFound desc = could not find container \"e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872\": container with ID starting with e3f45f7478a2e402720223591b16af0ed7d32314ccb1d2c38bb96ffc15553872 not found: ID does not exist" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.422176 4813 scope.go:117] "RemoveContainer" containerID="610a526fd13a6efb0d4ac221965cec53053e05e9e010cd31550a6cc426a151df" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.448449 4813 scope.go:117] "RemoveContainer" containerID="42f9e9f8d59b24e0b6b81176e439582dcdd7e50952fc2dd61a52acdcf4367e0e" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 
18:52:58.487148 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21e36d83-8928-4c58-8432-eb51809336a7-operator-scripts\") pod \"21e36d83-8928-4c58-8432-eb51809336a7\" (UID: \"21e36d83-8928-4c58-8432-eb51809336a7\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.487351 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdjqd\" (UniqueName: \"kubernetes.io/projected/21e36d83-8928-4c58-8432-eb51809336a7-kube-api-access-zdjqd\") pod \"21e36d83-8928-4c58-8432-eb51809336a7\" (UID: \"21e36d83-8928-4c58-8432-eb51809336a7\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.487802 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e36d83-8928-4c58-8432-eb51809336a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21e36d83-8928-4c58-8432-eb51809336a7" (UID: "21e36d83-8928-4c58-8432-eb51809336a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.491540 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e36d83-8928-4c58-8432-eb51809336a7-kube-api-access-zdjqd" (OuterVolumeSpecName: "kube-api-access-zdjqd") pod "21e36d83-8928-4c58-8432-eb51809336a7" (UID: "21e36d83-8928-4c58-8432-eb51809336a7"). InnerVolumeSpecName "kube-api-access-zdjqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.597102 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21e36d83-8928-4c58-8432-eb51809336a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.597140 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdjqd\" (UniqueName: \"kubernetes.io/projected/21e36d83-8928-4c58-8432-eb51809336a7-kube-api-access-zdjqd\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:58 crc kubenswrapper[4813]: E0219 18:52:58.723180 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1 is running failed: container process not found" containerID="9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 19 18:52:58 crc kubenswrapper[4813]: E0219 18:52:58.723581 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1 is running failed: container process not found" containerID="9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 19 18:52:58 crc kubenswrapper[4813]: E0219 18:52:58.723816 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1 is running failed: container process not found" containerID="9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1" 
cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 19 18:52:58 crc kubenswrapper[4813]: E0219 18:52:58.723848 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="8c675ac6-fe1c-42a1-b67e-f958aef3c086" containerName="ovn-northd" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.733079 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8c675ac6-fe1c-42a1-b67e-f958aef3c086/ovn-northd/0.log" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.733159 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.887527 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.900475 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-ovn-northd-tls-certs\") pod \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.900570 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c675ac6-fe1c-42a1-b67e-f958aef3c086-scripts\") pod \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.900622 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-combined-ca-bundle\") pod \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.900651 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c675ac6-fe1c-42a1-b67e-f958aef3c086-ovn-rundir\") pod \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.900692 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-metrics-certs-tls-certs\") pod \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.900725 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c675ac6-fe1c-42a1-b67e-f958aef3c086-config\") pod \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.900812 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v8ll\" (UniqueName: \"kubernetes.io/projected/8c675ac6-fe1c-42a1-b67e-f958aef3c086-kube-api-access-8v8ll\") pod \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\" (UID: \"8c675ac6-fe1c-42a1-b67e-f958aef3c086\") " Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.903479 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c675ac6-fe1c-42a1-b67e-f958aef3c086-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "8c675ac6-fe1c-42a1-b67e-f958aef3c086" (UID: "8c675ac6-fe1c-42a1-b67e-f958aef3c086"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.904048 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c675ac6-fe1c-42a1-b67e-f958aef3c086-scripts" (OuterVolumeSpecName: "scripts") pod "8c675ac6-fe1c-42a1-b67e-f958aef3c086" (UID: "8c675ac6-fe1c-42a1-b67e-f958aef3c086"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.904105 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c675ac6-fe1c-42a1-b67e-f958aef3c086-config" (OuterVolumeSpecName: "config") pod "8c675ac6-fe1c-42a1-b67e-f958aef3c086" (UID: "8c675ac6-fe1c-42a1-b67e-f958aef3c086"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.914543 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c675ac6-fe1c-42a1-b67e-f958aef3c086-kube-api-access-8v8ll" (OuterVolumeSpecName: "kube-api-access-8v8ll") pod "8c675ac6-fe1c-42a1-b67e-f958aef3c086" (UID: "8c675ac6-fe1c-42a1-b67e-f958aef3c086"). InnerVolumeSpecName "kube-api-access-8v8ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.923439 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c675ac6-fe1c-42a1-b67e-f958aef3c086" (UID: "8c675ac6-fe1c-42a1-b67e-f958aef3c086"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.962039 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "8c675ac6-fe1c-42a1-b67e-f958aef3c086" (UID: "8c675ac6-fe1c-42a1-b67e-f958aef3c086"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:58 crc kubenswrapper[4813]: I0219 18:52:58.965644 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "8c675ac6-fe1c-42a1-b67e-f958aef3c086" (UID: "8c675ac6-fe1c-42a1-b67e-f958aef3c086"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.002133 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4l5k\" (UniqueName: \"kubernetes.io/projected/2f8101a7-841e-4fa7-b98a-030b82e66c94-kube-api-access-g4l5k\") pod \"2f8101a7-841e-4fa7-b98a-030b82e66c94\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.002205 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-operator-scripts\") pod \"2f8101a7-841e-4fa7-b98a-030b82e66c94\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.002236 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"2f8101a7-841e-4fa7-b98a-030b82e66c94\" (UID: 
\"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.002258 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-config-data-default\") pod \"2f8101a7-841e-4fa7-b98a-030b82e66c94\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.002331 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8101a7-841e-4fa7-b98a-030b82e66c94-combined-ca-bundle\") pod \"2f8101a7-841e-4fa7-b98a-030b82e66c94\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.002355 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2f8101a7-841e-4fa7-b98a-030b82e66c94-config-data-generated\") pod \"2f8101a7-841e-4fa7-b98a-030b82e66c94\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.002385 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-kolla-config\") pod \"2f8101a7-841e-4fa7-b98a-030b82e66c94\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.002426 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f8101a7-841e-4fa7-b98a-030b82e66c94-galera-tls-certs\") pod \"2f8101a7-841e-4fa7-b98a-030b82e66c94\" (UID: \"2f8101a7-841e-4fa7-b98a-030b82e66c94\") " Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.002709 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8c675ac6-fe1c-42a1-b67e-f958aef3c086-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.002725 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v8ll\" (UniqueName: \"kubernetes.io/projected/8c675ac6-fe1c-42a1-b67e-f958aef3c086-kube-api-access-8v8ll\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.002734 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.002744 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c675ac6-fe1c-42a1-b67e-f958aef3c086-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.002752 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.002770 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c675ac6-fe1c-42a1-b67e-f958aef3c086-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.002780 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c675ac6-fe1c-42a1-b67e-f958aef3c086-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.003056 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f8101a7-841e-4fa7-b98a-030b82e66c94-config-data-generated" (OuterVolumeSpecName: 
"config-data-generated") pod "2f8101a7-841e-4fa7-b98a-030b82e66c94" (UID: "2f8101a7-841e-4fa7-b98a-030b82e66c94"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.003106 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "2f8101a7-841e-4fa7-b98a-030b82e66c94" (UID: "2f8101a7-841e-4fa7-b98a-030b82e66c94"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.003482 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "2f8101a7-841e-4fa7-b98a-030b82e66c94" (UID: "2f8101a7-841e-4fa7-b98a-030b82e66c94"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.003533 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f8101a7-841e-4fa7-b98a-030b82e66c94" (UID: "2f8101a7-841e-4fa7-b98a-030b82e66c94"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.007147 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8101a7-841e-4fa7-b98a-030b82e66c94-kube-api-access-g4l5k" (OuterVolumeSpecName: "kube-api-access-g4l5k") pod "2f8101a7-841e-4fa7-b98a-030b82e66c94" (UID: "2f8101a7-841e-4fa7-b98a-030b82e66c94"). InnerVolumeSpecName "kube-api-access-g4l5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.013983 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "mysql-db") pod "2f8101a7-841e-4fa7-b98a-030b82e66c94" (UID: "2f8101a7-841e-4fa7-b98a-030b82e66c94"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.019243 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8c675ac6-fe1c-42a1-b67e-f958aef3c086/ovn-northd/0.log" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.019511 4813 generic.go:334] "Generic (PLEG): container finished" podID="8c675ac6-fe1c-42a1-b67e-f958aef3c086" containerID="9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1" exitCode=139 Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.019550 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8c675ac6-fe1c-42a1-b67e-f958aef3c086","Type":"ContainerDied","Data":"9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1"} Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.019597 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8c675ac6-fe1c-42a1-b67e-f958aef3c086","Type":"ContainerDied","Data":"095d5d76bcb19778cb1e9862a6c6d2c3bcbbdd65e6cc2a1a688411a3d0f3b087"} Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.019616 4813 scope.go:117] "RemoveContainer" containerID="ef8054e602e4f8d1b1826408ddcea8dc3c850848bc422d974dd7ed560d2484d9" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.019663 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.032274 4813 generic.go:334] "Generic (PLEG): container finished" podID="2f8101a7-841e-4fa7-b98a-030b82e66c94" containerID="bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5" exitCode=0 Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.032331 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2f8101a7-841e-4fa7-b98a-030b82e66c94","Type":"ContainerDied","Data":"bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5"} Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.032353 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2f8101a7-841e-4fa7-b98a-030b82e66c94","Type":"ContainerDied","Data":"8ac56b41c2104ffffc323b297d5c7075f4d2d757f588d13ebe56da37cac0331a"} Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.032416 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.036838 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8101a7-841e-4fa7-b98a-030b82e66c94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f8101a7-841e-4fa7-b98a-030b82e66c94" (UID: "2f8101a7-841e-4fa7-b98a-030b82e66c94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.037759 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vfdhs" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.037762 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vfdhs" event={"ID":"21e36d83-8928-4c58-8432-eb51809336a7","Type":"ContainerDied","Data":"086214f0ec11794202792b627646bcda28eda4ea3fcb41e6a8c32ec393918aae"} Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.041862 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.053811 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8101a7-841e-4fa7-b98a-030b82e66c94-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "2f8101a7-841e-4fa7-b98a-030b82e66c94" (UID: "2f8101a7-841e-4fa7-b98a-030b82e66c94"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.066655 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.084711 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.099116 4813 scope.go:117] "RemoveContainer" containerID="9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.099706 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.104089 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4l5k\" (UniqueName: \"kubernetes.io/projected/2f8101a7-841e-4fa7-b98a-030b82e66c94-kube-api-access-g4l5k\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.104110 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.104141 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.104153 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.104163 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f8101a7-841e-4fa7-b98a-030b82e66c94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.104173 4813 reconciler_common.go:293] "Volume 
detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2f8101a7-841e-4fa7-b98a-030b82e66c94-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.104181 4813 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2f8101a7-841e-4fa7-b98a-030b82e66c94-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.104189 4813 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f8101a7-841e-4fa7-b98a-030b82e66c94-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.110422 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.118463 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.123306 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.136442 4813 scope.go:117] "RemoveContainer" containerID="ef8054e602e4f8d1b1826408ddcea8dc3c850848bc422d974dd7ed560d2484d9" Feb 19 18:52:59 crc kubenswrapper[4813]: E0219 18:52:59.136906 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef8054e602e4f8d1b1826408ddcea8dc3c850848bc422d974dd7ed560d2484d9\": container with ID starting with ef8054e602e4f8d1b1826408ddcea8dc3c850848bc422d974dd7ed560d2484d9 not found: ID does not exist" containerID="ef8054e602e4f8d1b1826408ddcea8dc3c850848bc422d974dd7ed560d2484d9" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.136982 4813 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"ef8054e602e4f8d1b1826408ddcea8dc3c850848bc422d974dd7ed560d2484d9"} err="failed to get container status \"ef8054e602e4f8d1b1826408ddcea8dc3c850848bc422d974dd7ed560d2484d9\": rpc error: code = NotFound desc = could not find container \"ef8054e602e4f8d1b1826408ddcea8dc3c850848bc422d974dd7ed560d2484d9\": container with ID starting with ef8054e602e4f8d1b1826408ddcea8dc3c850848bc422d974dd7ed560d2484d9 not found: ID does not exist" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.137015 4813 scope.go:117] "RemoveContainer" containerID="9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1" Feb 19 18:52:59 crc kubenswrapper[4813]: E0219 18:52:59.138206 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1\": container with ID starting with 9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1 not found: ID does not exist" containerID="9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.138231 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1"} err="failed to get container status \"9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1\": rpc error: code = NotFound desc = could not find container \"9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1\": container with ID starting with 9226feb2aaff488664cb846e1b506dadfbfcb60658291ac816823714d99eabc1 not found: ID does not exist" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.138257 4813 scope.go:117] "RemoveContainer" containerID="bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.151642 4813 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/root-account-create-update-vfdhs"] Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.162135 4813 scope.go:117] "RemoveContainer" containerID="0a7c6d6177ee6ab11116c691850a745c7aca97ccf8e6afee4e58b11e0a94661d" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.165891 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vfdhs"] Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.173804 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.187908 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.198281 4813 scope.go:117] "RemoveContainer" containerID="bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5" Feb 19 18:52:59 crc kubenswrapper[4813]: E0219 18:52:59.199137 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5\": container with ID starting with bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5 not found: ID does not exist" containerID="bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5" Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.199180 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5"} err="failed to get container status \"bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5\": rpc error: code = NotFound desc = could not find container \"bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5\": container with ID starting with bdedaff11b60d1f610f35c3711658657b2c38b9a083960b34b9c08ca7b2b0af5 not found: ID does not exist" Feb 19 18:52:59 crc 
kubenswrapper[4813]: I0219 18:52:59.199206 4813 scope.go:117] "RemoveContainer" containerID="0a7c6d6177ee6ab11116c691850a745c7aca97ccf8e6afee4e58b11e0a94661d"
Feb 19 18:52:59 crc kubenswrapper[4813]: E0219 18:52:59.199806 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a7c6d6177ee6ab11116c691850a745c7aca97ccf8e6afee4e58b11e0a94661d\": container with ID starting with 0a7c6d6177ee6ab11116c691850a745c7aca97ccf8e6afee4e58b11e0a94661d not found: ID does not exist" containerID="0a7c6d6177ee6ab11116c691850a745c7aca97ccf8e6afee4e58b11e0a94661d"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.199856 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a7c6d6177ee6ab11116c691850a745c7aca97ccf8e6afee4e58b11e0a94661d"} err="failed to get container status \"0a7c6d6177ee6ab11116c691850a745c7aca97ccf8e6afee4e58b11e0a94661d\": rpc error: code = NotFound desc = could not find container \"0a7c6d6177ee6ab11116c691850a745c7aca97ccf8e6afee4e58b11e0a94661d\": container with ID starting with 0a7c6d6177ee6ab11116c691850a745c7aca97ccf8e6afee4e58b11e0a94661d not found: ID does not exist"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.199883 4813 scope.go:117] "RemoveContainer" containerID="c110076167c797991fa5c3e36a8495fafd043d5dd824a10ef14375c14c671fe1"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.205322 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.366547 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.371979 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"]
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.399930 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 19 18:52:59 crc kubenswrapper[4813]: E0219 18:52:59.413150 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Feb 19 18:52:59 crc kubenswrapper[4813]: E0219 18:52:59.413211 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data podName:db22a584-f05a-41ba-ad23-387b4100a9e1 nodeName:}" failed. No retries permitted until 2026-02-19 18:53:07.413196395 +0000 UTC m=+1406.638636936 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data") pod "rabbitmq-cell1-server-0" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1") : configmap "rabbitmq-cell1-config-data" not found
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.482094 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="153c22ed-7e2e-496e-9a13-9ef0ce79efd8" path="/var/lib/kubelet/pods/153c22ed-7e2e-496e-9a13-9ef0ce79efd8/volumes"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.482671 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e36d83-8928-4c58-8432-eb51809336a7" path="/var/lib/kubelet/pods/21e36d83-8928-4c58-8432-eb51809336a7/volumes"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.483187 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d4e6cd2-75bd-43bd-9f0e-fd35002ab607" path="/var/lib/kubelet/pods/2d4e6cd2-75bd-43bd-9f0e-fd35002ab607/volumes"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.485308 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f8101a7-841e-4fa7-b98a-030b82e66c94" path="/var/lib/kubelet/pods/2f8101a7-841e-4fa7-b98a-030b82e66c94/volumes"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.485822 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3422b8bd-2817-4e8f-8a5a-731c773b73a4" path="/var/lib/kubelet/pods/3422b8bd-2817-4e8f-8a5a-731c773b73a4/volumes"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.486888 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f4ab094-423e-40ff-bf70-50275c1d41cb" path="/var/lib/kubelet/pods/4f4ab094-423e-40ff-bf70-50275c1d41cb/volumes"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.487453 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" path="/var/lib/kubelet/pods/6ecb54a2-f23d-45b0-8311-eb5ff83f9f30/volumes"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.488773 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f4b651a-00cc-4ca7-b49e-713eed4968b9" path="/var/lib/kubelet/pods/6f4b651a-00cc-4ca7-b49e-713eed4968b9/volumes"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.489402 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c675ac6-fe1c-42a1-b67e-f958aef3c086" path="/var/lib/kubelet/pods/8c675ac6-fe1c-42a1-b67e-f958aef3c086/volumes"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.490324 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="973faa07-fbab-4a50-ac4a-c62302e9f9c1" path="/var/lib/kubelet/pods/973faa07-fbab-4a50-ac4a-c62302e9f9c1/volumes"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.490825 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c239fd72-88d6-4394-bf24-be4fb0b3e579" path="/var/lib/kubelet/pods/c239fd72-88d6-4394-bf24-be4fb0b3e579/volumes"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.491378 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c389f482-3000-4f94-924d-158c9d51a2e9" path="/var/lib/kubelet/pods/c389f482-3000-4f94-924d-158c9d51a2e9/volumes"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.492398 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" path="/var/lib/kubelet/pods/c9c7a4f0-3d5e-4e7a-8958-8a2022de3394/volumes"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.514482 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c69ff3db-8806-451a-9df0-c6289c327579-pod-info\") pod \"c69ff3db-8806-451a-9df0-c6289c327579\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.514583 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c69ff3db-8806-451a-9df0-c6289c327579-erlang-cookie-secret\") pod \"c69ff3db-8806-451a-9df0-c6289c327579\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.514647 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data\") pod \"c69ff3db-8806-451a-9df0-c6289c327579\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.514698 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slk9l\" (UniqueName: \"kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-kube-api-access-slk9l\") pod \"c69ff3db-8806-451a-9df0-c6289c327579\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.514721 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-erlang-cookie\") pod \"c69ff3db-8806-451a-9df0-c6289c327579\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.514752 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-server-conf\") pod \"c69ff3db-8806-451a-9df0-c6289c327579\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.514801 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-plugins-conf\") pod \"c69ff3db-8806-451a-9df0-c6289c327579\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.514826 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-plugins\") pod \"c69ff3db-8806-451a-9df0-c6289c327579\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.514861 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-confd\") pod \"c69ff3db-8806-451a-9df0-c6289c327579\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.514907 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"c69ff3db-8806-451a-9df0-c6289c327579\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.514972 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-tls\") pod \"c69ff3db-8806-451a-9df0-c6289c327579\" (UID: \"c69ff3db-8806-451a-9df0-c6289c327579\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.516502 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c69ff3db-8806-451a-9df0-c6289c327579" (UID: "c69ff3db-8806-451a-9df0-c6289c327579"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.516741 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c69ff3db-8806-451a-9df0-c6289c327579" (UID: "c69ff3db-8806-451a-9df0-c6289c327579"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.517363 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c69ff3db-8806-451a-9df0-c6289c327579-pod-info" (OuterVolumeSpecName: "pod-info") pod "c69ff3db-8806-451a-9df0-c6289c327579" (UID: "c69ff3db-8806-451a-9df0-c6289c327579"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.517673 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c69ff3db-8806-451a-9df0-c6289c327579" (UID: "c69ff3db-8806-451a-9df0-c6289c327579"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.519585 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-kube-api-access-slk9l" (OuterVolumeSpecName: "kube-api-access-slk9l") pod "c69ff3db-8806-451a-9df0-c6289c327579" (UID: "c69ff3db-8806-451a-9df0-c6289c327579"). InnerVolumeSpecName "kube-api-access-slk9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.522384 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c69ff3db-8806-451a-9df0-c6289c327579-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c69ff3db-8806-451a-9df0-c6289c327579" (UID: "c69ff3db-8806-451a-9df0-c6289c327579"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.522743 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c69ff3db-8806-451a-9df0-c6289c327579" (UID: "c69ff3db-8806-451a-9df0-c6289c327579"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.526002 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "c69ff3db-8806-451a-9df0-c6289c327579" (UID: "c69ff3db-8806-451a-9df0-c6289c327579"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.546678 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data" (OuterVolumeSpecName: "config-data") pod "c69ff3db-8806-451a-9df0-c6289c327579" (UID: "c69ff3db-8806-451a-9df0-c6289c327579"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.573107 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-server-conf" (OuterVolumeSpecName: "server-conf") pod "c69ff3db-8806-451a-9df0-c6289c327579" (UID: "c69ff3db-8806-451a-9df0-c6289c327579"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.618740 4813 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c69ff3db-8806-451a-9df0-c6289c327579-pod-info\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.618772 4813 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c69ff3db-8806-451a-9df0-c6289c327579-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.618782 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.618791 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slk9l\" (UniqueName: \"kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-kube-api-access-slk9l\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.618801 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.618808 4813 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-server-conf\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.618816 4813 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c69ff3db-8806-451a-9df0-c6289c327579-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.618824 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.618844 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.618852 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.634986 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.671777 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c69ff3db-8806-451a-9df0-c6289c327579" (UID: "c69ff3db-8806-451a-9df0-c6289c327579"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.696693 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5899f78d95-lmnxh"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.721680 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c69ff3db-8806-451a-9df0-c6289c327579-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.722078 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.823470 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-combined-ca-bundle\") pod \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.823537 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlj9d\" (UniqueName: \"kubernetes.io/projected/79cdc675-a16c-4c18-bcef-d844d7a2f75d-kube-api-access-zlj9d\") pod \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.823561 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-fernet-keys\") pod \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.823601 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-config-data\") pod \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.823653 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-internal-tls-certs\") pod \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.823702 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-scripts\") pod \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.823746 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-public-tls-certs\") pod \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.823776 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-credential-keys\") pod \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\" (UID: \"79cdc675-a16c-4c18-bcef-d844d7a2f75d\") "
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.826589 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79cdc675-a16c-4c18-bcef-d844d7a2f75d-kube-api-access-zlj9d" (OuterVolumeSpecName: "kube-api-access-zlj9d") pod "79cdc675-a16c-4c18-bcef-d844d7a2f75d" (UID: "79cdc675-a16c-4c18-bcef-d844d7a2f75d"). InnerVolumeSpecName "kube-api-access-zlj9d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.828444 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "79cdc675-a16c-4c18-bcef-d844d7a2f75d" (UID: "79cdc675-a16c-4c18-bcef-d844d7a2f75d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.837463 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-scripts" (OuterVolumeSpecName: "scripts") pod "79cdc675-a16c-4c18-bcef-d844d7a2f75d" (UID: "79cdc675-a16c-4c18-bcef-d844d7a2f75d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.837613 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "79cdc675-a16c-4c18-bcef-d844d7a2f75d" (UID: "79cdc675-a16c-4c18-bcef-d844d7a2f75d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.840826 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.847154 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79cdc675-a16c-4c18-bcef-d844d7a2f75d" (UID: "79cdc675-a16c-4c18-bcef-d844d7a2f75d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.865014 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-config-data" (OuterVolumeSpecName: "config-data") pod "79cdc675-a16c-4c18-bcef-d844d7a2f75d" (UID: "79cdc675-a16c-4c18-bcef-d844d7a2f75d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.909183 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "79cdc675-a16c-4c18-bcef-d844d7a2f75d" (UID: "79cdc675-a16c-4c18-bcef-d844d7a2f75d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.910131 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "79cdc675-a16c-4c18-bcef-d844d7a2f75d" (UID: "79cdc675-a16c-4c18-bcef-d844d7a2f75d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.924877 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.924906 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlj9d\" (UniqueName: \"kubernetes.io/projected/79cdc675-a16c-4c18-bcef-d844d7a2f75d-kube-api-access-zlj9d\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.924915 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.924923 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.924931 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.924939 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.924963 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: I0219 18:52:59.924971 4813 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/79cdc675-a16c-4c18-bcef-d844d7a2f75d-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 19 18:52:59 crc kubenswrapper[4813]: E0219 18:52:59.963454 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 19 18:52:59 crc kubenswrapper[4813]: E0219 18:52:59.963755 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 19 18:52:59 crc kubenswrapper[4813]: E0219 18:52:59.963971 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 19 18:52:59 crc kubenswrapper[4813]: E0219 18:52:59.964074 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-w7s6z" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovsdb-server"
Feb 19 18:52:59 crc kubenswrapper[4813]: E0219 18:52:59.964444 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 19 18:52:59 crc kubenswrapper[4813]: E0219 18:52:59.965837 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 19 18:52:59 crc kubenswrapper[4813]: E0219 18:52:59.967177 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 19 18:52:59 crc kubenswrapper[4813]: E0219 18:52:59.967254 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-w7s6z" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovs-vswitchd"
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.026673 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-plugins-conf\") pod \"db22a584-f05a-41ba-ad23-387b4100a9e1\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") "
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.026715 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db22a584-f05a-41ba-ad23-387b4100a9e1-erlang-cookie-secret\") pod \"db22a584-f05a-41ba-ad23-387b4100a9e1\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") "
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.026734 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-plugins\") pod \"db22a584-f05a-41ba-ad23-387b4100a9e1\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") "
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.026751 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-confd\") pod \"db22a584-f05a-41ba-ad23-387b4100a9e1\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") "
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.026779 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-erlang-cookie\") pod \"db22a584-f05a-41ba-ad23-387b4100a9e1\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") "
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.026807 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fmkv\" (UniqueName: \"kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-kube-api-access-9fmkv\") pod \"db22a584-f05a-41ba-ad23-387b4100a9e1\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") "
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.026830 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db22a584-f05a-41ba-ad23-387b4100a9e1-pod-info\") pod \"db22a584-f05a-41ba-ad23-387b4100a9e1\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") "
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.026849 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data\") pod \"db22a584-f05a-41ba-ad23-387b4100a9e1\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") "
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.026893 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-tls\") pod \"db22a584-f05a-41ba-ad23-387b4100a9e1\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") "
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.026908 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-server-conf\") pod \"db22a584-f05a-41ba-ad23-387b4100a9e1\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") "
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.026924 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"db22a584-f05a-41ba-ad23-387b4100a9e1\" (UID: \"db22a584-f05a-41ba-ad23-387b4100a9e1\") "
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.027936 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "db22a584-f05a-41ba-ad23-387b4100a9e1" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.028011 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "db22a584-f05a-41ba-ad23-387b4100a9e1" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.028469 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "db22a584-f05a-41ba-ad23-387b4100a9e1" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.030510 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "db22a584-f05a-41ba-ad23-387b4100a9e1" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.030848 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "db22a584-f05a-41ba-ad23-387b4100a9e1" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.032150 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/db22a584-f05a-41ba-ad23-387b4100a9e1-pod-info" (OuterVolumeSpecName: "pod-info") pod "db22a584-f05a-41ba-ad23-387b4100a9e1" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.032386 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db22a584-f05a-41ba-ad23-387b4100a9e1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "db22a584-f05a-41ba-ad23-387b4100a9e1" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.033470 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-kube-api-access-9fmkv" (OuterVolumeSpecName: "kube-api-access-9fmkv") pod "db22a584-f05a-41ba-ad23-387b4100a9e1" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1"). InnerVolumeSpecName "kube-api-access-9fmkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.046550 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data" (OuterVolumeSpecName: "config-data") pod "db22a584-f05a-41ba-ad23-387b4100a9e1" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.059711 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-server-conf" (OuterVolumeSpecName: "server-conf") pod "db22a584-f05a-41ba-ad23-387b4100a9e1" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.088339 4813 generic.go:334] "Generic (PLEG): container finished" podID="c69ff3db-8806-451a-9df0-c6289c327579" containerID="902b709f26604393c782b1c130285fdc4bd898bf2ec34607dbfa328616279266" exitCode=0
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.088401 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c69ff3db-8806-451a-9df0-c6289c327579","Type":"ContainerDied","Data":"902b709f26604393c782b1c130285fdc4bd898bf2ec34607dbfa328616279266"}
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.088425 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c69ff3db-8806-451a-9df0-c6289c327579","Type":"ContainerDied","Data":"a9a04dd3dcc3db19e8091009cb9d7bd2558fee7cdea0765483552bb6b90cd4e9"}
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.088439 4813 scope.go:117] "RemoveContainer" containerID="902b709f26604393c782b1c130285fdc4bd898bf2ec34607dbfa328616279266"
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.088528 4813 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.102702 4813 generic.go:334] "Generic (PLEG): container finished" podID="db22a584-f05a-41ba-ad23-387b4100a9e1" containerID="e519f5b9e793340baf7974d5e67220195aa91581ee6dc67bcc7ab9451042be70" exitCode=0 Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.102804 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.102797 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db22a584-f05a-41ba-ad23-387b4100a9e1","Type":"ContainerDied","Data":"e519f5b9e793340baf7974d5e67220195aa91581ee6dc67bcc7ab9451042be70"} Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.103143 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db22a584-f05a-41ba-ad23-387b4100a9e1","Type":"ContainerDied","Data":"1ab17441071d8491a23819db4c3d6733cf81fffae60d675644fa1b3c73f438e0"} Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.106176 4813 generic.go:334] "Generic (PLEG): container finished" podID="79cdc675-a16c-4c18-bcef-d844d7a2f75d" containerID="11ff41f1c3a05377a4d38f769ca6258be7fe43feeb2f1e8eb06714701d85e419" exitCode=0 Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.106244 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5899f78d95-lmnxh" event={"ID":"79cdc675-a16c-4c18-bcef-d844d7a2f75d","Type":"ContainerDied","Data":"11ff41f1c3a05377a4d38f769ca6258be7fe43feeb2f1e8eb06714701d85e419"} Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.106275 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5899f78d95-lmnxh" event={"ID":"79cdc675-a16c-4c18-bcef-d844d7a2f75d","Type":"ContainerDied","Data":"d52f3f1ee71bd9828d230464f75a582c40364abbbcb8418142451780240b5f77"} Feb 19 
18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.106331 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5899f78d95-lmnxh" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.118235 4813 scope.go:117] "RemoveContainer" containerID="b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.129936 4813 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.131484 4813 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db22a584-f05a-41ba-ad23-387b4100a9e1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.131504 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.131513 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.131523 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fmkv\" (UniqueName: \"kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-kube-api-access-9fmkv\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.131532 4813 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db22a584-f05a-41ba-ad23-387b4100a9e1-pod-info\") on node \"crc\" DevicePath \"\"" 
Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.131540 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.131548 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.131556 4813 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db22a584-f05a-41ba-ad23-387b4100a9e1-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.131577 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.132864 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "db22a584-f05a-41ba-ad23-387b4100a9e1" (UID: "db22a584-f05a-41ba-ad23-387b4100a9e1"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.163574 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5899f78d95-lmnxh"] Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.175207 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.175483 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5899f78d95-lmnxh"] Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.176646 4813 scope.go:117] "RemoveContainer" containerID="902b709f26604393c782b1c130285fdc4bd898bf2ec34607dbfa328616279266" Feb 19 18:53:00 crc kubenswrapper[4813]: E0219 18:53:00.183165 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"902b709f26604393c782b1c130285fdc4bd898bf2ec34607dbfa328616279266\": container with ID starting with 902b709f26604393c782b1c130285fdc4bd898bf2ec34607dbfa328616279266 not found: ID does not exist" containerID="902b709f26604393c782b1c130285fdc4bd898bf2ec34607dbfa328616279266" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.183216 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"902b709f26604393c782b1c130285fdc4bd898bf2ec34607dbfa328616279266"} err="failed to get container status \"902b709f26604393c782b1c130285fdc4bd898bf2ec34607dbfa328616279266\": rpc error: code = NotFound desc = could not find container \"902b709f26604393c782b1c130285fdc4bd898bf2ec34607dbfa328616279266\": container with ID starting with 902b709f26604393c782b1c130285fdc4bd898bf2ec34607dbfa328616279266 not found: ID does not exist" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.183244 4813 scope.go:117] "RemoveContainer" 
containerID="b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168" Feb 19 18:53:00 crc kubenswrapper[4813]: E0219 18:53:00.183944 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168\": container with ID starting with b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168 not found: ID does not exist" containerID="b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.183978 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168"} err="failed to get container status \"b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168\": rpc error: code = NotFound desc = could not find container \"b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168\": container with ID starting with b9426c242aece9d97709c8664b0a4c377652f962eb00395db079b212fbca6168 not found: ID does not exist" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.183994 4813 scope.go:117] "RemoveContainer" containerID="e519f5b9e793340baf7974d5e67220195aa91581ee6dc67bcc7ab9451042be70" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.184056 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.191170 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.214748 4813 scope.go:117] "RemoveContainer" containerID="0b467ef85f9a437d89647927595333d8dc397b82f76409cebc5dee43012b081e" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.232907 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.232938 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db22a584-f05a-41ba-ad23-387b4100a9e1-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.330255 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.330314 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.330365 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.330870 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c0f644376cce138d79691366e77b885fec90be67e37a466be8ebae7b3478e829"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.330940 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" 
containerName="machine-config-daemon" containerID="cri-o://c0f644376cce138d79691366e77b885fec90be67e37a466be8ebae7b3478e829" gracePeriod=600 Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.338716 4813 scope.go:117] "RemoveContainer" containerID="e519f5b9e793340baf7974d5e67220195aa91581ee6dc67bcc7ab9451042be70" Feb 19 18:53:00 crc kubenswrapper[4813]: E0219 18:53:00.339692 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e519f5b9e793340baf7974d5e67220195aa91581ee6dc67bcc7ab9451042be70\": container with ID starting with e519f5b9e793340baf7974d5e67220195aa91581ee6dc67bcc7ab9451042be70 not found: ID does not exist" containerID="e519f5b9e793340baf7974d5e67220195aa91581ee6dc67bcc7ab9451042be70" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.339749 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e519f5b9e793340baf7974d5e67220195aa91581ee6dc67bcc7ab9451042be70"} err="failed to get container status \"e519f5b9e793340baf7974d5e67220195aa91581ee6dc67bcc7ab9451042be70\": rpc error: code = NotFound desc = could not find container \"e519f5b9e793340baf7974d5e67220195aa91581ee6dc67bcc7ab9451042be70\": container with ID starting with e519f5b9e793340baf7974d5e67220195aa91581ee6dc67bcc7ab9451042be70 not found: ID does not exist" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.339785 4813 scope.go:117] "RemoveContainer" containerID="0b467ef85f9a437d89647927595333d8dc397b82f76409cebc5dee43012b081e" Feb 19 18:53:00 crc kubenswrapper[4813]: E0219 18:53:00.342575 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b467ef85f9a437d89647927595333d8dc397b82f76409cebc5dee43012b081e\": container with ID starting with 0b467ef85f9a437d89647927595333d8dc397b82f76409cebc5dee43012b081e not found: ID does not exist" 
containerID="0b467ef85f9a437d89647927595333d8dc397b82f76409cebc5dee43012b081e" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.342613 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b467ef85f9a437d89647927595333d8dc397b82f76409cebc5dee43012b081e"} err="failed to get container status \"0b467ef85f9a437d89647927595333d8dc397b82f76409cebc5dee43012b081e\": rpc error: code = NotFound desc = could not find container \"0b467ef85f9a437d89647927595333d8dc397b82f76409cebc5dee43012b081e\": container with ID starting with 0b467ef85f9a437d89647927595333d8dc397b82f76409cebc5dee43012b081e not found: ID does not exist" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.342633 4813 scope.go:117] "RemoveContainer" containerID="11ff41f1c3a05377a4d38f769ca6258be7fe43feeb2f1e8eb06714701d85e419" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.411878 4813 scope.go:117] "RemoveContainer" containerID="11ff41f1c3a05377a4d38f769ca6258be7fe43feeb2f1e8eb06714701d85e419" Feb 19 18:53:00 crc kubenswrapper[4813]: E0219 18:53:00.412806 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11ff41f1c3a05377a4d38f769ca6258be7fe43feeb2f1e8eb06714701d85e419\": container with ID starting with 11ff41f1c3a05377a4d38f769ca6258be7fe43feeb2f1e8eb06714701d85e419 not found: ID does not exist" containerID="11ff41f1c3a05377a4d38f769ca6258be7fe43feeb2f1e8eb06714701d85e419" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.412838 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ff41f1c3a05377a4d38f769ca6258be7fe43feeb2f1e8eb06714701d85e419"} err="failed to get container status \"11ff41f1c3a05377a4d38f769ca6258be7fe43feeb2f1e8eb06714701d85e419\": rpc error: code = NotFound desc = could not find container \"11ff41f1c3a05377a4d38f769ca6258be7fe43feeb2f1e8eb06714701d85e419\": container with ID starting with 
11ff41f1c3a05377a4d38f769ca6258be7fe43feeb2f1e8eb06714701d85e419 not found: ID does not exist" Feb 19 18:53:00 crc kubenswrapper[4813]: E0219 18:53:00.465991 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e6ba2ed7bbec736dcf293ba52811dd6e91ad2796a685f3c0760ed0b7c04cd3a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 18:53:00 crc kubenswrapper[4813]: E0219 18:53:00.478466 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e6ba2ed7bbec736dcf293ba52811dd6e91ad2796a685f3c0760ed0b7c04cd3a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 18:53:00 crc kubenswrapper[4813]: E0219 18:53:00.483342 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e6ba2ed7bbec736dcf293ba52811dd6e91ad2796a685f3c0760ed0b7c04cd3a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 18:53:00 crc kubenswrapper[4813]: E0219 18:53:00.483395 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b69a5561-31f2-4e4f-96d3-d0db19a6a51f" containerName="nova-scheduler-scheduler" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.822923 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.830588 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 18:53:00 crc 
kubenswrapper[4813]: I0219 18:53:00.831227 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" Feb 19 18:53:00 crc kubenswrapper[4813]: I0219 18:53:00.836215 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.047425 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aeb8bcb-4373-48d4-9ac6-e6472189e440-logs\") pod \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.047502 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-config-data\") pod \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.047538 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-combined-ca-bundle\") pod \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.047587 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-config-data-custom\") pod \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.047610 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-config-data\") pod \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.047642 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n6d2\" (UniqueName: \"kubernetes.io/projected/81790f67-278c-4b6a-82e5-ec5bb521c6ac-kube-api-access-9n6d2\") pod \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.047660 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81790f67-278c-4b6a-82e5-ec5bb521c6ac-logs\") pod \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.047699 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-config-data-custom\") pod \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\" (UID: \"81790f67-278c-4b6a-82e5-ec5bb521c6ac\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.047714 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkhlf\" (UniqueName: \"kubernetes.io/projected/5aeb8bcb-4373-48d4-9ac6-e6472189e440-kube-api-access-lkhlf\") pod \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.047745 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-combined-ca-bundle\") pod \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\" (UID: \"5aeb8bcb-4373-48d4-9ac6-e6472189e440\") " Feb 19 18:53:01 crc kubenswrapper[4813]: 
I0219 18:53:01.050518 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aeb8bcb-4373-48d4-9ac6-e6472189e440-logs" (OuterVolumeSpecName: "logs") pod "5aeb8bcb-4373-48d4-9ac6-e6472189e440" (UID: "5aeb8bcb-4373-48d4-9ac6-e6472189e440"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.050542 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81790f67-278c-4b6a-82e5-ec5bb521c6ac-logs" (OuterVolumeSpecName: "logs") pod "81790f67-278c-4b6a-82e5-ec5bb521c6ac" (UID: "81790f67-278c-4b6a-82e5-ec5bb521c6ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.056072 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5aeb8bcb-4373-48d4-9ac6-e6472189e440" (UID: "5aeb8bcb-4373-48d4-9ac6-e6472189e440"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.056186 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aeb8bcb-4373-48d4-9ac6-e6472189e440-kube-api-access-lkhlf" (OuterVolumeSpecName: "kube-api-access-lkhlf") pod "5aeb8bcb-4373-48d4-9ac6-e6472189e440" (UID: "5aeb8bcb-4373-48d4-9ac6-e6472189e440"). InnerVolumeSpecName "kube-api-access-lkhlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.056242 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81790f67-278c-4b6a-82e5-ec5bb521c6ac-kube-api-access-9n6d2" (OuterVolumeSpecName: "kube-api-access-9n6d2") pod "81790f67-278c-4b6a-82e5-ec5bb521c6ac" (UID: "81790f67-278c-4b6a-82e5-ec5bb521c6ac"). InnerVolumeSpecName "kube-api-access-9n6d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.056236 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "81790f67-278c-4b6a-82e5-ec5bb521c6ac" (UID: "81790f67-278c-4b6a-82e5-ec5bb521c6ac"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.083559 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81790f67-278c-4b6a-82e5-ec5bb521c6ac" (UID: "81790f67-278c-4b6a-82e5-ec5bb521c6ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.083719 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5aeb8bcb-4373-48d4-9ac6-e6472189e440" (UID: "5aeb8bcb-4373-48d4-9ac6-e6472189e440"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.089543 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-config-data" (OuterVolumeSpecName: "config-data") pod "81790f67-278c-4b6a-82e5-ec5bb521c6ac" (UID: "81790f67-278c-4b6a-82e5-ec5bb521c6ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.102367 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-config-data" (OuterVolumeSpecName: "config-data") pod "5aeb8bcb-4373-48d4-9ac6-e6472189e440" (UID: "5aeb8bcb-4373-48d4-9ac6-e6472189e440"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.129411 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="c0f644376cce138d79691366e77b885fec90be67e37a466be8ebae7b3478e829" exitCode=0 Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.129465 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"c0f644376cce138d79691366e77b885fec90be67e37a466be8ebae7b3478e829"} Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.129492 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17"} Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.129506 4813 scope.go:117] "RemoveContainer" 
containerID="94eee4c6af3220d0f9daafe2c95225cd4a99afe2b2f03d1a34bc8f27e9e13151" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.131236 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.133068 4813 generic.go:334] "Generic (PLEG): container finished" podID="b69a5561-31f2-4e4f-96d3-d0db19a6a51f" containerID="4e6ba2ed7bbec736dcf293ba52811dd6e91ad2796a685f3c0760ed0b7c04cd3a" exitCode=0 Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.133146 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b69a5561-31f2-4e4f-96d3-d0db19a6a51f","Type":"ContainerDied","Data":"4e6ba2ed7bbec736dcf293ba52811dd6e91ad2796a685f3c0760ed0b7c04cd3a"} Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.142404 4813 generic.go:334] "Generic (PLEG): container finished" podID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerID="7b5552d482368d69a980b6f9acc9f03b5afc2c61db4aac5df25c2dd3bc9b75f1" exitCode=0 Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.142469 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fbcd22-57c9-4c44-99e8-f9307f26a525","Type":"ContainerDied","Data":"7b5552d482368d69a980b6f9acc9f03b5afc2c61db4aac5df25c2dd3bc9b75f1"} Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.143872 4813 generic.go:334] "Generic (PLEG): container finished" podID="81790f67-278c-4b6a-82e5-ec5bb521c6ac" containerID="d4ea4dbc1137b15bb3aa4bc83a3b37e503f5c54a481f8b73a7cfd507673900a8" exitCode=0 Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.143905 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" event={"ID":"81790f67-278c-4b6a-82e5-ec5bb521c6ac","Type":"ContainerDied","Data":"d4ea4dbc1137b15bb3aa4bc83a3b37e503f5c54a481f8b73a7cfd507673900a8"} Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 
18:53:01.143927 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.143959 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-bf98f678b-j6t6g" event={"ID":"81790f67-278c-4b6a-82e5-ec5bb521c6ac","Type":"ContainerDied","Data":"4a9a96b9f255b2a2ab8e9274d60e36244e1129f48007f631834faf74f4ecde1e"} Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.145832 4813 generic.go:334] "Generic (PLEG): container finished" podID="5aeb8bcb-4373-48d4-9ac6-e6472189e440" containerID="7154bb37e5f6fe174542d1e2d97dda065217b924fa83469cc6cbb289e826d07a" exitCode=0 Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.145886 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" event={"ID":"5aeb8bcb-4373-48d4-9ac6-e6472189e440","Type":"ContainerDied","Data":"7154bb37e5f6fe174542d1e2d97dda065217b924fa83469cc6cbb289e826d07a"} Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.145906 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" event={"ID":"5aeb8bcb-4373-48d4-9ac6-e6472189e440","Type":"ContainerDied","Data":"4153383a658984fa3ebba097ecc8a95a05dc5f65aa988487fe746d1ce6979bb1"} Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.146000 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-9f6f7ccc7-dwqzl" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.161145 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.161198 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkhlf\" (UniqueName: \"kubernetes.io/projected/5aeb8bcb-4373-48d4-9ac6-e6472189e440-kube-api-access-lkhlf\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.161222 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.161236 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aeb8bcb-4373-48d4-9ac6-e6472189e440-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.161250 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.161262 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.161281 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5aeb8bcb-4373-48d4-9ac6-e6472189e440-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: 
I0219 18:53:01.161293 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81790f67-278c-4b6a-82e5-ec5bb521c6ac-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.161305 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n6d2\" (UniqueName: \"kubernetes.io/projected/81790f67-278c-4b6a-82e5-ec5bb521c6ac-kube-api-access-9n6d2\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.161317 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81790f67-278c-4b6a-82e5-ec5bb521c6ac-logs\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.215774 4813 scope.go:117] "RemoveContainer" containerID="4e6ba2ed7bbec736dcf293ba52811dd6e91ad2796a685f3c0760ed0b7c04cd3a" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.233221 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-bf98f678b-j6t6g"] Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.241657 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-bf98f678b-j6t6g"] Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.278014 4813 scope.go:117] "RemoveContainer" containerID="d4ea4dbc1137b15bb3aa4bc83a3b37e503f5c54a481f8b73a7cfd507673900a8" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.279051 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls27j\" (UniqueName: \"kubernetes.io/projected/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-kube-api-access-ls27j\") pod \"b69a5561-31f2-4e4f-96d3-d0db19a6a51f\" (UID: \"b69a5561-31f2-4e4f-96d3-d0db19a6a51f\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.279156 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-combined-ca-bundle\") pod \"b69a5561-31f2-4e4f-96d3-d0db19a6a51f\" (UID: \"b69a5561-31f2-4e4f-96d3-d0db19a6a51f\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.279266 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-config-data\") pod \"b69a5561-31f2-4e4f-96d3-d0db19a6a51f\" (UID: \"b69a5561-31f2-4e4f-96d3-d0db19a6a51f\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.284302 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-kube-api-access-ls27j" (OuterVolumeSpecName: "kube-api-access-ls27j") pod "b69a5561-31f2-4e4f-96d3-d0db19a6a51f" (UID: "b69a5561-31f2-4e4f-96d3-d0db19a6a51f"). InnerVolumeSpecName "kube-api-access-ls27j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.287163 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-9f6f7ccc7-dwqzl"] Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.294820 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-9f6f7ccc7-dwqzl"] Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.297303 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-config-data" (OuterVolumeSpecName: "config-data") pod "b69a5561-31f2-4e4f-96d3-d0db19a6a51f" (UID: "b69a5561-31f2-4e4f-96d3-d0db19a6a51f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.300993 4813 scope.go:117] "RemoveContainer" containerID="479709b4af77750e6b92b1b1ddf45a2dfef9f8a888e9bcb59242645580afb4a9" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.304931 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b69a5561-31f2-4e4f-96d3-d0db19a6a51f" (UID: "b69a5561-31f2-4e4f-96d3-d0db19a6a51f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.356658 4813 scope.go:117] "RemoveContainer" containerID="d4ea4dbc1137b15bb3aa4bc83a3b37e503f5c54a481f8b73a7cfd507673900a8" Feb 19 18:53:01 crc kubenswrapper[4813]: E0219 18:53:01.361088 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4ea4dbc1137b15bb3aa4bc83a3b37e503f5c54a481f8b73a7cfd507673900a8\": container with ID starting with d4ea4dbc1137b15bb3aa4bc83a3b37e503f5c54a481f8b73a7cfd507673900a8 not found: ID does not exist" containerID="d4ea4dbc1137b15bb3aa4bc83a3b37e503f5c54a481f8b73a7cfd507673900a8" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.361128 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4ea4dbc1137b15bb3aa4bc83a3b37e503f5c54a481f8b73a7cfd507673900a8"} err="failed to get container status \"d4ea4dbc1137b15bb3aa4bc83a3b37e503f5c54a481f8b73a7cfd507673900a8\": rpc error: code = NotFound desc = could not find container \"d4ea4dbc1137b15bb3aa4bc83a3b37e503f5c54a481f8b73a7cfd507673900a8\": container with ID starting with d4ea4dbc1137b15bb3aa4bc83a3b37e503f5c54a481f8b73a7cfd507673900a8 not found: ID does not exist" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.361156 4813 scope.go:117] 
"RemoveContainer" containerID="479709b4af77750e6b92b1b1ddf45a2dfef9f8a888e9bcb59242645580afb4a9" Feb 19 18:53:01 crc kubenswrapper[4813]: E0219 18:53:01.361608 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"479709b4af77750e6b92b1b1ddf45a2dfef9f8a888e9bcb59242645580afb4a9\": container with ID starting with 479709b4af77750e6b92b1b1ddf45a2dfef9f8a888e9bcb59242645580afb4a9 not found: ID does not exist" containerID="479709b4af77750e6b92b1b1ddf45a2dfef9f8a888e9bcb59242645580afb4a9" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.361627 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479709b4af77750e6b92b1b1ddf45a2dfef9f8a888e9bcb59242645580afb4a9"} err="failed to get container status \"479709b4af77750e6b92b1b1ddf45a2dfef9f8a888e9bcb59242645580afb4a9\": rpc error: code = NotFound desc = could not find container \"479709b4af77750e6b92b1b1ddf45a2dfef9f8a888e9bcb59242645580afb4a9\": container with ID starting with 479709b4af77750e6b92b1b1ddf45a2dfef9f8a888e9bcb59242645580afb4a9 not found: ID does not exist" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.361639 4813 scope.go:117] "RemoveContainer" containerID="7154bb37e5f6fe174542d1e2d97dda065217b924fa83469cc6cbb289e826d07a" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.380726 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls27j\" (UniqueName: \"kubernetes.io/projected/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-kube-api-access-ls27j\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.380751 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.380762 4813 reconciler_common.go:293] "Volume detached 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b69a5561-31f2-4e4f-96d3-d0db19a6a51f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.389658 4813 scope.go:117] "RemoveContainer" containerID="52e141647daf76f468f278365830d8a4de021205dfe34eb192a73cab648f4e9a" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.398024 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.412093 4813 scope.go:117] "RemoveContainer" containerID="7154bb37e5f6fe174542d1e2d97dda065217b924fa83469cc6cbb289e826d07a" Feb 19 18:53:01 crc kubenswrapper[4813]: E0219 18:53:01.413347 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7154bb37e5f6fe174542d1e2d97dda065217b924fa83469cc6cbb289e826d07a\": container with ID starting with 7154bb37e5f6fe174542d1e2d97dda065217b924fa83469cc6cbb289e826d07a not found: ID does not exist" containerID="7154bb37e5f6fe174542d1e2d97dda065217b924fa83469cc6cbb289e826d07a" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.413404 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7154bb37e5f6fe174542d1e2d97dda065217b924fa83469cc6cbb289e826d07a"} err="failed to get container status \"7154bb37e5f6fe174542d1e2d97dda065217b924fa83469cc6cbb289e826d07a\": rpc error: code = NotFound desc = could not find container \"7154bb37e5f6fe174542d1e2d97dda065217b924fa83469cc6cbb289e826d07a\": container with ID starting with 7154bb37e5f6fe174542d1e2d97dda065217b924fa83469cc6cbb289e826d07a not found: ID does not exist" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.413437 4813 scope.go:117] "RemoveContainer" containerID="52e141647daf76f468f278365830d8a4de021205dfe34eb192a73cab648f4e9a" Feb 19 18:53:01 crc kubenswrapper[4813]: E0219 18:53:01.414016 4813 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e141647daf76f468f278365830d8a4de021205dfe34eb192a73cab648f4e9a\": container with ID starting with 52e141647daf76f468f278365830d8a4de021205dfe34eb192a73cab648f4e9a not found: ID does not exist" containerID="52e141647daf76f468f278365830d8a4de021205dfe34eb192a73cab648f4e9a" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.414055 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e141647daf76f468f278365830d8a4de021205dfe34eb192a73cab648f4e9a"} err="failed to get container status \"52e141647daf76f468f278365830d8a4de021205dfe34eb192a73cab648f4e9a\": rpc error: code = NotFound desc = could not find container \"52e141647daf76f468f278365830d8a4de021205dfe34eb192a73cab648f4e9a\": container with ID starting with 52e141647daf76f468f278365830d8a4de021205dfe34eb192a73cab648f4e9a not found: ID does not exist" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.487573 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aeb8bcb-4373-48d4-9ac6-e6472189e440" path="/var/lib/kubelet/pods/5aeb8bcb-4373-48d4-9ac6-e6472189e440/volumes" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.488241 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79cdc675-a16c-4c18-bcef-d844d7a2f75d" path="/var/lib/kubelet/pods/79cdc675-a16c-4c18-bcef-d844d7a2f75d/volumes" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.488844 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81790f67-278c-4b6a-82e5-ec5bb521c6ac" path="/var/lib/kubelet/pods/81790f67-278c-4b6a-82e5-ec5bb521c6ac/volumes" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.490348 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c69ff3db-8806-451a-9df0-c6289c327579" path="/var/lib/kubelet/pods/c69ff3db-8806-451a-9df0-c6289c327579/volumes" Feb 19 18:53:01 crc 
kubenswrapper[4813]: I0219 18:53:01.491194 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db22a584-f05a-41ba-ad23-387b4100a9e1" path="/var/lib/kubelet/pods/db22a584-f05a-41ba-ad23-387b4100a9e1/volumes" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.585001 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fbcd22-57c9-4c44-99e8-f9307f26a525-run-httpd\") pod \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.585395 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fbcd22-57c9-4c44-99e8-f9307f26a525-log-httpd\") pod \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.585455 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5fbcd22-57c9-4c44-99e8-f9307f26a525-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f5fbcd22-57c9-4c44-99e8-f9307f26a525" (UID: "f5fbcd22-57c9-4c44-99e8-f9307f26a525"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.585521 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-config-data\") pod \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.585560 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-ceilometer-tls-certs\") pod \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.585631 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-sg-core-conf-yaml\") pod \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.585680 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv4nh\" (UniqueName: \"kubernetes.io/projected/f5fbcd22-57c9-4c44-99e8-f9307f26a525-kube-api-access-kv4nh\") pod \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.585752 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-combined-ca-bundle\") pod \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.585802 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-scripts\") pod \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\" (UID: \"f5fbcd22-57c9-4c44-99e8-f9307f26a525\") " Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.586291 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5fbcd22-57c9-4c44-99e8-f9307f26a525-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f5fbcd22-57c9-4c44-99e8-f9307f26a525" (UID: "f5fbcd22-57c9-4c44-99e8-f9307f26a525"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.586647 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fbcd22-57c9-4c44-99e8-f9307f26a525-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.586682 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fbcd22-57c9-4c44-99e8-f9307f26a525-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.590789 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-scripts" (OuterVolumeSpecName: "scripts") pod "f5fbcd22-57c9-4c44-99e8-f9307f26a525" (UID: "f5fbcd22-57c9-4c44-99e8-f9307f26a525"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.591008 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5fbcd22-57c9-4c44-99e8-f9307f26a525-kube-api-access-kv4nh" (OuterVolumeSpecName: "kube-api-access-kv4nh") pod "f5fbcd22-57c9-4c44-99e8-f9307f26a525" (UID: "f5fbcd22-57c9-4c44-99e8-f9307f26a525"). InnerVolumeSpecName "kube-api-access-kv4nh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.607873 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f5fbcd22-57c9-4c44-99e8-f9307f26a525" (UID: "f5fbcd22-57c9-4c44-99e8-f9307f26a525"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.639099 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5fbcd22-57c9-4c44-99e8-f9307f26a525" (UID: "f5fbcd22-57c9-4c44-99e8-f9307f26a525"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.664136 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f5fbcd22-57c9-4c44-99e8-f9307f26a525" (UID: "f5fbcd22-57c9-4c44-99e8-f9307f26a525"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.686075 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-config-data" (OuterVolumeSpecName: "config-data") pod "f5fbcd22-57c9-4c44-99e8-f9307f26a525" (UID: "f5fbcd22-57c9-4c44-99e8-f9307f26a525"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.689075 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.689117 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv4nh\" (UniqueName: \"kubernetes.io/projected/f5fbcd22-57c9-4c44-99e8-f9307f26a525-kube-api-access-kv4nh\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.689132 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.689145 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.689156 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:01 crc kubenswrapper[4813]: I0219 18:53:01.689166 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5fbcd22-57c9-4c44-99e8-f9307f26a525-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:02 crc kubenswrapper[4813]: I0219 18:53:02.193791 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fbcd22-57c9-4c44-99e8-f9307f26a525","Type":"ContainerDied","Data":"14f6b4502310c7c885fea7d916adbc5f44a2936e68de5dc51865381e200b057b"} Feb 19 18:53:02 crc 
kubenswrapper[4813]: I0219 18:53:02.193808 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 18:53:02 crc kubenswrapper[4813]: I0219 18:53:02.193843 4813 scope.go:117] "RemoveContainer" containerID="56865a353e832ea3d2554a9134268053435401ed9d60bb86dd7266e7a226e156" Feb 19 18:53:02 crc kubenswrapper[4813]: I0219 18:53:02.208632 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b69a5561-31f2-4e4f-96d3-d0db19a6a51f","Type":"ContainerDied","Data":"c5a1bb754235f9deba7e9fe4d6e2837be736135482927a19fc50b52cf66e0851"} Feb 19 18:53:02 crc kubenswrapper[4813]: I0219 18:53:02.208725 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 18:53:02 crc kubenswrapper[4813]: I0219 18:53:02.243307 4813 scope.go:117] "RemoveContainer" containerID="579e9c4c32174ff288dea84c76634ebfebcf521ba3330c8727fcb1173b1e3d77" Feb 19 18:53:02 crc kubenswrapper[4813]: I0219 18:53:02.244466 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 18:53:02 crc kubenswrapper[4813]: I0219 18:53:02.258136 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 18:53:02 crc kubenswrapper[4813]: I0219 18:53:02.263560 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:53:02 crc kubenswrapper[4813]: I0219 18:53:02.264106 4813 scope.go:117] "RemoveContainer" containerID="7b5552d482368d69a980b6f9acc9f03b5afc2c61db4aac5df25c2dd3bc9b75f1" Feb 19 18:53:02 crc kubenswrapper[4813]: I0219 18:53:02.269657 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 18:53:02 crc kubenswrapper[4813]: I0219 18:53:02.287761 4813 scope.go:117] "RemoveContainer" containerID="b40ac96b08f97e5858b87830a251d48e4c373ca1dd61bf0a07e6db7e47f55c35" Feb 19 18:53:03 crc kubenswrapper[4813]: I0219 
18:53:03.485071 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b69a5561-31f2-4e4f-96d3-d0db19a6a51f" path="/var/lib/kubelet/pods/b69a5561-31f2-4e4f-96d3-d0db19a6a51f/volumes" Feb 19 18:53:03 crc kubenswrapper[4813]: I0219 18:53:03.488649 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" path="/var/lib/kubelet/pods/f5fbcd22-57c9-4c44-99e8-f9307f26a525/volumes" Feb 19 18:53:04 crc kubenswrapper[4813]: E0219 18:53:04.962839 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 18:53:04 crc kubenswrapper[4813]: E0219 18:53:04.963619 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 18:53:04 crc kubenswrapper[4813]: E0219 18:53:04.963962 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 18:53:04 crc kubenswrapper[4813]: E0219 18:53:04.964013 4813 prober.go:104] "Probe errored" err="rpc error: 
code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-w7s6z" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovsdb-server" Feb 19 18:53:04 crc kubenswrapper[4813]: E0219 18:53:04.964066 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 18:53:04 crc kubenswrapper[4813]: E0219 18:53:04.965662 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 18:53:04 crc kubenswrapper[4813]: E0219 18:53:04.967427 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 18:53:04 crc kubenswrapper[4813]: E0219 18:53:04.967466 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-w7s6z" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovs-vswitchd" Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.260934 4813 generic.go:334] 
"Generic (PLEG): container finished" podID="6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" containerID="8d8d3b2b19279b349f178ba59ce9a2b30004d64b0f858bebde8ea20429c2ad81" exitCode=0 Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.260990 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79884964f7-nvxp2" event={"ID":"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd","Type":"ContainerDied","Data":"8d8d3b2b19279b349f178ba59ce9a2b30004d64b0f858bebde8ea20429c2ad81"} Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.556102 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.680062 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-config\") pod \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.680109 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-combined-ca-bundle\") pod \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.680148 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-httpd-config\") pod \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.680211 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdk62\" (UniqueName: \"kubernetes.io/projected/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-kube-api-access-kdk62\") pod 
\"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.680244 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-internal-tls-certs\") pod \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.680274 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-ovndb-tls-certs\") pod \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.680346 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-public-tls-certs\") pod \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\" (UID: \"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd\") " Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.690559 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" (UID: "6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.691243 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-kube-api-access-kdk62" (OuterVolumeSpecName: "kube-api-access-kdk62") pod "6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" (UID: "6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd"). InnerVolumeSpecName "kube-api-access-kdk62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.723880 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-config" (OuterVolumeSpecName: "config") pod "6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" (UID: "6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.741438 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" (UID: "6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.747030 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" (UID: "6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.752424 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" (UID: "6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.753247 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" (UID: "6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.782661 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.782713 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.782732 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.782751 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.782769 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdk62\" (UniqueName: \"kubernetes.io/projected/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-kube-api-access-kdk62\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.782789 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:07 crc kubenswrapper[4813]: I0219 18:53:07.782806 4813 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:08 crc kubenswrapper[4813]: I0219 18:53:08.275409 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79884964f7-nvxp2" event={"ID":"6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd","Type":"ContainerDied","Data":"8da735698479a4a63037188d407db2f3e744a3d19718c2c1b9b54bcc5cc8aefc"} Feb 19 18:53:08 crc kubenswrapper[4813]: I0219 18:53:08.275464 4813 scope.go:117] "RemoveContainer" containerID="798087b6274eb1a02113a606fd85310be018115d9f4e8a89578ca60d75da110c" Feb 19 18:53:08 crc kubenswrapper[4813]: I0219 18:53:08.275486 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79884964f7-nvxp2" Feb 19 18:53:08 crc kubenswrapper[4813]: I0219 18:53:08.306312 4813 scope.go:117] "RemoveContainer" containerID="8d8d3b2b19279b349f178ba59ce9a2b30004d64b0f858bebde8ea20429c2ad81" Feb 19 18:53:08 crc kubenswrapper[4813]: I0219 18:53:08.352006 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-79884964f7-nvxp2"] Feb 19 18:53:08 crc kubenswrapper[4813]: I0219 18:53:08.365695 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-79884964f7-nvxp2"] Feb 19 18:53:09 crc kubenswrapper[4813]: I0219 18:53:09.493097 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" path="/var/lib/kubelet/pods/6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd/volumes" Feb 19 18:53:09 crc kubenswrapper[4813]: E0219 18:53:09.963722 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 18:53:09 crc kubenswrapper[4813]: E0219 18:53:09.964591 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 18:53:09 crc kubenswrapper[4813]: E0219 18:53:09.965154 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 18:53:09 crc kubenswrapper[4813]: E0219 18:53:09.965232 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-w7s6z" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovsdb-server" Feb 19 18:53:09 crc kubenswrapper[4813]: E0219 18:53:09.965337 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 18:53:09 crc kubenswrapper[4813]: E0219 18:53:09.966993 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 18:53:09 crc kubenswrapper[4813]: E0219 18:53:09.970164 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 18:53:09 crc kubenswrapper[4813]: E0219 18:53:09.970227 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-w7s6z" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovs-vswitchd" Feb 19 18:53:14 crc kubenswrapper[4813]: E0219 18:53:14.966852 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 18:53:14 crc kubenswrapper[4813]: E0219 18:53:14.969194 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec 
PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 18:53:14 crc kubenswrapper[4813]: E0219 18:53:14.969352 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 18:53:14 crc kubenswrapper[4813]: E0219 18:53:14.969801 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 18:53:14 crc kubenswrapper[4813]: E0219 18:53:14.969859 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-w7s6z" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovsdb-server" Feb 19 18:53:14 crc kubenswrapper[4813]: E0219 18:53:14.971688 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 18:53:14 crc kubenswrapper[4813]: E0219 18:53:14.973778 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 18:53:14 crc kubenswrapper[4813]: E0219 18:53:14.973831 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-w7s6z" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovs-vswitchd" Feb 19 18:53:19 crc kubenswrapper[4813]: E0219 18:53:19.962272 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 18:53:19 crc kubenswrapper[4813]: E0219 18:53:19.963688 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 18:53:19 crc kubenswrapper[4813]: E0219 18:53:19.964414 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 19 18:53:19 crc kubenswrapper[4813]: E0219 18:53:19.964515 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-w7s6z" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovsdb-server" Feb 19 18:53:19 crc kubenswrapper[4813]: E0219 18:53:19.965884 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 18:53:19 crc kubenswrapper[4813]: E0219 18:53:19.968219 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 18:53:19 crc kubenswrapper[4813]: E0219 18:53:19.970335 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 19 18:53:19 crc kubenswrapper[4813]: E0219 18:53:19.970413 4813 
prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-w7s6z" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovs-vswitchd" Feb 19 18:53:21 crc kubenswrapper[4813]: I0219 18:53:21.934902 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w7s6z_096a88db-91ee-4ac3-b5ae-ba4bca838436/ovs-vswitchd/0.log" Feb 19 18:53:21 crc kubenswrapper[4813]: I0219 18:53:21.936222 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.116192 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096a88db-91ee-4ac3-b5ae-ba4bca838436-scripts\") pod \"096a88db-91ee-4ac3-b5ae-ba4bca838436\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.116237 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4zc7\" (UniqueName: \"kubernetes.io/projected/096a88db-91ee-4ac3-b5ae-ba4bca838436-kube-api-access-p4zc7\") pod \"096a88db-91ee-4ac3-b5ae-ba4bca838436\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.116284 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-lib\") pod \"096a88db-91ee-4ac3-b5ae-ba4bca838436\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.116314 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-run\") pod \"096a88db-91ee-4ac3-b5ae-ba4bca838436\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.116390 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-etc-ovs\") pod \"096a88db-91ee-4ac3-b5ae-ba4bca838436\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.116423 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-lib" (OuterVolumeSpecName: "var-lib") pod "096a88db-91ee-4ac3-b5ae-ba4bca838436" (UID: "096a88db-91ee-4ac3-b5ae-ba4bca838436"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.116453 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-run" (OuterVolumeSpecName: "var-run") pod "096a88db-91ee-4ac3-b5ae-ba4bca838436" (UID: "096a88db-91ee-4ac3-b5ae-ba4bca838436"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.116501 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-log" (OuterVolumeSpecName: "var-log") pod "096a88db-91ee-4ac3-b5ae-ba4bca838436" (UID: "096a88db-91ee-4ac3-b5ae-ba4bca838436"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.116480 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-log\") pod \"096a88db-91ee-4ac3-b5ae-ba4bca838436\" (UID: \"096a88db-91ee-4ac3-b5ae-ba4bca838436\") " Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.116602 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "096a88db-91ee-4ac3-b5ae-ba4bca838436" (UID: "096a88db-91ee-4ac3-b5ae-ba4bca838436"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.116842 4813 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-lib\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.116860 4813 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.116872 4813 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-etc-ovs\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.116882 4813 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/096a88db-91ee-4ac3-b5ae-ba4bca838436-var-log\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.118427 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/096a88db-91ee-4ac3-b5ae-ba4bca838436-scripts" (OuterVolumeSpecName: "scripts") pod "096a88db-91ee-4ac3-b5ae-ba4bca838436" (UID: "096a88db-91ee-4ac3-b5ae-ba4bca838436"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.125290 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/096a88db-91ee-4ac3-b5ae-ba4bca838436-kube-api-access-p4zc7" (OuterVolumeSpecName: "kube-api-access-p4zc7") pod "096a88db-91ee-4ac3-b5ae-ba4bca838436" (UID: "096a88db-91ee-4ac3-b5ae-ba4bca838436"). InnerVolumeSpecName "kube-api-access-p4zc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.217899 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096a88db-91ee-4ac3-b5ae-ba4bca838436-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.217971 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4zc7\" (UniqueName: \"kubernetes.io/projected/096a88db-91ee-4ac3-b5ae-ba4bca838436-kube-api-access-p4zc7\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.326851 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.426741 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerID="7fd9247f2c24fe230ceea2d4153e592d608389bd4aeb769e1c50f8d5b7ce6c1f" exitCode=137 Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.427068 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"7fd9247f2c24fe230ceea2d4153e592d608389bd4aeb769e1c50f8d5b7ce6c1f"} Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.427279 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1cb5586f-0789-4095-84e2-32c8c41984c1","Type":"ContainerDied","Data":"2e074525ec469851fd47bd99cac462da007fe02071208461f5dd1b693d3bec3d"} Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.427384 4813 scope.go:117] "RemoveContainer" containerID="7fd9247f2c24fe230ceea2d4153e592d608389bd4aeb769e1c50f8d5b7ce6c1f" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.427172 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.428989 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w7s6z_096a88db-91ee-4ac3-b5ae-ba4bca838436/ovs-vswitchd/0.log" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.432211 4813 generic.go:334] "Generic (PLEG): container finished" podID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" exitCode=137 Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.432310 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-w7s6z" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.432318 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w7s6z" event={"ID":"096a88db-91ee-4ac3-b5ae-ba4bca838436","Type":"ContainerDied","Data":"e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514"} Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.432591 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w7s6z" event={"ID":"096a88db-91ee-4ac3-b5ae-ba4bca838436","Type":"ContainerDied","Data":"aa5128dd31514943a1a0eadd5beb7c9c3b15220b65329c8a7044df2499706f31"} Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.463103 4813 scope.go:117] "RemoveContainer" containerID="c1d32779936dc64abfb1211d92a86919ed65d162961f758373172e5d8896b226" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.506355 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-w7s6z"] Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.516687 4813 scope.go:117] "RemoveContainer" containerID="3463fd0ce80861bf450484b04bd0d12e36c7e89f7201fbf18492f7f3c1961d7e" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.519675 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-w7s6z"] Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.520116 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"1cb5586f-0789-4095-84e2-32c8c41984c1\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.520177 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1cb5586f-0789-4095-84e2-32c8c41984c1-lock\") pod \"1cb5586f-0789-4095-84e2-32c8c41984c1\" (UID: 
\"1cb5586f-0789-4095-84e2-32c8c41984c1\") " Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.520265 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1cb5586f-0789-4095-84e2-32c8c41984c1-cache\") pod \"1cb5586f-0789-4095-84e2-32c8c41984c1\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.520296 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift\") pod \"1cb5586f-0789-4095-84e2-32c8c41984c1\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.520342 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb5586f-0789-4095-84e2-32c8c41984c1-combined-ca-bundle\") pod \"1cb5586f-0789-4095-84e2-32c8c41984c1\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.520373 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxkw8\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-kube-api-access-pxkw8\") pod \"1cb5586f-0789-4095-84e2-32c8c41984c1\" (UID: \"1cb5586f-0789-4095-84e2-32c8c41984c1\") " Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.520685 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cb5586f-0789-4095-84e2-32c8c41984c1-lock" (OuterVolumeSpecName: "lock") pod "1cb5586f-0789-4095-84e2-32c8c41984c1" (UID: "1cb5586f-0789-4095-84e2-32c8c41984c1"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.521447 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cb5586f-0789-4095-84e2-32c8c41984c1-cache" (OuterVolumeSpecName: "cache") pod "1cb5586f-0789-4095-84e2-32c8c41984c1" (UID: "1cb5586f-0789-4095-84e2-32c8c41984c1"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.523615 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1cb5586f-0789-4095-84e2-32c8c41984c1" (UID: "1cb5586f-0789-4095-84e2-32c8c41984c1"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.525099 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "swift") pod "1cb5586f-0789-4095-84e2-32c8c41984c1" (UID: "1cb5586f-0789-4095-84e2-32c8c41984c1"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.527291 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-kube-api-access-pxkw8" (OuterVolumeSpecName: "kube-api-access-pxkw8") pod "1cb5586f-0789-4095-84e2-32c8c41984c1" (UID: "1cb5586f-0789-4095-84e2-32c8c41984c1"). InnerVolumeSpecName "kube-api-access-pxkw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.621860 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.621898 4813 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1cb5586f-0789-4095-84e2-32c8c41984c1-lock\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.621910 4813 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1cb5586f-0789-4095-84e2-32c8c41984c1-cache\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.621922 4813 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.621934 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxkw8\" (UniqueName: \"kubernetes.io/projected/1cb5586f-0789-4095-84e2-32c8c41984c1-kube-api-access-pxkw8\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.637820 4813 scope.go:117] "RemoveContainer" containerID="63ed63fc4060e5e77bda2fcedba7a9038be456080560f0972c1162295667a995" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.641786 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.659741 4813 scope.go:117] "RemoveContainer" containerID="ecdd61b7c76fea90d5030cd83e27eede68a2613e82c87d74bf1b52eae40ede4c" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.680530 
4813 scope.go:117] "RemoveContainer" containerID="b247b4f4fb5c13c744d6518cac9721377f2279093c96dcef4c1fb84f402a96fe" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.703488 4813 scope.go:117] "RemoveContainer" containerID="fe2ab5db7a8282b411ff6c2df1477d67ccefc71efdf7f49af3550ca28e6ca30f" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.722706 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.724008 4813 scope.go:117] "RemoveContainer" containerID="5e213ff63ffa13d3db8cd64214ffc25c184324e7bcb422854cb7d5544e995e3a" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.757770 4813 scope.go:117] "RemoveContainer" containerID="3f73192612ae69b540b4da233b53fc0e0246e8770b1168eaa09e2818d0475b0b" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.788727 4813 scope.go:117] "RemoveContainer" containerID="8a3df88ea660009faf21ceee32e4bf9966f8ba5d8d06da52cc8eed2e8281585b" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.815793 4813 scope.go:117] "RemoveContainer" containerID="8e7a00f1985307d87ed43318441bfb1dd483cd2506f6ccb527a5a1f992665d68" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.826696 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb5586f-0789-4095-84e2-32c8c41984c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cb5586f-0789-4095-84e2-32c8c41984c1" (UID: "1cb5586f-0789-4095-84e2-32c8c41984c1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.841343 4813 scope.go:117] "RemoveContainer" containerID="c810d20c3f622957076f0b197785575897ec5b3aab666c3929ba959f136bbac7" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.861870 4813 scope.go:117] "RemoveContainer" containerID="173ce911f070f154fad4553f907b72434be2bbf4d9cc33dc8c7d8f6176342729" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.887367 4813 scope.go:117] "RemoveContainer" containerID="733548779f14379f65bf0cb205274b2eac4a5ab44281cf9b72a9a0a27ccb488b" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.911701 4813 scope.go:117] "RemoveContainer" containerID="bbf92d00811dd2fb35c3895f4c863c12eb1772bf8d221d1a27c08caefbd5bf0f" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.925661 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb5586f-0789-4095-84e2-32c8c41984c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.939023 4813 scope.go:117] "RemoveContainer" containerID="7fd9247f2c24fe230ceea2d4153e592d608389bd4aeb769e1c50f8d5b7ce6c1f" Feb 19 18:53:22 crc kubenswrapper[4813]: E0219 18:53:22.939517 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fd9247f2c24fe230ceea2d4153e592d608389bd4aeb769e1c50f8d5b7ce6c1f\": container with ID starting with 7fd9247f2c24fe230ceea2d4153e592d608389bd4aeb769e1c50f8d5b7ce6c1f not found: ID does not exist" containerID="7fd9247f2c24fe230ceea2d4153e592d608389bd4aeb769e1c50f8d5b7ce6c1f" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.939548 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fd9247f2c24fe230ceea2d4153e592d608389bd4aeb769e1c50f8d5b7ce6c1f"} err="failed to get container status 
\"7fd9247f2c24fe230ceea2d4153e592d608389bd4aeb769e1c50f8d5b7ce6c1f\": rpc error: code = NotFound desc = could not find container \"7fd9247f2c24fe230ceea2d4153e592d608389bd4aeb769e1c50f8d5b7ce6c1f\": container with ID starting with 7fd9247f2c24fe230ceea2d4153e592d608389bd4aeb769e1c50f8d5b7ce6c1f not found: ID does not exist" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.939568 4813 scope.go:117] "RemoveContainer" containerID="c1d32779936dc64abfb1211d92a86919ed65d162961f758373172e5d8896b226" Feb 19 18:53:22 crc kubenswrapper[4813]: E0219 18:53:22.940058 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1d32779936dc64abfb1211d92a86919ed65d162961f758373172e5d8896b226\": container with ID starting with c1d32779936dc64abfb1211d92a86919ed65d162961f758373172e5d8896b226 not found: ID does not exist" containerID="c1d32779936dc64abfb1211d92a86919ed65d162961f758373172e5d8896b226" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.940098 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d32779936dc64abfb1211d92a86919ed65d162961f758373172e5d8896b226"} err="failed to get container status \"c1d32779936dc64abfb1211d92a86919ed65d162961f758373172e5d8896b226\": rpc error: code = NotFound desc = could not find container \"c1d32779936dc64abfb1211d92a86919ed65d162961f758373172e5d8896b226\": container with ID starting with c1d32779936dc64abfb1211d92a86919ed65d162961f758373172e5d8896b226 not found: ID does not exist" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.940126 4813 scope.go:117] "RemoveContainer" containerID="3463fd0ce80861bf450484b04bd0d12e36c7e89f7201fbf18492f7f3c1961d7e" Feb 19 18:53:22 crc kubenswrapper[4813]: E0219 18:53:22.940629 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3463fd0ce80861bf450484b04bd0d12e36c7e89f7201fbf18492f7f3c1961d7e\": container with ID starting with 3463fd0ce80861bf450484b04bd0d12e36c7e89f7201fbf18492f7f3c1961d7e not found: ID does not exist" containerID="3463fd0ce80861bf450484b04bd0d12e36c7e89f7201fbf18492f7f3c1961d7e" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.940649 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3463fd0ce80861bf450484b04bd0d12e36c7e89f7201fbf18492f7f3c1961d7e"} err="failed to get container status \"3463fd0ce80861bf450484b04bd0d12e36c7e89f7201fbf18492f7f3c1961d7e\": rpc error: code = NotFound desc = could not find container \"3463fd0ce80861bf450484b04bd0d12e36c7e89f7201fbf18492f7f3c1961d7e\": container with ID starting with 3463fd0ce80861bf450484b04bd0d12e36c7e89f7201fbf18492f7f3c1961d7e not found: ID does not exist" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.940662 4813 scope.go:117] "RemoveContainer" containerID="63ed63fc4060e5e77bda2fcedba7a9038be456080560f0972c1162295667a995" Feb 19 18:53:22 crc kubenswrapper[4813]: E0219 18:53:22.940974 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63ed63fc4060e5e77bda2fcedba7a9038be456080560f0972c1162295667a995\": container with ID starting with 63ed63fc4060e5e77bda2fcedba7a9038be456080560f0972c1162295667a995 not found: ID does not exist" containerID="63ed63fc4060e5e77bda2fcedba7a9038be456080560f0972c1162295667a995" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.941002 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63ed63fc4060e5e77bda2fcedba7a9038be456080560f0972c1162295667a995"} err="failed to get container status \"63ed63fc4060e5e77bda2fcedba7a9038be456080560f0972c1162295667a995\": rpc error: code = NotFound desc = could not find container \"63ed63fc4060e5e77bda2fcedba7a9038be456080560f0972c1162295667a995\": container with ID 
starting with 63ed63fc4060e5e77bda2fcedba7a9038be456080560f0972c1162295667a995 not found: ID does not exist" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.941021 4813 scope.go:117] "RemoveContainer" containerID="ecdd61b7c76fea90d5030cd83e27eede68a2613e82c87d74bf1b52eae40ede4c" Feb 19 18:53:22 crc kubenswrapper[4813]: E0219 18:53:22.941327 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecdd61b7c76fea90d5030cd83e27eede68a2613e82c87d74bf1b52eae40ede4c\": container with ID starting with ecdd61b7c76fea90d5030cd83e27eede68a2613e82c87d74bf1b52eae40ede4c not found: ID does not exist" containerID="ecdd61b7c76fea90d5030cd83e27eede68a2613e82c87d74bf1b52eae40ede4c" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.941354 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecdd61b7c76fea90d5030cd83e27eede68a2613e82c87d74bf1b52eae40ede4c"} err="failed to get container status \"ecdd61b7c76fea90d5030cd83e27eede68a2613e82c87d74bf1b52eae40ede4c\": rpc error: code = NotFound desc = could not find container \"ecdd61b7c76fea90d5030cd83e27eede68a2613e82c87d74bf1b52eae40ede4c\": container with ID starting with ecdd61b7c76fea90d5030cd83e27eede68a2613e82c87d74bf1b52eae40ede4c not found: ID does not exist" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.941371 4813 scope.go:117] "RemoveContainer" containerID="b247b4f4fb5c13c744d6518cac9721377f2279093c96dcef4c1fb84f402a96fe" Feb 19 18:53:22 crc kubenswrapper[4813]: E0219 18:53:22.941640 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b247b4f4fb5c13c744d6518cac9721377f2279093c96dcef4c1fb84f402a96fe\": container with ID starting with b247b4f4fb5c13c744d6518cac9721377f2279093c96dcef4c1fb84f402a96fe not found: ID does not exist" containerID="b247b4f4fb5c13c744d6518cac9721377f2279093c96dcef4c1fb84f402a96fe" Feb 19 
18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.941665 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b247b4f4fb5c13c744d6518cac9721377f2279093c96dcef4c1fb84f402a96fe"} err="failed to get container status \"b247b4f4fb5c13c744d6518cac9721377f2279093c96dcef4c1fb84f402a96fe\": rpc error: code = NotFound desc = could not find container \"b247b4f4fb5c13c744d6518cac9721377f2279093c96dcef4c1fb84f402a96fe\": container with ID starting with b247b4f4fb5c13c744d6518cac9721377f2279093c96dcef4c1fb84f402a96fe not found: ID does not exist" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.941679 4813 scope.go:117] "RemoveContainer" containerID="fe2ab5db7a8282b411ff6c2df1477d67ccefc71efdf7f49af3550ca28e6ca30f" Feb 19 18:53:22 crc kubenswrapper[4813]: E0219 18:53:22.941941 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe2ab5db7a8282b411ff6c2df1477d67ccefc71efdf7f49af3550ca28e6ca30f\": container with ID starting with fe2ab5db7a8282b411ff6c2df1477d67ccefc71efdf7f49af3550ca28e6ca30f not found: ID does not exist" containerID="fe2ab5db7a8282b411ff6c2df1477d67ccefc71efdf7f49af3550ca28e6ca30f" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.941982 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe2ab5db7a8282b411ff6c2df1477d67ccefc71efdf7f49af3550ca28e6ca30f"} err="failed to get container status \"fe2ab5db7a8282b411ff6c2df1477d67ccefc71efdf7f49af3550ca28e6ca30f\": rpc error: code = NotFound desc = could not find container \"fe2ab5db7a8282b411ff6c2df1477d67ccefc71efdf7f49af3550ca28e6ca30f\": container with ID starting with fe2ab5db7a8282b411ff6c2df1477d67ccefc71efdf7f49af3550ca28e6ca30f not found: ID does not exist" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.942004 4813 scope.go:117] "RemoveContainer" 
containerID="5e213ff63ffa13d3db8cd64214ffc25c184324e7bcb422854cb7d5544e995e3a" Feb 19 18:53:22 crc kubenswrapper[4813]: E0219 18:53:22.942601 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e213ff63ffa13d3db8cd64214ffc25c184324e7bcb422854cb7d5544e995e3a\": container with ID starting with 5e213ff63ffa13d3db8cd64214ffc25c184324e7bcb422854cb7d5544e995e3a not found: ID does not exist" containerID="5e213ff63ffa13d3db8cd64214ffc25c184324e7bcb422854cb7d5544e995e3a" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.942646 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e213ff63ffa13d3db8cd64214ffc25c184324e7bcb422854cb7d5544e995e3a"} err="failed to get container status \"5e213ff63ffa13d3db8cd64214ffc25c184324e7bcb422854cb7d5544e995e3a\": rpc error: code = NotFound desc = could not find container \"5e213ff63ffa13d3db8cd64214ffc25c184324e7bcb422854cb7d5544e995e3a\": container with ID starting with 5e213ff63ffa13d3db8cd64214ffc25c184324e7bcb422854cb7d5544e995e3a not found: ID does not exist" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.942664 4813 scope.go:117] "RemoveContainer" containerID="3f73192612ae69b540b4da233b53fc0e0246e8770b1168eaa09e2818d0475b0b" Feb 19 18:53:22 crc kubenswrapper[4813]: E0219 18:53:22.942997 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f73192612ae69b540b4da233b53fc0e0246e8770b1168eaa09e2818d0475b0b\": container with ID starting with 3f73192612ae69b540b4da233b53fc0e0246e8770b1168eaa09e2818d0475b0b not found: ID does not exist" containerID="3f73192612ae69b540b4da233b53fc0e0246e8770b1168eaa09e2818d0475b0b" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.943027 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3f73192612ae69b540b4da233b53fc0e0246e8770b1168eaa09e2818d0475b0b"} err="failed to get container status \"3f73192612ae69b540b4da233b53fc0e0246e8770b1168eaa09e2818d0475b0b\": rpc error: code = NotFound desc = could not find container \"3f73192612ae69b540b4da233b53fc0e0246e8770b1168eaa09e2818d0475b0b\": container with ID starting with 3f73192612ae69b540b4da233b53fc0e0246e8770b1168eaa09e2818d0475b0b not found: ID does not exist" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.943041 4813 scope.go:117] "RemoveContainer" containerID="8a3df88ea660009faf21ceee32e4bf9966f8ba5d8d06da52cc8eed2e8281585b" Feb 19 18:53:22 crc kubenswrapper[4813]: E0219 18:53:22.943456 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a3df88ea660009faf21ceee32e4bf9966f8ba5d8d06da52cc8eed2e8281585b\": container with ID starting with 8a3df88ea660009faf21ceee32e4bf9966f8ba5d8d06da52cc8eed2e8281585b not found: ID does not exist" containerID="8a3df88ea660009faf21ceee32e4bf9966f8ba5d8d06da52cc8eed2e8281585b" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.943480 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a3df88ea660009faf21ceee32e4bf9966f8ba5d8d06da52cc8eed2e8281585b"} err="failed to get container status \"8a3df88ea660009faf21ceee32e4bf9966f8ba5d8d06da52cc8eed2e8281585b\": rpc error: code = NotFound desc = could not find container \"8a3df88ea660009faf21ceee32e4bf9966f8ba5d8d06da52cc8eed2e8281585b\": container with ID starting with 8a3df88ea660009faf21ceee32e4bf9966f8ba5d8d06da52cc8eed2e8281585b not found: ID does not exist" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.943493 4813 scope.go:117] "RemoveContainer" containerID="8e7a00f1985307d87ed43318441bfb1dd483cd2506f6ccb527a5a1f992665d68" Feb 19 18:53:22 crc kubenswrapper[4813]: E0219 18:53:22.943794 4813 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8e7a00f1985307d87ed43318441bfb1dd483cd2506f6ccb527a5a1f992665d68\": container with ID starting with 8e7a00f1985307d87ed43318441bfb1dd483cd2506f6ccb527a5a1f992665d68 not found: ID does not exist" containerID="8e7a00f1985307d87ed43318441bfb1dd483cd2506f6ccb527a5a1f992665d68" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.943822 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e7a00f1985307d87ed43318441bfb1dd483cd2506f6ccb527a5a1f992665d68"} err="failed to get container status \"8e7a00f1985307d87ed43318441bfb1dd483cd2506f6ccb527a5a1f992665d68\": rpc error: code = NotFound desc = could not find container \"8e7a00f1985307d87ed43318441bfb1dd483cd2506f6ccb527a5a1f992665d68\": container with ID starting with 8e7a00f1985307d87ed43318441bfb1dd483cd2506f6ccb527a5a1f992665d68 not found: ID does not exist" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.943864 4813 scope.go:117] "RemoveContainer" containerID="c810d20c3f622957076f0b197785575897ec5b3aab666c3929ba959f136bbac7" Feb 19 18:53:22 crc kubenswrapper[4813]: E0219 18:53:22.944253 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c810d20c3f622957076f0b197785575897ec5b3aab666c3929ba959f136bbac7\": container with ID starting with c810d20c3f622957076f0b197785575897ec5b3aab666c3929ba959f136bbac7 not found: ID does not exist" containerID="c810d20c3f622957076f0b197785575897ec5b3aab666c3929ba959f136bbac7" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.944309 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c810d20c3f622957076f0b197785575897ec5b3aab666c3929ba959f136bbac7"} err="failed to get container status \"c810d20c3f622957076f0b197785575897ec5b3aab666c3929ba959f136bbac7\": rpc error: code = NotFound desc = could not find container 
\"c810d20c3f622957076f0b197785575897ec5b3aab666c3929ba959f136bbac7\": container with ID starting with c810d20c3f622957076f0b197785575897ec5b3aab666c3929ba959f136bbac7 not found: ID does not exist" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.944333 4813 scope.go:117] "RemoveContainer" containerID="173ce911f070f154fad4553f907b72434be2bbf4d9cc33dc8c7d8f6176342729" Feb 19 18:53:22 crc kubenswrapper[4813]: E0219 18:53:22.944713 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"173ce911f070f154fad4553f907b72434be2bbf4d9cc33dc8c7d8f6176342729\": container with ID starting with 173ce911f070f154fad4553f907b72434be2bbf4d9cc33dc8c7d8f6176342729 not found: ID does not exist" containerID="173ce911f070f154fad4553f907b72434be2bbf4d9cc33dc8c7d8f6176342729" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.944809 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"173ce911f070f154fad4553f907b72434be2bbf4d9cc33dc8c7d8f6176342729"} err="failed to get container status \"173ce911f070f154fad4553f907b72434be2bbf4d9cc33dc8c7d8f6176342729\": rpc error: code = NotFound desc = could not find container \"173ce911f070f154fad4553f907b72434be2bbf4d9cc33dc8c7d8f6176342729\": container with ID starting with 173ce911f070f154fad4553f907b72434be2bbf4d9cc33dc8c7d8f6176342729 not found: ID does not exist" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.944885 4813 scope.go:117] "RemoveContainer" containerID="733548779f14379f65bf0cb205274b2eac4a5ab44281cf9b72a9a0a27ccb488b" Feb 19 18:53:22 crc kubenswrapper[4813]: E0219 18:53:22.945297 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"733548779f14379f65bf0cb205274b2eac4a5ab44281cf9b72a9a0a27ccb488b\": container with ID starting with 733548779f14379f65bf0cb205274b2eac4a5ab44281cf9b72a9a0a27ccb488b not found: ID does not exist" 
containerID="733548779f14379f65bf0cb205274b2eac4a5ab44281cf9b72a9a0a27ccb488b" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.945325 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733548779f14379f65bf0cb205274b2eac4a5ab44281cf9b72a9a0a27ccb488b"} err="failed to get container status \"733548779f14379f65bf0cb205274b2eac4a5ab44281cf9b72a9a0a27ccb488b\": rpc error: code = NotFound desc = could not find container \"733548779f14379f65bf0cb205274b2eac4a5ab44281cf9b72a9a0a27ccb488b\": container with ID starting with 733548779f14379f65bf0cb205274b2eac4a5ab44281cf9b72a9a0a27ccb488b not found: ID does not exist" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.945342 4813 scope.go:117] "RemoveContainer" containerID="bbf92d00811dd2fb35c3895f4c863c12eb1772bf8d221d1a27c08caefbd5bf0f" Feb 19 18:53:22 crc kubenswrapper[4813]: E0219 18:53:22.945621 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf92d00811dd2fb35c3895f4c863c12eb1772bf8d221d1a27c08caefbd5bf0f\": container with ID starting with bbf92d00811dd2fb35c3895f4c863c12eb1772bf8d221d1a27c08caefbd5bf0f not found: ID does not exist" containerID="bbf92d00811dd2fb35c3895f4c863c12eb1772bf8d221d1a27c08caefbd5bf0f" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.945648 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf92d00811dd2fb35c3895f4c863c12eb1772bf8d221d1a27c08caefbd5bf0f"} err="failed to get container status \"bbf92d00811dd2fb35c3895f4c863c12eb1772bf8d221d1a27c08caefbd5bf0f\": rpc error: code = NotFound desc = could not find container \"bbf92d00811dd2fb35c3895f4c863c12eb1772bf8d221d1a27c08caefbd5bf0f\": container with ID starting with bbf92d00811dd2fb35c3895f4c863c12eb1772bf8d221d1a27c08caefbd5bf0f not found: ID does not exist" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.945664 4813 scope.go:117] 
"RemoveContainer" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" Feb 19 18:53:22 crc kubenswrapper[4813]: I0219 18:53:22.974381 4813 scope.go:117] "RemoveContainer" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" Feb 19 18:53:23 crc kubenswrapper[4813]: I0219 18:53:23.012572 4813 scope.go:117] "RemoveContainer" containerID="3ba0919aa460d3ab19ef502460953e6f6c2685b69618fe7479aaba3aaff111c3" Feb 19 18:53:23 crc kubenswrapper[4813]: I0219 18:53:23.061263 4813 scope.go:117] "RemoveContainer" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" Feb 19 18:53:23 crc kubenswrapper[4813]: E0219 18:53:23.062381 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514\": container with ID starting with e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514 not found: ID does not exist" containerID="e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514" Feb 19 18:53:23 crc kubenswrapper[4813]: I0219 18:53:23.062443 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514"} err="failed to get container status \"e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514\": rpc error: code = NotFound desc = could not find container \"e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514\": container with ID starting with e06b8d6895c1ae07d8bfb4358319aa314b17cc2acab940e6baa96f82402c1514 not found: ID does not exist" Feb 19 18:53:23 crc kubenswrapper[4813]: I0219 18:53:23.062490 4813 scope.go:117] "RemoveContainer" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" Feb 19 18:53:23 crc kubenswrapper[4813]: E0219 18:53:23.064891 4813 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f\": container with ID starting with fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f not found: ID does not exist" containerID="fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f" Feb 19 18:53:23 crc kubenswrapper[4813]: I0219 18:53:23.064935 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f"} err="failed to get container status \"fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f\": rpc error: code = NotFound desc = could not find container \"fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f\": container with ID starting with fa363048760a37417b36d93a6b7c7436ba2ba02b903ff26bcb4c7f22d5b4f33f not found: ID does not exist" Feb 19 18:53:23 crc kubenswrapper[4813]: I0219 18:53:23.064979 4813 scope.go:117] "RemoveContainer" containerID="3ba0919aa460d3ab19ef502460953e6f6c2685b69618fe7479aaba3aaff111c3" Feb 19 18:53:23 crc kubenswrapper[4813]: E0219 18:53:23.065525 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ba0919aa460d3ab19ef502460953e6f6c2685b69618fe7479aaba3aaff111c3\": container with ID starting with 3ba0919aa460d3ab19ef502460953e6f6c2685b69618fe7479aaba3aaff111c3 not found: ID does not exist" containerID="3ba0919aa460d3ab19ef502460953e6f6c2685b69618fe7479aaba3aaff111c3" Feb 19 18:53:23 crc kubenswrapper[4813]: I0219 18:53:23.065589 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ba0919aa460d3ab19ef502460953e6f6c2685b69618fe7479aaba3aaff111c3"} err="failed to get container status \"3ba0919aa460d3ab19ef502460953e6f6c2685b69618fe7479aaba3aaff111c3\": rpc error: code = NotFound desc = could not find container 
\"3ba0919aa460d3ab19ef502460953e6f6c2685b69618fe7479aaba3aaff111c3\": container with ID starting with 3ba0919aa460d3ab19ef502460953e6f6c2685b69618fe7479aaba3aaff111c3 not found: ID does not exist" Feb 19 18:53:23 crc kubenswrapper[4813]: I0219 18:53:23.105715 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 19 18:53:23 crc kubenswrapper[4813]: I0219 18:53:23.112799 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Feb 19 18:53:23 crc kubenswrapper[4813]: I0219 18:53:23.479489 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" path="/var/lib/kubelet/pods/096a88db-91ee-4ac3-b5ae-ba4bca838436/volumes" Feb 19 18:53:23 crc kubenswrapper[4813]: I0219 18:53:23.480237 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" path="/var/lib/kubelet/pods/1cb5586f-0789-4095-84e2-32c8c41984c1/volumes" Feb 19 18:54:48 crc kubenswrapper[4813]: I0219 18:54:48.519714 4813 scope.go:117] "RemoveContainer" containerID="5a26d45d99f9e598d77ad7b692c685032d9b5071227135e375f8e4485081b1a2" Feb 19 18:54:48 crc kubenswrapper[4813]: I0219 18:54:48.558119 4813 scope.go:117] "RemoveContainer" containerID="c13b5d5563830f3fd2e991b9ee4b32fb0fc2bf40bb76201f1e810377d822337c" Feb 19 18:54:48 crc kubenswrapper[4813]: I0219 18:54:48.606121 4813 scope.go:117] "RemoveContainer" containerID="a83bcfd4da6cf2a51f8d0ca5581bc152a4dc583f7a672ddd92d87f20e8c77394" Feb 19 18:54:48 crc kubenswrapper[4813]: I0219 18:54:48.640835 4813 scope.go:117] "RemoveContainer" containerID="26108ad72f63c574475732f4021cf708dc6d1c9417fc66ddbb7ad7422cd9d5a4" Feb 19 18:54:48 crc kubenswrapper[4813]: I0219 18:54:48.691158 4813 scope.go:117] "RemoveContainer" containerID="a19eca13cf4bbec8b7d1a7200d049c7f5ef1d7da3e57d75afa6035f101b45e80" Feb 19 18:54:48 crc kubenswrapper[4813]: I0219 18:54:48.716281 4813 scope.go:117] "RemoveContainer" 
containerID="f64515cccc17d57ae69306575f2fa6522cd0ff0492c66f7f7a0de9e97bbbf6de" Feb 19 18:54:48 crc kubenswrapper[4813]: I0219 18:54:48.735662 4813 scope.go:117] "RemoveContainer" containerID="6493a7242bce5b2c90612f6408da117da88b1ac72061ec6857947e43348e30c1" Feb 19 18:54:48 crc kubenswrapper[4813]: I0219 18:54:48.757944 4813 scope.go:117] "RemoveContainer" containerID="cc423f64a2ed3dfd9ab063cd117e4c9a5e3aa2de2355fbb8c6cf7bb650c929da" Feb 19 18:54:48 crc kubenswrapper[4813]: I0219 18:54:48.807031 4813 scope.go:117] "RemoveContainer" containerID="35937ae84e35ae8d4be7a92af84be075bbc5f6113cd7ecd86ce46fac1734609a" Feb 19 18:54:48 crc kubenswrapper[4813]: I0219 18:54:48.837815 4813 scope.go:117] "RemoveContainer" containerID="c663bf7132a1d71bd88008b90def56ee6497f7202061d34bce6f3a92513d526c" Feb 19 18:54:48 crc kubenswrapper[4813]: I0219 18:54:48.865553 4813 scope.go:117] "RemoveContainer" containerID="00253c9334ee177d3261e02381af6b4c6dbd84ba0ca94543e86ca7b07ba45c12" Feb 19 18:54:48 crc kubenswrapper[4813]: I0219 18:54:48.894110 4813 scope.go:117] "RemoveContainer" containerID="ae6e6fea86970a782ff89e6a5e94f9fdc4f0fe974554d09798ed586863363d74" Feb 19 18:54:48 crc kubenswrapper[4813]: I0219 18:54:48.915892 4813 scope.go:117] "RemoveContainer" containerID="590f2f643680a87703dacbc74bd36c5a912b0e44830cf1d97e1cd2bc88daa9ea" Feb 19 18:54:48 crc kubenswrapper[4813]: I0219 18:54:48.937424 4813 scope.go:117] "RemoveContainer" containerID="d54b63de3aad6d9e307f23a72e1f7901457bc2d8af8395fb0d3e63bbbebd3eb2" Feb 19 18:55:30 crc kubenswrapper[4813]: I0219 18:55:30.329846 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:55:30 crc kubenswrapper[4813]: I0219 18:55:30.330515 4813 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:55:49 crc kubenswrapper[4813]: I0219 18:55:49.131005 4813 scope.go:117] "RemoveContainer" containerID="803b975691bded29eae06b673097142de87bf85186b3fd3df69da20ef33ddd64" Feb 19 18:55:49 crc kubenswrapper[4813]: I0219 18:55:49.165652 4813 scope.go:117] "RemoveContainer" containerID="b2e48289ecd4bad8376d6fcd570aeec36797339684b0340044cc436faa9f613e" Feb 19 18:55:49 crc kubenswrapper[4813]: I0219 18:55:49.233708 4813 scope.go:117] "RemoveContainer" containerID="fbf2efd8df51468ee3111d03323da4ae9c317212db4f2d94ec596fcf42163d90" Feb 19 18:55:49 crc kubenswrapper[4813]: I0219 18:55:49.275328 4813 scope.go:117] "RemoveContainer" containerID="13192d61e26b0abb14066637f47e14757025c5626ce910fff4ccd994cdca3e0f" Feb 19 18:55:49 crc kubenswrapper[4813]: I0219 18:55:49.307847 4813 scope.go:117] "RemoveContainer" containerID="9274722aace668916d0e004ffce85ea1a2fff67c03b7e8a4b70e1dc4e953339a" Feb 19 18:55:49 crc kubenswrapper[4813]: I0219 18:55:49.338087 4813 scope.go:117] "RemoveContainer" containerID="f26ff4b3ab8d8a8a5e7e6f0f6f0fc46990e0f6af849d5a9c66de2e6e72499a00" Feb 19 18:55:49 crc kubenswrapper[4813]: I0219 18:55:49.365421 4813 scope.go:117] "RemoveContainer" containerID="2748c6337477604f30bec1728569342fb07d1c41d4b9ae1505ab4f5b1e436fd2" Feb 19 18:55:49 crc kubenswrapper[4813]: I0219 18:55:49.388031 4813 scope.go:117] "RemoveContainer" containerID="b4a6d7a30ef50dd71bcd5afec2db98656e12a3087aa45f96cbbac4ff6cfc9b5f" Feb 19 18:55:49 crc kubenswrapper[4813]: I0219 18:55:49.424340 4813 scope.go:117] "RemoveContainer" containerID="869846648a5af54c9d2dfac6cf01db1cb4983ff40d5a072e82a3effcde006db0" Feb 19 18:55:49 crc kubenswrapper[4813]: I0219 18:55:49.473344 4813 scope.go:117] "RemoveContainer" 
containerID="3de0eee80ceb950d2591d80fbb2ad7694516698ca054530af0e58d17e8d30584" Feb 19 18:55:49 crc kubenswrapper[4813]: I0219 18:55:49.511092 4813 scope.go:117] "RemoveContainer" containerID="2b35cf86320758f6fa476d651db026b27c757fa47f91268776bb1f628c230b52" Feb 19 18:56:00 crc kubenswrapper[4813]: I0219 18:56:00.329871 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:56:00 crc kubenswrapper[4813]: I0219 18:56:00.330820 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:56:30 crc kubenswrapper[4813]: I0219 18:56:30.329930 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 18:56:30 crc kubenswrapper[4813]: I0219 18:56:30.330574 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 18:56:30 crc kubenswrapper[4813]: I0219 18:56:30.330636 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 18:56:30 crc 
kubenswrapper[4813]: I0219 18:56:30.331438 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 18:56:30 crc kubenswrapper[4813]: I0219 18:56:30.331534 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" gracePeriod=600 Feb 19 18:56:30 crc kubenswrapper[4813]: E0219 18:56:30.461822 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:56:30 crc kubenswrapper[4813]: I0219 18:56:30.496764 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" exitCode=0 Feb 19 18:56:30 crc kubenswrapper[4813]: I0219 18:56:30.496853 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17"} Feb 19 18:56:30 crc kubenswrapper[4813]: I0219 18:56:30.496930 4813 scope.go:117] "RemoveContainer" 
containerID="c0f644376cce138d79691366e77b885fec90be67e37a466be8ebae7b3478e829" Feb 19 18:56:30 crc kubenswrapper[4813]: I0219 18:56:30.497496 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:56:30 crc kubenswrapper[4813]: E0219 18:56:30.497767 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:56:41 crc kubenswrapper[4813]: I0219 18:56:41.480250 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:56:41 crc kubenswrapper[4813]: E0219 18:56:41.482107 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:56:49 crc kubenswrapper[4813]: I0219 18:56:49.767063 4813 scope.go:117] "RemoveContainer" containerID="d6492c712dfe40b99d0b2b14e6f1346b42c1c9bffe85732430bdd537d8749da8" Feb 19 18:56:49 crc kubenswrapper[4813]: I0219 18:56:49.801254 4813 scope.go:117] "RemoveContainer" containerID="87b73f80a6ec835366f4bf5b64ba364e2068a899ccab4e89a99247770dab0426" Feb 19 18:56:49 crc kubenswrapper[4813]: I0219 18:56:49.841533 4813 scope.go:117] "RemoveContainer" containerID="81b54b28e409adeced7c4ba8c4d1018f2f0fe17fb4d5cc0cb31579530c721fd8" Feb 19 18:56:49 crc 
kubenswrapper[4813]: I0219 18:56:49.876691 4813 scope.go:117] "RemoveContainer" containerID="1d2afcb379a400372d4098a63f5de68b91f0b29270c3b44019b9649cd59da8b4" Feb 19 18:56:49 crc kubenswrapper[4813]: I0219 18:56:49.912537 4813 scope.go:117] "RemoveContainer" containerID="9e029733d85825b371e364520952b48a1cc2ae4adde3e66c05e62cc539ccb854" Feb 19 18:56:49 crc kubenswrapper[4813]: I0219 18:56:49.932771 4813 scope.go:117] "RemoveContainer" containerID="9397c3a0ab851a08334b46849ddf572b6925e37fadc61f477415f20533b7cf9d" Feb 19 18:56:49 crc kubenswrapper[4813]: I0219 18:56:49.956826 4813 scope.go:117] "RemoveContainer" containerID="c515b6612026ed474298fe3c18c2bf37548d0465b0492238ee919dbb763eac6c" Feb 19 18:56:50 crc kubenswrapper[4813]: I0219 18:56:50.009466 4813 scope.go:117] "RemoveContainer" containerID="0206f662446a4640a2a2a4d84b8c38c156f42108b8dd8ee7e9f13be6d3df0aa5" Feb 19 18:56:50 crc kubenswrapper[4813]: I0219 18:56:50.024932 4813 scope.go:117] "RemoveContainer" containerID="9e12afa0fedf488650c3cf3b2d1dd92ace9aebb22007478a22a377a14b17cb09" Feb 19 18:56:50 crc kubenswrapper[4813]: I0219 18:56:50.042771 4813 scope.go:117] "RemoveContainer" containerID="2aad7eab96447a6cadb62e2950ffd040729196c936832856ae0117d0d29ac117" Feb 19 18:56:50 crc kubenswrapper[4813]: I0219 18:56:50.077961 4813 scope.go:117] "RemoveContainer" containerID="1f96f95505c7053c8d8ab41901548ec1360179712db919575860107692fc7c07" Feb 19 18:56:50 crc kubenswrapper[4813]: I0219 18:56:50.130216 4813 scope.go:117] "RemoveContainer" containerID="b86014540681be4c4fa4c1327a58642cf8d5baceb6108eea1dce480c400a6bc6" Feb 19 18:56:50 crc kubenswrapper[4813]: I0219 18:56:50.181268 4813 scope.go:117] "RemoveContainer" containerID="accd733dfb0a7c65272f9891db6ea8f0b431a14866fe28f386effabcecb8fc2b" Feb 19 18:56:53 crc kubenswrapper[4813]: I0219 18:56:53.471890 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:56:53 crc kubenswrapper[4813]: E0219 
18:56:53.472223 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:57:07 crc kubenswrapper[4813]: I0219 18:57:07.472168 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:57:07 crc kubenswrapper[4813]: E0219 18:57:07.473204 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.093181 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5cv6f"] Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.094224 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovs-vswitchd" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.094322 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovs-vswitchd" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.094384 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-updater" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.094445 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-updater" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.094503 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-expirer" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.094555 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-expirer" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.094662 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e36d83-8928-4c58-8432-eb51809336a7" containerName="mariadb-account-create-update" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.094727 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e36d83-8928-4c58-8432-eb51809336a7" containerName="mariadb-account-create-update" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.094786 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c69ff3db-8806-451a-9df0-c6289c327579" containerName="setup-container" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.094833 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c69ff3db-8806-451a-9df0-c6289c327579" containerName="setup-container" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.094895 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-auditor" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.094960 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-auditor" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.095023 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abaee778-ea35-4887-90c8-2834d3eef00d" containerName="ovn-controller" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.095075 4813 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="abaee778-ea35-4887-90c8-2834d3eef00d" containerName="ovn-controller" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.095128 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovsdb-server" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.095178 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovsdb-server" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.095235 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-updater" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.095286 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-updater" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.095337 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c675ac6-fe1c-42a1-b67e-f958aef3c086" containerName="ovn-northd" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.095388 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c675ac6-fe1c-42a1-b67e-f958aef3c086" containerName="ovn-northd" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.095443 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79cdc675-a16c-4c18-bcef-d844d7a2f75d" containerName="keystone-api" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.095495 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="79cdc675-a16c-4c18-bcef-d844d7a2f75d" containerName="keystone-api" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.095557 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c389f482-3000-4f94-924d-158c9d51a2e9" containerName="nova-metadata-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.095621 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c389f482-3000-4f94-924d-158c9d51a2e9" containerName="nova-metadata-log" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.095674 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aeb8bcb-4373-48d4-9ac6-e6472189e440" containerName="barbican-worker-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.095728 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aeb8bcb-4373-48d4-9ac6-e6472189e440" containerName="barbican-worker-log" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.095781 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="ceilometer-central-agent" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.095828 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="ceilometer-central-agent" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.095883 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-server" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.095934 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-server" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.096006 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153c22ed-7e2e-496e-9a13-9ef0ce79efd8" containerName="nova-api-api" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.096059 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="153c22ed-7e2e-496e-9a13-9ef0ce79efd8" containerName="nova-api-api" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.096118 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c389f482-3000-4f94-924d-158c9d51a2e9" containerName="nova-metadata-metadata" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.096171 4813 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="c389f482-3000-4f94-924d-158c9d51a2e9" containerName="nova-metadata-metadata" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.096227 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" containerName="barbican-api" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.096280 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" containerName="barbican-api" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.096336 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovsdb-server-init" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.096388 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovsdb-server-init" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.096445 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4b651a-00cc-4ca7-b49e-713eed4968b9" containerName="placement-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.096497 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f4b651a-00cc-4ca7-b49e-713eed4968b9" containerName="placement-log" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.096550 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81790f67-278c-4b6a-82e5-ec5bb521c6ac" containerName="barbican-keystone-listener-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.096619 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="81790f67-278c-4b6a-82e5-ec5bb521c6ac" containerName="barbican-keystone-listener-log" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.096678 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-auditor" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.096733 4813 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-auditor" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.096788 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c69ff3db-8806-451a-9df0-c6289c327579" containerName="rabbitmq" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.096840 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c69ff3db-8806-451a-9df0-c6289c327579" containerName="rabbitmq" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.096886 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-server" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.096935 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-server" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.097003 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c675ac6-fe1c-42a1-b67e-f958aef3c086" containerName="openstack-network-exporter" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.097058 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c675ac6-fe1c-42a1-b67e-f958aef3c086" containerName="openstack-network-exporter" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.097111 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3422b8bd-2817-4e8f-8a5a-731c773b73a4" containerName="nova-cell0-conductor-conductor" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.097160 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3422b8bd-2817-4e8f-8a5a-731c773b73a4" containerName="nova-cell0-conductor-conductor" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.097212 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" containerName="glance-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.097260 4813 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" containerName="glance-log" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.097312 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" containerName="glance-httpd" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.097363 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" containerName="glance-httpd" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.097412 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="swift-recon-cron" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.097458 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="swift-recon-cron" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.097513 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" containerName="barbican-api-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.097565 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" containerName="barbican-api-log" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.097646 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aeb8bcb-4373-48d4-9ac6-e6472189e440" containerName="barbican-worker" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.097712 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aeb8bcb-4373-48d4-9ac6-e6472189e440" containerName="barbican-worker" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.097777 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4b651a-00cc-4ca7-b49e-713eed4968b9" containerName="placement-api" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.097854 4813 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6f4b651a-00cc-4ca7-b49e-713eed4968b9" containerName="placement-api" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.097925 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-server" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.097999 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-server" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.098072 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c239fd72-88d6-4394-bf24-be4fb0b3e579" containerName="proxy-httpd" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.098137 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c239fd72-88d6-4394-bf24-be4fb0b3e579" containerName="proxy-httpd" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.098212 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-auditor" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.098269 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-auditor" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.098325 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" containerName="neutron-httpd" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.098371 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" containerName="neutron-httpd" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.098429 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" containerName="neutron-api" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.098482 4813 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" containerName="neutron-api" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.098538 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e36d83-8928-4c58-8432-eb51809336a7" containerName="mariadb-account-create-update" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.098613 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e36d83-8928-4c58-8432-eb51809336a7" containerName="mariadb-account-create-update" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.098666 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="proxy-httpd" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.098711 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="proxy-httpd" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.098764 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b69a5561-31f2-4e4f-96d3-d0db19a6a51f" containerName="nova-scheduler-scheduler" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.098820 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b69a5561-31f2-4e4f-96d3-d0db19a6a51f" containerName="nova-scheduler-scheduler" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.098877 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea3ff24-c40b-432b-a2f8-522284d17ff0" containerName="glance-httpd" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.098928 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea3ff24-c40b-432b-a2f8-522284d17ff0" containerName="glance-httpd" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.099009 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-reaper" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.099063 4813 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-reaper" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.099116 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c239fd72-88d6-4394-bf24-be4fb0b3e579" containerName="proxy-server" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.099168 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c239fd72-88d6-4394-bf24-be4fb0b3e579" containerName="proxy-server" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.099224 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db22a584-f05a-41ba-ad23-387b4100a9e1" containerName="rabbitmq" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.099272 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="db22a584-f05a-41ba-ad23-387b4100a9e1" containerName="rabbitmq" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.099325 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8101a7-841e-4fa7-b98a-030b82e66c94" containerName="mysql-bootstrap" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.099375 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8101a7-841e-4fa7-b98a-030b82e66c94" containerName="mysql-bootstrap" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.099433 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ad2b86e-f285-4acc-a87b-18f97baf0294" containerName="cinder-api-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.099487 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ad2b86e-f285-4acc-a87b-18f97baf0294" containerName="cinder-api-log" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.099538 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db22a584-f05a-41ba-ad23-387b4100a9e1" containerName="setup-container" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.099586 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="db22a584-f05a-41ba-ad23-387b4100a9e1" containerName="setup-container" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.099638 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-replicator" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.099686 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-replicator" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.099739 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="sg-core" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.099790 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="sg-core" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.099928 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8101a7-841e-4fa7-b98a-030b82e66c94" containerName="galera" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.099995 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8101a7-841e-4fa7-b98a-030b82e66c94" containerName="galera" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.100054 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="rsync" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.100102 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="rsync" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.100164 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea3ff24-c40b-432b-a2f8-522284d17ff0" containerName="glance-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.100218 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea3ff24-c40b-432b-a2f8-522284d17ff0" 
containerName="glance-log" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.100275 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-replicator" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.100327 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-replicator" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.100377 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-replicator" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.100428 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-replicator" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.100480 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ad2b86e-f285-4acc-a87b-18f97baf0294" containerName="cinder-api" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.100529 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ad2b86e-f285-4acc-a87b-18f97baf0294" containerName="cinder-api" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.100585 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4e6cd2-75bd-43bd-9f0e-fd35002ab607" containerName="memcached" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.100647 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4e6cd2-75bd-43bd-9f0e-fd35002ab607" containerName="memcached" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.100704 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973faa07-fbab-4a50-ac4a-c62302e9f9c1" containerName="nova-cell1-conductor-conductor" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.100756 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="973faa07-fbab-4a50-ac4a-c62302e9f9c1" 
containerName="nova-cell1-conductor-conductor" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.100810 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81790f67-278c-4b6a-82e5-ec5bb521c6ac" containerName="barbican-keystone-listener" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.100861 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="81790f67-278c-4b6a-82e5-ec5bb521c6ac" containerName="barbican-keystone-listener" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.100917 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca7379e-357b-4246-a820-c1aed48b722e" containerName="kube-state-metrics" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.100986 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca7379e-357b-4246-a820-c1aed48b722e" containerName="kube-state-metrics" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.101047 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="ceilometer-notification-agent" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.101099 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="ceilometer-notification-agent" Feb 19 18:57:09 crc kubenswrapper[4813]: E0219 18:57:09.101150 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153c22ed-7e2e-496e-9a13-9ef0ce79efd8" containerName="nova-api-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.101203 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="153c22ed-7e2e-496e-9a13-9ef0ce79efd8" containerName="nova-api-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.101457 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="swift-recon-cron" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.101523 4813 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-expirer" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.101575 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e36d83-8928-4c58-8432-eb51809336a7" containerName="mariadb-account-create-update" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.101629 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-updater" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.101679 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="abaee778-ea35-4887-90c8-2834d3eef00d" containerName="ovn-controller" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.101730 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea3ff24-c40b-432b-a2f8-522284d17ff0" containerName="glance-httpd" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.101782 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="proxy-httpd" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.101839 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aeb8bcb-4373-48d4-9ac6-e6472189e440" containerName="barbican-worker-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.101895 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c675ac6-fe1c-42a1-b67e-f958aef3c086" containerName="ovn-northd" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.101947 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="sg-core" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.102023 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c389f482-3000-4f94-924d-158c9d51a2e9" containerName="nova-metadata-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.102078 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b69a5561-31f2-4e4f-96d3-d0db19a6a51f" containerName="nova-scheduler-scheduler" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.102128 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-updater" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.102186 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="81790f67-278c-4b6a-82e5-ec5bb521c6ac" containerName="barbican-keystone-listener-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.102237 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="rsync" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.102324 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ad2b86e-f285-4acc-a87b-18f97baf0294" containerName="cinder-api-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.102377 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="ceilometer-notification-agent" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.102431 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" containerName="neutron-api" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103051 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c389f482-3000-4f94-924d-158c9d51a2e9" containerName="nova-metadata-metadata" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103139 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-server" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103192 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-auditor" Feb 19 
18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103244 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-reaper" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103292 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="79cdc675-a16c-4c18-bcef-d844d7a2f75d" containerName="keystone-api" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103347 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="153c22ed-7e2e-496e-9a13-9ef0ce79efd8" containerName="nova-api-api" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103405 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c239fd72-88d6-4394-bf24-be4fb0b3e579" containerName="proxy-httpd" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103462 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" containerName="glance-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103518 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-replicator" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103575 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f4b651a-00cc-4ca7-b49e-713eed4968b9" containerName="placement-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103629 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5fbcd22-57c9-4c44-99e8-f9307f26a525" containerName="ceilometer-central-agent" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103684 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovsdb-server" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103732 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c239fd72-88d6-4394-bf24-be4fb0b3e579" 
containerName="proxy-server" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103782 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="973faa07-fbab-4a50-ac4a-c62302e9f9c1" containerName="nova-cell1-conductor-conductor" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103838 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3422b8bd-2817-4e8f-8a5a-731c773b73a4" containerName="nova-cell0-conductor-conductor" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103893 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" containerName="barbican-api-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.103948 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0055f8bf-7085-42c5-86d4-9cee7033a7d1" containerName="mariadb-account-create-update" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104032 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c7a4f0-3d5e-4e7a-8958-8a2022de3394" containerName="glance-httpd" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104090 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea3ff24-c40b-432b-a2f8-522284d17ff0" containerName="glance-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104145 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f4b651a-00cc-4ca7-b49e-713eed4968b9" containerName="placement-api" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104200 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d4e6cd2-75bd-43bd-9f0e-fd35002ab607" containerName="memcached" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104254 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-auditor" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104307 4813 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="db22a584-f05a-41ba-ad23-387b4100a9e1" containerName="rabbitmq" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104355 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c69ff3db-8806-451a-9df0-c6289c327579" containerName="rabbitmq" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104406 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="153c22ed-7e2e-496e-9a13-9ef0ce79efd8" containerName="nova-api-log" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104457 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8101a7-841e-4fa7-b98a-030b82e66c94" containerName="galera" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104516 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ecb54a2-f23d-45b0-8311-eb5ff83f9f30" containerName="barbican-api" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104578 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="096a88db-91ee-4ac3-b5ae-ba4bca838436" containerName="ovs-vswitchd" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104644 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-server" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104693 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a2a5f6c-f005-4348-a2a2-6cb20bdbe8bd" containerName="neutron-httpd" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104747 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ad2b86e-f285-4acc-a87b-18f97baf0294" containerName="cinder-api" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104810 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-replicator" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104859 4813 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2ca7379e-357b-4246-a820-c1aed48b722e" containerName="kube-state-metrics" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.104914 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="object-auditor" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.107107 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="81790f67-278c-4b6a-82e5-ec5bb521c6ac" containerName="barbican-keystone-listener" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.107200 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aeb8bcb-4373-48d4-9ac6-e6472189e440" containerName="barbican-worker" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.107260 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="account-server" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.107311 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb5586f-0789-4095-84e2-32c8c41984c1" containerName="container-replicator" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.107367 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c675ac6-fe1c-42a1-b67e-f958aef3c086" containerName="openstack-network-exporter" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.107775 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e36d83-8928-4c58-8432-eb51809336a7" containerName="mariadb-account-create-update" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.108610 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.113857 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cv6f"] Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.282864 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-utilities\") pod \"redhat-marketplace-5cv6f\" (UID: \"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7\") " pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.283318 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxrlg\" (UniqueName: \"kubernetes.io/projected/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-kube-api-access-bxrlg\") pod \"redhat-marketplace-5cv6f\" (UID: \"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7\") " pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.283355 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-catalog-content\") pod \"redhat-marketplace-5cv6f\" (UID: \"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7\") " pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.384389 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-utilities\") pod \"redhat-marketplace-5cv6f\" (UID: \"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7\") " pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.384475 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bxrlg\" (UniqueName: \"kubernetes.io/projected/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-kube-api-access-bxrlg\") pod \"redhat-marketplace-5cv6f\" (UID: \"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7\") " pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.384500 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-catalog-content\") pod \"redhat-marketplace-5cv6f\" (UID: \"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7\") " pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.385006 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-utilities\") pod \"redhat-marketplace-5cv6f\" (UID: \"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7\") " pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.385023 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-catalog-content\") pod \"redhat-marketplace-5cv6f\" (UID: \"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7\") " pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.404989 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxrlg\" (UniqueName: \"kubernetes.io/projected/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-kube-api-access-bxrlg\") pod \"redhat-marketplace-5cv6f\" (UID: \"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7\") " pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.425265 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:09 crc kubenswrapper[4813]: I0219 18:57:09.946639 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cv6f"] Feb 19 18:57:10 crc kubenswrapper[4813]: I0219 18:57:10.868804 4813 generic.go:334] "Generic (PLEG): container finished" podID="f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7" containerID="30c40dfdc01eb9ee0f3c3d32a86f0bb8601b183b14546509e694a81f40642ca4" exitCode=0 Feb 19 18:57:10 crc kubenswrapper[4813]: I0219 18:57:10.869098 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cv6f" event={"ID":"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7","Type":"ContainerDied","Data":"30c40dfdc01eb9ee0f3c3d32a86f0bb8601b183b14546509e694a81f40642ca4"} Feb 19 18:57:10 crc kubenswrapper[4813]: I0219 18:57:10.869303 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cv6f" event={"ID":"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7","Type":"ContainerStarted","Data":"1c3d63cefda19d9d59316f614fef540c3b3b31678ceba1fc4c77aafeca354bd5"} Feb 19 18:57:10 crc kubenswrapper[4813]: I0219 18:57:10.871823 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 18:57:11 crc kubenswrapper[4813]: I0219 18:57:11.877165 4813 generic.go:334] "Generic (PLEG): container finished" podID="f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7" containerID="f7e52bf1c59747766b80eea3163f5932c168d258a323e4824cceb6486b2ab2c4" exitCode=0 Feb 19 18:57:11 crc kubenswrapper[4813]: I0219 18:57:11.877228 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cv6f" event={"ID":"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7","Type":"ContainerDied","Data":"f7e52bf1c59747766b80eea3163f5932c168d258a323e4824cceb6486b2ab2c4"} Feb 19 18:57:12 crc kubenswrapper[4813]: I0219 18:57:12.891278 4813 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-5cv6f" event={"ID":"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7","Type":"ContainerStarted","Data":"220a253f02aaffba6bb332977a1cbd67a5a3173609a94b21dbaebf4c9af90a43"} Feb 19 18:57:12 crc kubenswrapper[4813]: I0219 18:57:12.918991 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5cv6f" podStartSLOduration=2.533176596 podStartE2EDuration="3.918944973s" podCreationTimestamp="2026-02-19 18:57:09 +0000 UTC" firstStartedPulling="2026-02-19 18:57:10.871499876 +0000 UTC m=+1650.096940427" lastFinishedPulling="2026-02-19 18:57:12.257268233 +0000 UTC m=+1651.482708804" observedRunningTime="2026-02-19 18:57:12.914674291 +0000 UTC m=+1652.140114892" watchObservedRunningTime="2026-02-19 18:57:12.918944973 +0000 UTC m=+1652.144385534" Feb 19 18:57:19 crc kubenswrapper[4813]: I0219 18:57:19.426361 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:19 crc kubenswrapper[4813]: I0219 18:57:19.427229 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:19 crc kubenswrapper[4813]: I0219 18:57:19.471365 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:57:19 crc kubenswrapper[4813]: E0219 18:57:19.471887 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:57:19 crc kubenswrapper[4813]: I0219 18:57:19.495015 4813 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:19 crc kubenswrapper[4813]: I0219 18:57:19.998090 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:20 crc kubenswrapper[4813]: I0219 18:57:20.045017 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cv6f"] Feb 19 18:57:21 crc kubenswrapper[4813]: I0219 18:57:21.960207 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5cv6f" podUID="f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7" containerName="registry-server" containerID="cri-o://220a253f02aaffba6bb332977a1cbd67a5a3173609a94b21dbaebf4c9af90a43" gracePeriod=2 Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.349178 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.496980 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-utilities\") pod \"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7\" (UID: \"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7\") " Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.497274 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-catalog-content\") pod \"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7\" (UID: \"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7\") " Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.497379 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxrlg\" (UniqueName: \"kubernetes.io/projected/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-kube-api-access-bxrlg\") 
pod \"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7\" (UID: \"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7\") " Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.498991 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-utilities" (OuterVolumeSpecName: "utilities") pod "f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7" (UID: "f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.510149 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-kube-api-access-bxrlg" (OuterVolumeSpecName: "kube-api-access-bxrlg") pod "f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7" (UID: "f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7"). InnerVolumeSpecName "kube-api-access-bxrlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.536759 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7" (UID: "f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.600531 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.600584 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxrlg\" (UniqueName: \"kubernetes.io/projected/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-kube-api-access-bxrlg\") on node \"crc\" DevicePath \"\"" Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.600603 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.967386 4813 generic.go:334] "Generic (PLEG): container finished" podID="f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7" containerID="220a253f02aaffba6bb332977a1cbd67a5a3173609a94b21dbaebf4c9af90a43" exitCode=0 Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.967442 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cv6f" event={"ID":"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7","Type":"ContainerDied","Data":"220a253f02aaffba6bb332977a1cbd67a5a3173609a94b21dbaebf4c9af90a43"} Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.967456 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5cv6f" Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.967482 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5cv6f" event={"ID":"f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7","Type":"ContainerDied","Data":"1c3d63cefda19d9d59316f614fef540c3b3b31678ceba1fc4c77aafeca354bd5"} Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.967697 4813 scope.go:117] "RemoveContainer" containerID="220a253f02aaffba6bb332977a1cbd67a5a3173609a94b21dbaebf4c9af90a43" Feb 19 18:57:22 crc kubenswrapper[4813]: I0219 18:57:22.983812 4813 scope.go:117] "RemoveContainer" containerID="f7e52bf1c59747766b80eea3163f5932c168d258a323e4824cceb6486b2ab2c4" Feb 19 18:57:23 crc kubenswrapper[4813]: I0219 18:57:23.000270 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cv6f"] Feb 19 18:57:23 crc kubenswrapper[4813]: I0219 18:57:23.000968 4813 scope.go:117] "RemoveContainer" containerID="30c40dfdc01eb9ee0f3c3d32a86f0bb8601b183b14546509e694a81f40642ca4" Feb 19 18:57:23 crc kubenswrapper[4813]: I0219 18:57:23.006704 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5cv6f"] Feb 19 18:57:23 crc kubenswrapper[4813]: I0219 18:57:23.031541 4813 scope.go:117] "RemoveContainer" containerID="220a253f02aaffba6bb332977a1cbd67a5a3173609a94b21dbaebf4c9af90a43" Feb 19 18:57:23 crc kubenswrapper[4813]: E0219 18:57:23.032149 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"220a253f02aaffba6bb332977a1cbd67a5a3173609a94b21dbaebf4c9af90a43\": container with ID starting with 220a253f02aaffba6bb332977a1cbd67a5a3173609a94b21dbaebf4c9af90a43 not found: ID does not exist" containerID="220a253f02aaffba6bb332977a1cbd67a5a3173609a94b21dbaebf4c9af90a43" Feb 19 18:57:23 crc kubenswrapper[4813]: I0219 18:57:23.032181 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"220a253f02aaffba6bb332977a1cbd67a5a3173609a94b21dbaebf4c9af90a43"} err="failed to get container status \"220a253f02aaffba6bb332977a1cbd67a5a3173609a94b21dbaebf4c9af90a43\": rpc error: code = NotFound desc = could not find container \"220a253f02aaffba6bb332977a1cbd67a5a3173609a94b21dbaebf4c9af90a43\": container with ID starting with 220a253f02aaffba6bb332977a1cbd67a5a3173609a94b21dbaebf4c9af90a43 not found: ID does not exist" Feb 19 18:57:23 crc kubenswrapper[4813]: I0219 18:57:23.032203 4813 scope.go:117] "RemoveContainer" containerID="f7e52bf1c59747766b80eea3163f5932c168d258a323e4824cceb6486b2ab2c4" Feb 19 18:57:23 crc kubenswrapper[4813]: E0219 18:57:23.032485 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e52bf1c59747766b80eea3163f5932c168d258a323e4824cceb6486b2ab2c4\": container with ID starting with f7e52bf1c59747766b80eea3163f5932c168d258a323e4824cceb6486b2ab2c4 not found: ID does not exist" containerID="f7e52bf1c59747766b80eea3163f5932c168d258a323e4824cceb6486b2ab2c4" Feb 19 18:57:23 crc kubenswrapper[4813]: I0219 18:57:23.032516 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e52bf1c59747766b80eea3163f5932c168d258a323e4824cceb6486b2ab2c4"} err="failed to get container status \"f7e52bf1c59747766b80eea3163f5932c168d258a323e4824cceb6486b2ab2c4\": rpc error: code = NotFound desc = could not find container \"f7e52bf1c59747766b80eea3163f5932c168d258a323e4824cceb6486b2ab2c4\": container with ID starting with f7e52bf1c59747766b80eea3163f5932c168d258a323e4824cceb6486b2ab2c4 not found: ID does not exist" Feb 19 18:57:23 crc kubenswrapper[4813]: I0219 18:57:23.032532 4813 scope.go:117] "RemoveContainer" containerID="30c40dfdc01eb9ee0f3c3d32a86f0bb8601b183b14546509e694a81f40642ca4" Feb 19 18:57:23 crc kubenswrapper[4813]: E0219 
18:57:23.032829 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30c40dfdc01eb9ee0f3c3d32a86f0bb8601b183b14546509e694a81f40642ca4\": container with ID starting with 30c40dfdc01eb9ee0f3c3d32a86f0bb8601b183b14546509e694a81f40642ca4 not found: ID does not exist" containerID="30c40dfdc01eb9ee0f3c3d32a86f0bb8601b183b14546509e694a81f40642ca4" Feb 19 18:57:23 crc kubenswrapper[4813]: I0219 18:57:23.032857 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30c40dfdc01eb9ee0f3c3d32a86f0bb8601b183b14546509e694a81f40642ca4"} err="failed to get container status \"30c40dfdc01eb9ee0f3c3d32a86f0bb8601b183b14546509e694a81f40642ca4\": rpc error: code = NotFound desc = could not find container \"30c40dfdc01eb9ee0f3c3d32a86f0bb8601b183b14546509e694a81f40642ca4\": container with ID starting with 30c40dfdc01eb9ee0f3c3d32a86f0bb8601b183b14546509e694a81f40642ca4 not found: ID does not exist" Feb 19 18:57:23 crc kubenswrapper[4813]: I0219 18:57:23.482480 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7" path="/var/lib/kubelet/pods/f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7/volumes" Feb 19 18:57:33 crc kubenswrapper[4813]: I0219 18:57:33.472421 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:57:33 crc kubenswrapper[4813]: E0219 18:57:33.473558 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:57:47 crc kubenswrapper[4813]: I0219 18:57:47.471653 
4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:57:47 crc kubenswrapper[4813]: E0219 18:57:47.473830 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:57:50 crc kubenswrapper[4813]: I0219 18:57:50.361231 4813 scope.go:117] "RemoveContainer" containerID="9c53928bb686ea7fb8a94957477024c8651c1173220ce378d9c31e1768bb67c2" Feb 19 18:57:50 crc kubenswrapper[4813]: I0219 18:57:50.391129 4813 scope.go:117] "RemoveContainer" containerID="1ba735df184697c88fb38b34247b6c1cb15bd976c21e2519ffc06459f7f5c81e" Feb 19 18:57:50 crc kubenswrapper[4813]: I0219 18:57:50.445891 4813 scope.go:117] "RemoveContainer" containerID="e716782824789876e737f679b53e25e942ffd227219538feea44c84706b6d858" Feb 19 18:57:50 crc kubenswrapper[4813]: I0219 18:57:50.506524 4813 scope.go:117] "RemoveContainer" containerID="2d1006a4f53f89f14386fb997d2b8b3ec2b732e5dcdb7115a52aff9373a2b497" Feb 19 18:57:50 crc kubenswrapper[4813]: I0219 18:57:50.522999 4813 scope.go:117] "RemoveContainer" containerID="ab1853900bd306abf16dd54781d641cac8027e702677508d3cc7c2e1d0a2ddb2" Feb 19 18:57:50 crc kubenswrapper[4813]: I0219 18:57:50.542466 4813 scope.go:117] "RemoveContainer" containerID="1974dc952a0d7a118c1b4ae017d610213b3f38ecf31c3a385abe73f7a324638b" Feb 19 18:57:50 crc kubenswrapper[4813]: I0219 18:57:50.562188 4813 scope.go:117] "RemoveContainer" containerID="163b6a0fe25e1d50e784cbe44b2dec473230f88d0f1845044d6b162853450091" Feb 19 18:57:50 crc kubenswrapper[4813]: I0219 18:57:50.582149 4813 scope.go:117] "RemoveContainer" 
containerID="67373de9f836bbcc9734c63987d52b1116dea47802d978e06050d215a0cf0e97" Feb 19 18:57:50 crc kubenswrapper[4813]: I0219 18:57:50.599566 4813 scope.go:117] "RemoveContainer" containerID="994efa1617047ad5bf4f6343264fd62528e456b0a7f377bf6017d0bb47ed4006" Feb 19 18:58:00 crc kubenswrapper[4813]: I0219 18:58:00.471381 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:58:00 crc kubenswrapper[4813]: E0219 18:58:00.472545 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:58:13 crc kubenswrapper[4813]: I0219 18:58:13.472107 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:58:13 crc kubenswrapper[4813]: E0219 18:58:13.473091 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:58:25 crc kubenswrapper[4813]: I0219 18:58:25.471646 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:58:25 crc kubenswrapper[4813]: E0219 18:58:25.472303 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:58:38 crc kubenswrapper[4813]: I0219 18:58:38.472993 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:58:38 crc kubenswrapper[4813]: E0219 18:58:38.474038 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:58:50 crc kubenswrapper[4813]: I0219 18:58:50.696111 4813 scope.go:117] "RemoveContainer" containerID="2687e2921a7978760aa8bb23883efc53ffec14ee1347d79693b3de333441e1a5" Feb 19 18:58:53 crc kubenswrapper[4813]: I0219 18:58:53.472440 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:58:53 crc kubenswrapper[4813]: E0219 18:58:53.473288 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:59:07 crc kubenswrapper[4813]: I0219 18:59:07.474539 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:59:07 crc 
kubenswrapper[4813]: E0219 18:59:07.475356 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:59:21 crc kubenswrapper[4813]: I0219 18:59:21.473044 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:59:21 crc kubenswrapper[4813]: E0219 18:59:21.474229 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:59:32 crc kubenswrapper[4813]: I0219 18:59:32.471810 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:59:32 crc kubenswrapper[4813]: E0219 18:59:32.472945 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:59:47 crc kubenswrapper[4813]: I0219 18:59:47.471710 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 
19 18:59:47 crc kubenswrapper[4813]: E0219 18:59:47.472658 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 18:59:59 crc kubenswrapper[4813]: I0219 18:59:59.482662 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 18:59:59 crc kubenswrapper[4813]: E0219 18:59:59.483780 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.174421 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z"] Feb 19 19:00:00 crc kubenswrapper[4813]: E0219 19:00:00.174919 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7" containerName="extract-utilities" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.174982 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7" containerName="extract-utilities" Feb 19 19:00:00 crc kubenswrapper[4813]: E0219 19:00:00.175022 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7" containerName="extract-content" Feb 19 19:00:00 crc kubenswrapper[4813]: 
I0219 19:00:00.175037 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7" containerName="extract-content" Feb 19 19:00:00 crc kubenswrapper[4813]: E0219 19:00:00.175064 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7" containerName="registry-server" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.175077 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7" containerName="registry-server" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.175355 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f63b7c64-7a0f-4374-b3bb-9f7fa9c368e7" containerName="registry-server" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.176128 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.179459 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.192985 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.197937 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z"] Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.324908 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ed4271d-81b9-4f32-a9b6-723332602a3d-secret-volume\") pod \"collect-profiles-29525460-ljw6z\" (UID: \"4ed4271d-81b9-4f32-a9b6-723332602a3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" 
Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.325116 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ed4271d-81b9-4f32-a9b6-723332602a3d-config-volume\") pod \"collect-profiles-29525460-ljw6z\" (UID: \"4ed4271d-81b9-4f32-a9b6-723332602a3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.325167 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66nvs\" (UniqueName: \"kubernetes.io/projected/4ed4271d-81b9-4f32-a9b6-723332602a3d-kube-api-access-66nvs\") pod \"collect-profiles-29525460-ljw6z\" (UID: \"4ed4271d-81b9-4f32-a9b6-723332602a3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.426534 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ed4271d-81b9-4f32-a9b6-723332602a3d-secret-volume\") pod \"collect-profiles-29525460-ljw6z\" (UID: \"4ed4271d-81b9-4f32-a9b6-723332602a3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.426814 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ed4271d-81b9-4f32-a9b6-723332602a3d-config-volume\") pod \"collect-profiles-29525460-ljw6z\" (UID: \"4ed4271d-81b9-4f32-a9b6-723332602a3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.426855 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66nvs\" (UniqueName: 
\"kubernetes.io/projected/4ed4271d-81b9-4f32-a9b6-723332602a3d-kube-api-access-66nvs\") pod \"collect-profiles-29525460-ljw6z\" (UID: \"4ed4271d-81b9-4f32-a9b6-723332602a3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.429143 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ed4271d-81b9-4f32-a9b6-723332602a3d-config-volume\") pod \"collect-profiles-29525460-ljw6z\" (UID: \"4ed4271d-81b9-4f32-a9b6-723332602a3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.438008 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ed4271d-81b9-4f32-a9b6-723332602a3d-secret-volume\") pod \"collect-profiles-29525460-ljw6z\" (UID: \"4ed4271d-81b9-4f32-a9b6-723332602a3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.466373 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66nvs\" (UniqueName: \"kubernetes.io/projected/4ed4271d-81b9-4f32-a9b6-723332602a3d-kube-api-access-66nvs\") pod \"collect-profiles-29525460-ljw6z\" (UID: \"4ed4271d-81b9-4f32-a9b6-723332602a3d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.509136 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" Feb 19 19:00:00 crc kubenswrapper[4813]: I0219 19:00:00.790898 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z"] Feb 19 19:00:01 crc kubenswrapper[4813]: I0219 19:00:01.419311 4813 generic.go:334] "Generic (PLEG): container finished" podID="4ed4271d-81b9-4f32-a9b6-723332602a3d" containerID="d34d1ff45ea0f54a8c9e8fa196da33996c83eeb35d66148751215d84d3c50c6f" exitCode=0 Feb 19 19:00:01 crc kubenswrapper[4813]: I0219 19:00:01.419429 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" event={"ID":"4ed4271d-81b9-4f32-a9b6-723332602a3d","Type":"ContainerDied","Data":"d34d1ff45ea0f54a8c9e8fa196da33996c83eeb35d66148751215d84d3c50c6f"} Feb 19 19:00:01 crc kubenswrapper[4813]: I0219 19:00:01.419634 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" event={"ID":"4ed4271d-81b9-4f32-a9b6-723332602a3d","Type":"ContainerStarted","Data":"0f5a097a8407db9312bcb448e3ebb563bcaffb02e029bc4e0c95ff15e5478e2c"} Feb 19 19:00:02 crc kubenswrapper[4813]: I0219 19:00:02.729542 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" Feb 19 19:00:02 crc kubenswrapper[4813]: I0219 19:00:02.865351 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66nvs\" (UniqueName: \"kubernetes.io/projected/4ed4271d-81b9-4f32-a9b6-723332602a3d-kube-api-access-66nvs\") pod \"4ed4271d-81b9-4f32-a9b6-723332602a3d\" (UID: \"4ed4271d-81b9-4f32-a9b6-723332602a3d\") " Feb 19 19:00:02 crc kubenswrapper[4813]: I0219 19:00:02.865552 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ed4271d-81b9-4f32-a9b6-723332602a3d-secret-volume\") pod \"4ed4271d-81b9-4f32-a9b6-723332602a3d\" (UID: \"4ed4271d-81b9-4f32-a9b6-723332602a3d\") " Feb 19 19:00:02 crc kubenswrapper[4813]: I0219 19:00:02.865654 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ed4271d-81b9-4f32-a9b6-723332602a3d-config-volume\") pod \"4ed4271d-81b9-4f32-a9b6-723332602a3d\" (UID: \"4ed4271d-81b9-4f32-a9b6-723332602a3d\") " Feb 19 19:00:02 crc kubenswrapper[4813]: I0219 19:00:02.866469 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ed4271d-81b9-4f32-a9b6-723332602a3d-config-volume" (OuterVolumeSpecName: "config-volume") pod "4ed4271d-81b9-4f32-a9b6-723332602a3d" (UID: "4ed4271d-81b9-4f32-a9b6-723332602a3d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:00:02 crc kubenswrapper[4813]: I0219 19:00:02.873027 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed4271d-81b9-4f32-a9b6-723332602a3d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4ed4271d-81b9-4f32-a9b6-723332602a3d" (UID: "4ed4271d-81b9-4f32-a9b6-723332602a3d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:00:02 crc kubenswrapper[4813]: I0219 19:00:02.874316 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ed4271d-81b9-4f32-a9b6-723332602a3d-kube-api-access-66nvs" (OuterVolumeSpecName: "kube-api-access-66nvs") pod "4ed4271d-81b9-4f32-a9b6-723332602a3d" (UID: "4ed4271d-81b9-4f32-a9b6-723332602a3d"). InnerVolumeSpecName "kube-api-access-66nvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:00:02 crc kubenswrapper[4813]: I0219 19:00:02.968158 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ed4271d-81b9-4f32-a9b6-723332602a3d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 19:00:02 crc kubenswrapper[4813]: I0219 19:00:02.968221 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66nvs\" (UniqueName: \"kubernetes.io/projected/4ed4271d-81b9-4f32-a9b6-723332602a3d-kube-api-access-66nvs\") on node \"crc\" DevicePath \"\"" Feb 19 19:00:02 crc kubenswrapper[4813]: I0219 19:00:02.968247 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ed4271d-81b9-4f32-a9b6-723332602a3d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 19:00:03 crc kubenswrapper[4813]: I0219 19:00:03.436913 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" Feb 19 19:00:03 crc kubenswrapper[4813]: I0219 19:00:03.436903 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z" event={"ID":"4ed4271d-81b9-4f32-a9b6-723332602a3d","Type":"ContainerDied","Data":"0f5a097a8407db9312bcb448e3ebb563bcaffb02e029bc4e0c95ff15e5478e2c"} Feb 19 19:00:03 crc kubenswrapper[4813]: I0219 19:00:03.437760 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f5a097a8407db9312bcb448e3ebb563bcaffb02e029bc4e0c95ff15e5478e2c" Feb 19 19:00:13 crc kubenswrapper[4813]: I0219 19:00:13.471669 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 19:00:13 crc kubenswrapper[4813]: E0219 19:00:13.472691 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:00:26 crc kubenswrapper[4813]: I0219 19:00:26.472075 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 19:00:26 crc kubenswrapper[4813]: E0219 19:00:26.472890 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" 
podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:00:38 crc kubenswrapper[4813]: I0219 19:00:38.472111 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 19:00:38 crc kubenswrapper[4813]: E0219 19:00:38.473328 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:00:51 crc kubenswrapper[4813]: I0219 19:00:51.479005 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 19:00:51 crc kubenswrapper[4813]: E0219 19:00:51.479987 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.504847 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s5g9j"] Feb 19 19:01:00 crc kubenswrapper[4813]: E0219 19:01:00.505744 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ed4271d-81b9-4f32-a9b6-723332602a3d" containerName="collect-profiles" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.505761 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ed4271d-81b9-4f32-a9b6-723332602a3d" containerName="collect-profiles" Feb 19 19:01:00 crc kubenswrapper[4813]: 
I0219 19:01:00.505939 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ed4271d-81b9-4f32-a9b6-723332602a3d" containerName="collect-profiles" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.507420 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.526371 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5g9j"] Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.645231 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-catalog-content\") pod \"community-operators-s5g9j\" (UID: \"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7\") " pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.645323 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-utilities\") pod \"community-operators-s5g9j\" (UID: \"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7\") " pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.645381 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddwdq\" (UniqueName: \"kubernetes.io/projected/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-kube-api-access-ddwdq\") pod \"community-operators-s5g9j\" (UID: \"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7\") " pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.698238 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nqxk9"] Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 
19:01:00.699942 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.714452 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nqxk9"] Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.746857 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-catalog-content\") pod \"community-operators-s5g9j\" (UID: \"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7\") " pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.746926 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-utilities\") pod \"community-operators-s5g9j\" (UID: \"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7\") " pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.746969 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddwdq\" (UniqueName: \"kubernetes.io/projected/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-kube-api-access-ddwdq\") pod \"community-operators-s5g9j\" (UID: \"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7\") " pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.747611 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-catalog-content\") pod \"community-operators-s5g9j\" (UID: \"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7\") " pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.747740 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-utilities\") pod \"community-operators-s5g9j\" (UID: \"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7\") " pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.767327 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddwdq\" (UniqueName: \"kubernetes.io/projected/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-kube-api-access-ddwdq\") pod \"community-operators-s5g9j\" (UID: \"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7\") " pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.831258 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.848456 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-utilities\") pod \"certified-operators-nqxk9\" (UID: \"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae\") " pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.848651 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-catalog-content\") pod \"certified-operators-nqxk9\" (UID: \"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae\") " pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.848707 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml6rt\" (UniqueName: \"kubernetes.io/projected/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-kube-api-access-ml6rt\") pod 
\"certified-operators-nqxk9\" (UID: \"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae\") " pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.951722 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-catalog-content\") pod \"certified-operators-nqxk9\" (UID: \"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae\") " pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.951782 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml6rt\" (UniqueName: \"kubernetes.io/projected/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-kube-api-access-ml6rt\") pod \"certified-operators-nqxk9\" (UID: \"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae\") " pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.952031 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-utilities\") pod \"certified-operators-nqxk9\" (UID: \"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae\") " pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.952778 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-catalog-content\") pod \"certified-operators-nqxk9\" (UID: \"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae\") " pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.952825 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-utilities\") pod \"certified-operators-nqxk9\" (UID: 
\"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae\") " pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:00 crc kubenswrapper[4813]: I0219 19:01:00.977814 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml6rt\" (UniqueName: \"kubernetes.io/projected/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-kube-api-access-ml6rt\") pod \"certified-operators-nqxk9\" (UID: \"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae\") " pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:01 crc kubenswrapper[4813]: I0219 19:01:01.021934 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:01 crc kubenswrapper[4813]: I0219 19:01:01.189892 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5g9j"] Feb 19 19:01:01 crc kubenswrapper[4813]: I0219 19:01:01.539589 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nqxk9"] Feb 19 19:01:01 crc kubenswrapper[4813]: I0219 19:01:01.935335 4813 generic.go:334] "Generic (PLEG): container finished" podID="a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae" containerID="80b68bff31c2ad757a8dfbaaeceab088760dc6e74cac3826127a9be35848d938" exitCode=0 Feb 19 19:01:01 crc kubenswrapper[4813]: I0219 19:01:01.935451 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqxk9" event={"ID":"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae","Type":"ContainerDied","Data":"80b68bff31c2ad757a8dfbaaeceab088760dc6e74cac3826127a9be35848d938"} Feb 19 19:01:01 crc kubenswrapper[4813]: I0219 19:01:01.935512 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqxk9" event={"ID":"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae","Type":"ContainerStarted","Data":"950013b683a079c9b21896ec38790103e62483ddc8b2df68caf0230a707e3eae"} Feb 19 19:01:01 crc kubenswrapper[4813]: I0219 
19:01:01.939386 4813 generic.go:334] "Generic (PLEG): container finished" podID="1dbf942e-9106-4e8d-85d4-9d3fe5d925f7" containerID="01a4bdfce1911168bfbf6d8ef6774ecd483dac841a6990a9f37c8a578539c66f" exitCode=0 Feb 19 19:01:01 crc kubenswrapper[4813]: I0219 19:01:01.939441 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5g9j" event={"ID":"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7","Type":"ContainerDied","Data":"01a4bdfce1911168bfbf6d8ef6774ecd483dac841a6990a9f37c8a578539c66f"} Feb 19 19:01:01 crc kubenswrapper[4813]: I0219 19:01:01.939479 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5g9j" event={"ID":"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7","Type":"ContainerStarted","Data":"33410c4afcaeb98ed60214f46610838e6b6926e79af9d5ac8d8efc2337b2c61b"} Feb 19 19:01:02 crc kubenswrapper[4813]: I0219 19:01:02.472222 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 19:01:02 crc kubenswrapper[4813]: E0219 19:01:02.472593 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:01:02 crc kubenswrapper[4813]: I0219 19:01:02.948138 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqxk9" event={"ID":"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae","Type":"ContainerStarted","Data":"b545e7a9db8442b40e96b6f7770aff851a54eb782ea66a99520e39e4d41b1eeb"} Feb 19 19:01:02 crc kubenswrapper[4813]: I0219 19:01:02.951929 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-s5g9j" event={"ID":"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7","Type":"ContainerStarted","Data":"9f0c9c9ff30d9a9313baa56bdecad57010ef157dfa89b7cad11fa48515768603"} Feb 19 19:01:03 crc kubenswrapper[4813]: I0219 19:01:03.959287 4813 generic.go:334] "Generic (PLEG): container finished" podID="a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae" containerID="b545e7a9db8442b40e96b6f7770aff851a54eb782ea66a99520e39e4d41b1eeb" exitCode=0 Feb 19 19:01:03 crc kubenswrapper[4813]: I0219 19:01:03.959362 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqxk9" event={"ID":"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae","Type":"ContainerDied","Data":"b545e7a9db8442b40e96b6f7770aff851a54eb782ea66a99520e39e4d41b1eeb"} Feb 19 19:01:03 crc kubenswrapper[4813]: I0219 19:01:03.960849 4813 generic.go:334] "Generic (PLEG): container finished" podID="1dbf942e-9106-4e8d-85d4-9d3fe5d925f7" containerID="9f0c9c9ff30d9a9313baa56bdecad57010ef157dfa89b7cad11fa48515768603" exitCode=0 Feb 19 19:01:03 crc kubenswrapper[4813]: I0219 19:01:03.960894 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5g9j" event={"ID":"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7","Type":"ContainerDied","Data":"9f0c9c9ff30d9a9313baa56bdecad57010ef157dfa89b7cad11fa48515768603"} Feb 19 19:01:04 crc kubenswrapper[4813]: I0219 19:01:04.970932 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5g9j" event={"ID":"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7","Type":"ContainerStarted","Data":"2e93bd88362f547bd0be160513cd2015d1fb3163edf0f731297f25e13c4d942f"} Feb 19 19:01:04 crc kubenswrapper[4813]: I0219 19:01:04.973931 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqxk9" 
event={"ID":"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae","Type":"ContainerStarted","Data":"50c199ae25dbc32ad984fa46f160ba33fcd34ac98d8d7ace21b3aa653dc32c95"} Feb 19 19:01:05 crc kubenswrapper[4813]: I0219 19:01:05.002199 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s5g9j" podStartSLOduration=2.309820439 podStartE2EDuration="5.002177199s" podCreationTimestamp="2026-02-19 19:01:00 +0000 UTC" firstStartedPulling="2026-02-19 19:01:01.942150227 +0000 UTC m=+1881.167590768" lastFinishedPulling="2026-02-19 19:01:04.634506997 +0000 UTC m=+1883.859947528" observedRunningTime="2026-02-19 19:01:04.994664298 +0000 UTC m=+1884.220104839" watchObservedRunningTime="2026-02-19 19:01:05.002177199 +0000 UTC m=+1884.227617750" Feb 19 19:01:05 crc kubenswrapper[4813]: I0219 19:01:05.021829 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nqxk9" podStartSLOduration=2.504595279 podStartE2EDuration="5.021811965s" podCreationTimestamp="2026-02-19 19:01:00 +0000 UTC" firstStartedPulling="2026-02-19 19:01:01.936999889 +0000 UTC m=+1881.162440430" lastFinishedPulling="2026-02-19 19:01:04.454216575 +0000 UTC m=+1883.679657116" observedRunningTime="2026-02-19 19:01:05.020169505 +0000 UTC m=+1884.245610076" watchObservedRunningTime="2026-02-19 19:01:05.021811965 +0000 UTC m=+1884.247252506" Feb 19 19:01:05 crc kubenswrapper[4813]: I0219 19:01:05.508488 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rxp4t"] Feb 19 19:01:05 crc kubenswrapper[4813]: I0219 19:01:05.510747 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:05 crc kubenswrapper[4813]: I0219 19:01:05.525859 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rxp4t"] Feb 19 19:01:05 crc kubenswrapper[4813]: I0219 19:01:05.529492 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2mm5\" (UniqueName: \"kubernetes.io/projected/a3b9a409-01cb-49a0-af45-fc4934ee981a-kube-api-access-b2mm5\") pod \"redhat-operators-rxp4t\" (UID: \"a3b9a409-01cb-49a0-af45-fc4934ee981a\") " pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:05 crc kubenswrapper[4813]: I0219 19:01:05.529605 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b9a409-01cb-49a0-af45-fc4934ee981a-utilities\") pod \"redhat-operators-rxp4t\" (UID: \"a3b9a409-01cb-49a0-af45-fc4934ee981a\") " pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:05 crc kubenswrapper[4813]: I0219 19:01:05.529993 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b9a409-01cb-49a0-af45-fc4934ee981a-catalog-content\") pod \"redhat-operators-rxp4t\" (UID: \"a3b9a409-01cb-49a0-af45-fc4934ee981a\") " pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:05 crc kubenswrapper[4813]: I0219 19:01:05.631811 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2mm5\" (UniqueName: \"kubernetes.io/projected/a3b9a409-01cb-49a0-af45-fc4934ee981a-kube-api-access-b2mm5\") pod \"redhat-operators-rxp4t\" (UID: \"a3b9a409-01cb-49a0-af45-fc4934ee981a\") " pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:05 crc kubenswrapper[4813]: I0219 19:01:05.631866 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b9a409-01cb-49a0-af45-fc4934ee981a-utilities\") pod \"redhat-operators-rxp4t\" (UID: \"a3b9a409-01cb-49a0-af45-fc4934ee981a\") " pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:05 crc kubenswrapper[4813]: I0219 19:01:05.631895 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b9a409-01cb-49a0-af45-fc4934ee981a-catalog-content\") pod \"redhat-operators-rxp4t\" (UID: \"a3b9a409-01cb-49a0-af45-fc4934ee981a\") " pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:05 crc kubenswrapper[4813]: I0219 19:01:05.632433 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b9a409-01cb-49a0-af45-fc4934ee981a-catalog-content\") pod \"redhat-operators-rxp4t\" (UID: \"a3b9a409-01cb-49a0-af45-fc4934ee981a\") " pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:05 crc kubenswrapper[4813]: I0219 19:01:05.632573 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b9a409-01cb-49a0-af45-fc4934ee981a-utilities\") pod \"redhat-operators-rxp4t\" (UID: \"a3b9a409-01cb-49a0-af45-fc4934ee981a\") " pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:05 crc kubenswrapper[4813]: I0219 19:01:05.664298 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2mm5\" (UniqueName: \"kubernetes.io/projected/a3b9a409-01cb-49a0-af45-fc4934ee981a-kube-api-access-b2mm5\") pod \"redhat-operators-rxp4t\" (UID: \"a3b9a409-01cb-49a0-af45-fc4934ee981a\") " pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:05 crc kubenswrapper[4813]: I0219 19:01:05.828662 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:06 crc kubenswrapper[4813]: I0219 19:01:06.287728 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rxp4t"] Feb 19 19:01:06 crc kubenswrapper[4813]: W0219 19:01:06.294659 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3b9a409_01cb_49a0_af45_fc4934ee981a.slice/crio-ac7cd663373e705b765be68a7cae37067bc260cfd26bb0c45116198aeb9fc2c6 WatchSource:0}: Error finding container ac7cd663373e705b765be68a7cae37067bc260cfd26bb0c45116198aeb9fc2c6: Status 404 returned error can't find the container with id ac7cd663373e705b765be68a7cae37067bc260cfd26bb0c45116198aeb9fc2c6 Feb 19 19:01:06 crc kubenswrapper[4813]: I0219 19:01:06.991317 4813 generic.go:334] "Generic (PLEG): container finished" podID="a3b9a409-01cb-49a0-af45-fc4934ee981a" containerID="8aaa23f34df85d4bac82f331e6cf7165c7e8818bf4c88433ab8ae17ac6e335e3" exitCode=0 Feb 19 19:01:06 crc kubenswrapper[4813]: I0219 19:01:06.991374 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rxp4t" event={"ID":"a3b9a409-01cb-49a0-af45-fc4934ee981a","Type":"ContainerDied","Data":"8aaa23f34df85d4bac82f331e6cf7165c7e8818bf4c88433ab8ae17ac6e335e3"} Feb 19 19:01:06 crc kubenswrapper[4813]: I0219 19:01:06.991599 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rxp4t" event={"ID":"a3b9a409-01cb-49a0-af45-fc4934ee981a","Type":"ContainerStarted","Data":"ac7cd663373e705b765be68a7cae37067bc260cfd26bb0c45116198aeb9fc2c6"} Feb 19 19:01:08 crc kubenswrapper[4813]: I0219 19:01:08.001122 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rxp4t" 
event={"ID":"a3b9a409-01cb-49a0-af45-fc4934ee981a","Type":"ContainerStarted","Data":"4a7618c670a6eaf8e68a87e4efef39f9dca68465b00e1a9b12510c26c8593761"} Feb 19 19:01:09 crc kubenswrapper[4813]: I0219 19:01:09.012373 4813 generic.go:334] "Generic (PLEG): container finished" podID="a3b9a409-01cb-49a0-af45-fc4934ee981a" containerID="4a7618c670a6eaf8e68a87e4efef39f9dca68465b00e1a9b12510c26c8593761" exitCode=0 Feb 19 19:01:09 crc kubenswrapper[4813]: I0219 19:01:09.012437 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rxp4t" event={"ID":"a3b9a409-01cb-49a0-af45-fc4934ee981a","Type":"ContainerDied","Data":"4a7618c670a6eaf8e68a87e4efef39f9dca68465b00e1a9b12510c26c8593761"} Feb 19 19:01:10 crc kubenswrapper[4813]: I0219 19:01:10.024438 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rxp4t" event={"ID":"a3b9a409-01cb-49a0-af45-fc4934ee981a","Type":"ContainerStarted","Data":"83ba235ff152eae71c1f778c92245b4caa264be02057c236bf63d34e96af9290"} Feb 19 19:01:10 crc kubenswrapper[4813]: I0219 19:01:10.046555 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rxp4t" podStartSLOduration=2.614391037 podStartE2EDuration="5.046533569s" podCreationTimestamp="2026-02-19 19:01:05 +0000 UTC" firstStartedPulling="2026-02-19 19:01:06.992591414 +0000 UTC m=+1886.218031975" lastFinishedPulling="2026-02-19 19:01:09.424733926 +0000 UTC m=+1888.650174507" observedRunningTime="2026-02-19 19:01:10.043247547 +0000 UTC m=+1889.268688098" watchObservedRunningTime="2026-02-19 19:01:10.046533569 +0000 UTC m=+1889.271974130" Feb 19 19:01:10 crc kubenswrapper[4813]: I0219 19:01:10.831743 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:10 crc kubenswrapper[4813]: I0219 19:01:10.832371 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:10 crc kubenswrapper[4813]: I0219 19:01:10.881779 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:11 crc kubenswrapper[4813]: I0219 19:01:11.022172 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:11 crc kubenswrapper[4813]: I0219 19:01:11.022233 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:11 crc kubenswrapper[4813]: I0219 19:01:11.070589 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:11 crc kubenswrapper[4813]: I0219 19:01:11.071806 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:11 crc kubenswrapper[4813]: I0219 19:01:11.119787 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:13 crc kubenswrapper[4813]: I0219 19:01:13.287879 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s5g9j"] Feb 19 19:01:13 crc kubenswrapper[4813]: I0219 19:01:13.472772 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 19:01:13 crc kubenswrapper[4813]: E0219 19:01:13.473103 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:01:14 crc kubenswrapper[4813]: I0219 19:01:14.054188 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s5g9j" podUID="1dbf942e-9106-4e8d-85d4-9d3fe5d925f7" containerName="registry-server" containerID="cri-o://2e93bd88362f547bd0be160513cd2015d1fb3163edf0f731297f25e13c4d942f" gracePeriod=2 Feb 19 19:01:14 crc kubenswrapper[4813]: I0219 19:01:14.288039 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nqxk9"] Feb 19 19:01:14 crc kubenswrapper[4813]: I0219 19:01:14.288320 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nqxk9" podUID="a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae" containerName="registry-server" containerID="cri-o://50c199ae25dbc32ad984fa46f160ba33fcd34ac98d8d7ace21b3aa653dc32c95" gracePeriod=2 Feb 19 19:01:15 crc kubenswrapper[4813]: I0219 19:01:15.829764 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:15 crc kubenswrapper[4813]: I0219 19:01:15.830246 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:15 crc kubenswrapper[4813]: I0219 19:01:15.896358 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:16 crc kubenswrapper[4813]: I0219 19:01:16.124397 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:17 crc kubenswrapper[4813]: I0219 19:01:17.079058 4813 generic.go:334] "Generic (PLEG): container finished" podID="a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae" 
containerID="50c199ae25dbc32ad984fa46f160ba33fcd34ac98d8d7ace21b3aa653dc32c95" exitCode=0 Feb 19 19:01:17 crc kubenswrapper[4813]: I0219 19:01:17.079180 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqxk9" event={"ID":"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae","Type":"ContainerDied","Data":"50c199ae25dbc32ad984fa46f160ba33fcd34ac98d8d7ace21b3aa653dc32c95"} Feb 19 19:01:17 crc kubenswrapper[4813]: I0219 19:01:17.486638 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rxp4t"] Feb 19 19:01:18 crc kubenswrapper[4813]: I0219 19:01:18.088654 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rxp4t" podUID="a3b9a409-01cb-49a0-af45-fc4934ee981a" containerName="registry-server" containerID="cri-o://83ba235ff152eae71c1f778c92245b4caa264be02057c236bf63d34e96af9290" gracePeriod=2 Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.053915 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.059830 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.096385 4813 generic.go:334] "Generic (PLEG): container finished" podID="a3b9a409-01cb-49a0-af45-fc4934ee981a" containerID="83ba235ff152eae71c1f778c92245b4caa264be02057c236bf63d34e96af9290" exitCode=0 Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.096451 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rxp4t" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.096454 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rxp4t" event={"ID":"a3b9a409-01cb-49a0-af45-fc4934ee981a","Type":"ContainerDied","Data":"83ba235ff152eae71c1f778c92245b4caa264be02057c236bf63d34e96af9290"} Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.096503 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rxp4t" event={"ID":"a3b9a409-01cb-49a0-af45-fc4934ee981a","Type":"ContainerDied","Data":"ac7cd663373e705b765be68a7cae37067bc260cfd26bb0c45116198aeb9fc2c6"} Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.096524 4813 scope.go:117] "RemoveContainer" containerID="83ba235ff152eae71c1f778c92245b4caa264be02057c236bf63d34e96af9290" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.101255 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nqxk9" event={"ID":"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae","Type":"ContainerDied","Data":"950013b683a079c9b21896ec38790103e62483ddc8b2df68caf0230a707e3eae"} Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.101336 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nqxk9" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.107880 4813 generic.go:334] "Generic (PLEG): container finished" podID="1dbf942e-9106-4e8d-85d4-9d3fe5d925f7" containerID="2e93bd88362f547bd0be160513cd2015d1fb3163edf0f731297f25e13c4d942f" exitCode=0 Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.107923 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5g9j" event={"ID":"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7","Type":"ContainerDied","Data":"2e93bd88362f547bd0be160513cd2015d1fb3163edf0f731297f25e13c4d942f"} Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.112567 4813 scope.go:117] "RemoveContainer" containerID="4a7618c670a6eaf8e68a87e4efef39f9dca68465b00e1a9b12510c26c8593761" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.142091 4813 scope.go:117] "RemoveContainer" containerID="8aaa23f34df85d4bac82f331e6cf7165c7e8818bf4c88433ab8ae17ac6e335e3" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.160430 4813 scope.go:117] "RemoveContainer" containerID="83ba235ff152eae71c1f778c92245b4caa264be02057c236bf63d34e96af9290" Feb 19 19:01:19 crc kubenswrapper[4813]: E0219 19:01:19.160882 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83ba235ff152eae71c1f778c92245b4caa264be02057c236bf63d34e96af9290\": container with ID starting with 83ba235ff152eae71c1f778c92245b4caa264be02057c236bf63d34e96af9290 not found: ID does not exist" containerID="83ba235ff152eae71c1f778c92245b4caa264be02057c236bf63d34e96af9290" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.160913 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83ba235ff152eae71c1f778c92245b4caa264be02057c236bf63d34e96af9290"} err="failed to get container status \"83ba235ff152eae71c1f778c92245b4caa264be02057c236bf63d34e96af9290\": rpc 
error: code = NotFound desc = could not find container \"83ba235ff152eae71c1f778c92245b4caa264be02057c236bf63d34e96af9290\": container with ID starting with 83ba235ff152eae71c1f778c92245b4caa264be02057c236bf63d34e96af9290 not found: ID does not exist" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.160934 4813 scope.go:117] "RemoveContainer" containerID="4a7618c670a6eaf8e68a87e4efef39f9dca68465b00e1a9b12510c26c8593761" Feb 19 19:01:19 crc kubenswrapper[4813]: E0219 19:01:19.161341 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a7618c670a6eaf8e68a87e4efef39f9dca68465b00e1a9b12510c26c8593761\": container with ID starting with 4a7618c670a6eaf8e68a87e4efef39f9dca68465b00e1a9b12510c26c8593761 not found: ID does not exist" containerID="4a7618c670a6eaf8e68a87e4efef39f9dca68465b00e1a9b12510c26c8593761" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.161386 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a7618c670a6eaf8e68a87e4efef39f9dca68465b00e1a9b12510c26c8593761"} err="failed to get container status \"4a7618c670a6eaf8e68a87e4efef39f9dca68465b00e1a9b12510c26c8593761\": rpc error: code = NotFound desc = could not find container \"4a7618c670a6eaf8e68a87e4efef39f9dca68465b00e1a9b12510c26c8593761\": container with ID starting with 4a7618c670a6eaf8e68a87e4efef39f9dca68465b00e1a9b12510c26c8593761 not found: ID does not exist" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.161414 4813 scope.go:117] "RemoveContainer" containerID="8aaa23f34df85d4bac82f331e6cf7165c7e8818bf4c88433ab8ae17ac6e335e3" Feb 19 19:01:19 crc kubenswrapper[4813]: E0219 19:01:19.161946 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aaa23f34df85d4bac82f331e6cf7165c7e8818bf4c88433ab8ae17ac6e335e3\": container with ID starting with 
8aaa23f34df85d4bac82f331e6cf7165c7e8818bf4c88433ab8ae17ac6e335e3 not found: ID does not exist" containerID="8aaa23f34df85d4bac82f331e6cf7165c7e8818bf4c88433ab8ae17ac6e335e3" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.162016 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aaa23f34df85d4bac82f331e6cf7165c7e8818bf4c88433ab8ae17ac6e335e3"} err="failed to get container status \"8aaa23f34df85d4bac82f331e6cf7165c7e8818bf4c88433ab8ae17ac6e335e3\": rpc error: code = NotFound desc = could not find container \"8aaa23f34df85d4bac82f331e6cf7165c7e8818bf4c88433ab8ae17ac6e335e3\": container with ID starting with 8aaa23f34df85d4bac82f331e6cf7165c7e8818bf4c88433ab8ae17ac6e335e3 not found: ID does not exist" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.162054 4813 scope.go:117] "RemoveContainer" containerID="50c199ae25dbc32ad984fa46f160ba33fcd34ac98d8d7ace21b3aa653dc32c95" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.175228 4813 scope.go:117] "RemoveContainer" containerID="b545e7a9db8442b40e96b6f7770aff851a54eb782ea66a99520e39e4d41b1eeb" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.191575 4813 scope.go:117] "RemoveContainer" containerID="80b68bff31c2ad757a8dfbaaeceab088760dc6e74cac3826127a9be35848d938" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.225841 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-utilities\") pod \"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae\" (UID: \"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae\") " Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.225892 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b9a409-01cb-49a0-af45-fc4934ee981a-catalog-content\") pod \"a3b9a409-01cb-49a0-af45-fc4934ee981a\" (UID: 
\"a3b9a409-01cb-49a0-af45-fc4934ee981a\") " Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.225916 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b9a409-01cb-49a0-af45-fc4934ee981a-utilities\") pod \"a3b9a409-01cb-49a0-af45-fc4934ee981a\" (UID: \"a3b9a409-01cb-49a0-af45-fc4934ee981a\") " Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.225943 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml6rt\" (UniqueName: \"kubernetes.io/projected/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-kube-api-access-ml6rt\") pod \"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae\" (UID: \"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae\") " Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.226069 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2mm5\" (UniqueName: \"kubernetes.io/projected/a3b9a409-01cb-49a0-af45-fc4934ee981a-kube-api-access-b2mm5\") pod \"a3b9a409-01cb-49a0-af45-fc4934ee981a\" (UID: \"a3b9a409-01cb-49a0-af45-fc4934ee981a\") " Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.226084 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-catalog-content\") pod \"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae\" (UID: \"a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae\") " Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.226547 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-utilities" (OuterVolumeSpecName: "utilities") pod "a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae" (UID: "a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.226860 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3b9a409-01cb-49a0-af45-fc4934ee981a-utilities" (OuterVolumeSpecName: "utilities") pod "a3b9a409-01cb-49a0-af45-fc4934ee981a" (UID: "a3b9a409-01cb-49a0-af45-fc4934ee981a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.230787 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3b9a409-01cb-49a0-af45-fc4934ee981a-kube-api-access-b2mm5" (OuterVolumeSpecName: "kube-api-access-b2mm5") pod "a3b9a409-01cb-49a0-af45-fc4934ee981a" (UID: "a3b9a409-01cb-49a0-af45-fc4934ee981a"). InnerVolumeSpecName "kube-api-access-b2mm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.231025 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-kube-api-access-ml6rt" (OuterVolumeSpecName: "kube-api-access-ml6rt") pod "a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae" (UID: "a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae"). InnerVolumeSpecName "kube-api-access-ml6rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.280131 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae" (UID: "a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.327234 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2mm5\" (UniqueName: \"kubernetes.io/projected/a3b9a409-01cb-49a0-af45-fc4934ee981a-kube-api-access-b2mm5\") on node \"crc\" DevicePath \"\"" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.327269 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.327283 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.327291 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b9a409-01cb-49a0-af45-fc4934ee981a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.327300 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml6rt\" (UniqueName: \"kubernetes.io/projected/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae-kube-api-access-ml6rt\") on node \"crc\" DevicePath \"\"" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.347922 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3b9a409-01cb-49a0-af45-fc4934ee981a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3b9a409-01cb-49a0-af45-fc4934ee981a" (UID: "a3b9a409-01cb-49a0-af45-fc4934ee981a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.427988 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b9a409-01cb-49a0-af45-fc4934ee981a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.447734 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rxp4t"] Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.466039 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rxp4t"] Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.482856 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3b9a409-01cb-49a0-af45-fc4934ee981a" path="/var/lib/kubelet/pods/a3b9a409-01cb-49a0-af45-fc4934ee981a/volumes" Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.483824 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nqxk9"] Feb 19 19:01:19 crc kubenswrapper[4813]: I0219 19:01:19.483864 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nqxk9"] Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.011545 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.126405 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5g9j" event={"ID":"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7","Type":"ContainerDied","Data":"33410c4afcaeb98ed60214f46610838e6b6926e79af9d5ac8d8efc2337b2c61b"} Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.126460 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5g9j" Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.126473 4813 scope.go:117] "RemoveContainer" containerID="2e93bd88362f547bd0be160513cd2015d1fb3163edf0f731297f25e13c4d942f" Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.137574 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddwdq\" (UniqueName: \"kubernetes.io/projected/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-kube-api-access-ddwdq\") pod \"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7\" (UID: \"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7\") " Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.137717 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-utilities\") pod \"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7\" (UID: \"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7\") " Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.137779 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-catalog-content\") pod \"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7\" (UID: \"1dbf942e-9106-4e8d-85d4-9d3fe5d925f7\") " Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.141139 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-utilities" (OuterVolumeSpecName: "utilities") pod "1dbf942e-9106-4e8d-85d4-9d3fe5d925f7" (UID: "1dbf942e-9106-4e8d-85d4-9d3fe5d925f7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.142645 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-kube-api-access-ddwdq" (OuterVolumeSpecName: "kube-api-access-ddwdq") pod "1dbf942e-9106-4e8d-85d4-9d3fe5d925f7" (UID: "1dbf942e-9106-4e8d-85d4-9d3fe5d925f7"). InnerVolumeSpecName "kube-api-access-ddwdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.145350 4813 scope.go:117] "RemoveContainer" containerID="9f0c9c9ff30d9a9313baa56bdecad57010ef157dfa89b7cad11fa48515768603" Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.185389 4813 scope.go:117] "RemoveContainer" containerID="01a4bdfce1911168bfbf6d8ef6774ecd483dac841a6990a9f37c8a578539c66f" Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.206145 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1dbf942e-9106-4e8d-85d4-9d3fe5d925f7" (UID: "1dbf942e-9106-4e8d-85d4-9d3fe5d925f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.244379 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.244410 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.244420 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddwdq\" (UniqueName: \"kubernetes.io/projected/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7-kube-api-access-ddwdq\") on node \"crc\" DevicePath \"\"" Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.468847 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s5g9j"] Feb 19 19:01:20 crc kubenswrapper[4813]: I0219 19:01:20.476826 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s5g9j"] Feb 19 19:01:21 crc kubenswrapper[4813]: I0219 19:01:21.498460 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dbf942e-9106-4e8d-85d4-9d3fe5d925f7" path="/var/lib/kubelet/pods/1dbf942e-9106-4e8d-85d4-9d3fe5d925f7/volumes" Feb 19 19:01:21 crc kubenswrapper[4813]: I0219 19:01:21.500057 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae" path="/var/lib/kubelet/pods/a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae/volumes" Feb 19 19:01:27 crc kubenswrapper[4813]: I0219 19:01:27.472168 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 19:01:27 crc kubenswrapper[4813]: E0219 19:01:27.473255 4813 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:01:40 crc kubenswrapper[4813]: I0219 19:01:40.471868 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 19:01:41 crc kubenswrapper[4813]: I0219 19:01:41.292971 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"606780b8a9a2b55114d3f3041e15b2bd29017f1091089c1cf38daa6d09844911"} Feb 19 19:04:00 crc kubenswrapper[4813]: I0219 19:04:00.330653 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:04:00 crc kubenswrapper[4813]: I0219 19:04:00.331364 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:04:30 crc kubenswrapper[4813]: I0219 19:04:30.329699 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 19 19:04:30 crc kubenswrapper[4813]: I0219 19:04:30.330371 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:05:00 crc kubenswrapper[4813]: I0219 19:05:00.329733 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:05:00 crc kubenswrapper[4813]: I0219 19:05:00.330358 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:05:00 crc kubenswrapper[4813]: I0219 19:05:00.330426 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 19:05:00 crc kubenswrapper[4813]: I0219 19:05:00.331268 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"606780b8a9a2b55114d3f3041e15b2bd29017f1091089c1cf38daa6d09844911"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:05:00 crc kubenswrapper[4813]: I0219 19:05:00.331367 4813 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://606780b8a9a2b55114d3f3041e15b2bd29017f1091089c1cf38daa6d09844911" gracePeriod=600 Feb 19 19:05:01 crc kubenswrapper[4813]: I0219 19:05:01.110783 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="606780b8a9a2b55114d3f3041e15b2bd29017f1091089c1cf38daa6d09844911" exitCode=0 Feb 19 19:05:01 crc kubenswrapper[4813]: I0219 19:05:01.110834 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"606780b8a9a2b55114d3f3041e15b2bd29017f1091089c1cf38daa6d09844911"} Feb 19 19:05:01 crc kubenswrapper[4813]: I0219 19:05:01.111371 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c"} Feb 19 19:05:01 crc kubenswrapper[4813]: I0219 19:05:01.111415 4813 scope.go:117] "RemoveContainer" containerID="32fa982681255cf9db7b11890f519e686141e6a0b70e496465176e3f8f434a17" Feb 19 19:07:00 crc kubenswrapper[4813]: I0219 19:07:00.329870 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:07:00 crc kubenswrapper[4813]: I0219 19:07:00.330625 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:07:30 crc kubenswrapper[4813]: I0219 19:07:30.329896 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:07:30 crc kubenswrapper[4813]: I0219 19:07:30.330597 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.649434 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-89phc"] Feb 19 19:07:51 crc kubenswrapper[4813]: E0219 19:07:51.650729 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dbf942e-9106-4e8d-85d4-9d3fe5d925f7" containerName="extract-content" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.650747 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dbf942e-9106-4e8d-85d4-9d3fe5d925f7" containerName="extract-content" Feb 19 19:07:51 crc kubenswrapper[4813]: E0219 19:07:51.650766 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dbf942e-9106-4e8d-85d4-9d3fe5d925f7" containerName="extract-utilities" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.650774 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dbf942e-9106-4e8d-85d4-9d3fe5d925f7" containerName="extract-utilities" Feb 19 19:07:51 crc kubenswrapper[4813]: E0219 19:07:51.650800 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1dbf942e-9106-4e8d-85d4-9d3fe5d925f7" containerName="registry-server" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.650807 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dbf942e-9106-4e8d-85d4-9d3fe5d925f7" containerName="registry-server" Feb 19 19:07:51 crc kubenswrapper[4813]: E0219 19:07:51.650821 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b9a409-01cb-49a0-af45-fc4934ee981a" containerName="extract-utilities" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.650829 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b9a409-01cb-49a0-af45-fc4934ee981a" containerName="extract-utilities" Feb 19 19:07:51 crc kubenswrapper[4813]: E0219 19:07:51.650845 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b9a409-01cb-49a0-af45-fc4934ee981a" containerName="registry-server" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.650852 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b9a409-01cb-49a0-af45-fc4934ee981a" containerName="registry-server" Feb 19 19:07:51 crc kubenswrapper[4813]: E0219 19:07:51.650864 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae" containerName="registry-server" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.650875 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae" containerName="registry-server" Feb 19 19:07:51 crc kubenswrapper[4813]: E0219 19:07:51.650887 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b9a409-01cb-49a0-af45-fc4934ee981a" containerName="extract-content" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.650893 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b9a409-01cb-49a0-af45-fc4934ee981a" containerName="extract-content" Feb 19 19:07:51 crc kubenswrapper[4813]: E0219 19:07:51.650907 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae" containerName="extract-utilities" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.650914 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae" containerName="extract-utilities" Feb 19 19:07:51 crc kubenswrapper[4813]: E0219 19:07:51.650923 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae" containerName="extract-content" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.650930 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae" containerName="extract-content" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.651147 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dbf942e-9106-4e8d-85d4-9d3fe5d925f7" containerName="registry-server" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.651165 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3b9a409-01cb-49a0-af45-fc4934ee981a" containerName="registry-server" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.651176 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a32989df-f2b2-41bf-9b79-4fd1fa4ec8ae" containerName="registry-server" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.652490 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.665616 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89phc"] Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.743396 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ac04c2-1f2b-4030-b41d-fd28039968ce-catalog-content\") pod \"redhat-marketplace-89phc\" (UID: \"f7ac04c2-1f2b-4030-b41d-fd28039968ce\") " pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.743558 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ac04c2-1f2b-4030-b41d-fd28039968ce-utilities\") pod \"redhat-marketplace-89phc\" (UID: \"f7ac04c2-1f2b-4030-b41d-fd28039968ce\") " pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.743638 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwdtg\" (UniqueName: \"kubernetes.io/projected/f7ac04c2-1f2b-4030-b41d-fd28039968ce-kube-api-access-mwdtg\") pod \"redhat-marketplace-89phc\" (UID: \"f7ac04c2-1f2b-4030-b41d-fd28039968ce\") " pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.845099 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ac04c2-1f2b-4030-b41d-fd28039968ce-catalog-content\") pod \"redhat-marketplace-89phc\" (UID: \"f7ac04c2-1f2b-4030-b41d-fd28039968ce\") " pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.845209 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ac04c2-1f2b-4030-b41d-fd28039968ce-utilities\") pod \"redhat-marketplace-89phc\" (UID: \"f7ac04c2-1f2b-4030-b41d-fd28039968ce\") " pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.845614 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ac04c2-1f2b-4030-b41d-fd28039968ce-catalog-content\") pod \"redhat-marketplace-89phc\" (UID: \"f7ac04c2-1f2b-4030-b41d-fd28039968ce\") " pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.845656 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ac04c2-1f2b-4030-b41d-fd28039968ce-utilities\") pod \"redhat-marketplace-89phc\" (UID: \"f7ac04c2-1f2b-4030-b41d-fd28039968ce\") " pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.845692 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwdtg\" (UniqueName: \"kubernetes.io/projected/f7ac04c2-1f2b-4030-b41d-fd28039968ce-kube-api-access-mwdtg\") pod \"redhat-marketplace-89phc\" (UID: \"f7ac04c2-1f2b-4030-b41d-fd28039968ce\") " pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.882016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwdtg\" (UniqueName: \"kubernetes.io/projected/f7ac04c2-1f2b-4030-b41d-fd28039968ce-kube-api-access-mwdtg\") pod \"redhat-marketplace-89phc\" (UID: \"f7ac04c2-1f2b-4030-b41d-fd28039968ce\") " pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:07:51 crc kubenswrapper[4813]: I0219 19:07:51.987317 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:07:52 crc kubenswrapper[4813]: I0219 19:07:52.518893 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89phc"] Feb 19 19:07:52 crc kubenswrapper[4813]: I0219 19:07:52.674272 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89phc" event={"ID":"f7ac04c2-1f2b-4030-b41d-fd28039968ce","Type":"ContainerStarted","Data":"2b0ae19769a6a2197a29d1f310637a5532c7fe72e228445db5ecc7a631a6da0d"} Feb 19 19:07:53 crc kubenswrapper[4813]: I0219 19:07:53.685054 4813 generic.go:334] "Generic (PLEG): container finished" podID="f7ac04c2-1f2b-4030-b41d-fd28039968ce" containerID="a89f77acb3925cf814989f7f55305cbf52aaad653b512e3999050738eb3b63ec" exitCode=0 Feb 19 19:07:53 crc kubenswrapper[4813]: I0219 19:07:53.685170 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89phc" event={"ID":"f7ac04c2-1f2b-4030-b41d-fd28039968ce","Type":"ContainerDied","Data":"a89f77acb3925cf814989f7f55305cbf52aaad653b512e3999050738eb3b63ec"} Feb 19 19:07:53 crc kubenswrapper[4813]: I0219 19:07:53.688737 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 19:07:54 crc kubenswrapper[4813]: I0219 19:07:54.709197 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89phc" event={"ID":"f7ac04c2-1f2b-4030-b41d-fd28039968ce","Type":"ContainerStarted","Data":"e24054a6037535744cf8d0cdc6357fd1a9de48e5bb58d5a99617b636d34b9be4"} Feb 19 19:07:55 crc kubenswrapper[4813]: I0219 19:07:55.722333 4813 generic.go:334] "Generic (PLEG): container finished" podID="f7ac04c2-1f2b-4030-b41d-fd28039968ce" containerID="e24054a6037535744cf8d0cdc6357fd1a9de48e5bb58d5a99617b636d34b9be4" exitCode=0 Feb 19 19:07:55 crc kubenswrapper[4813]: I0219 19:07:55.722402 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-89phc" event={"ID":"f7ac04c2-1f2b-4030-b41d-fd28039968ce","Type":"ContainerDied","Data":"e24054a6037535744cf8d0cdc6357fd1a9de48e5bb58d5a99617b636d34b9be4"} Feb 19 19:07:56 crc kubenswrapper[4813]: I0219 19:07:56.732422 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89phc" event={"ID":"f7ac04c2-1f2b-4030-b41d-fd28039968ce","Type":"ContainerStarted","Data":"3a800873532467e08903ec0f315a0b085ec7453a6c804b9e95ecb55efc0dc476"} Feb 19 19:07:56 crc kubenswrapper[4813]: I0219 19:07:56.762063 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-89phc" podStartSLOduration=3.325732462 podStartE2EDuration="5.76204704s" podCreationTimestamp="2026-02-19 19:07:51 +0000 UTC" firstStartedPulling="2026-02-19 19:07:53.688313017 +0000 UTC m=+2292.913753598" lastFinishedPulling="2026-02-19 19:07:56.124627595 +0000 UTC m=+2295.350068176" observedRunningTime="2026-02-19 19:07:56.756062216 +0000 UTC m=+2295.981502767" watchObservedRunningTime="2026-02-19 19:07:56.76204704 +0000 UTC m=+2295.987487591" Feb 19 19:08:00 crc kubenswrapper[4813]: I0219 19:08:00.330218 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:08:00 crc kubenswrapper[4813]: I0219 19:08:00.330603 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:08:00 crc kubenswrapper[4813]: I0219 19:08:00.330668 4813 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 19:08:00 crc kubenswrapper[4813]: I0219 19:08:00.331592 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:08:00 crc kubenswrapper[4813]: I0219 19:08:00.331694 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" gracePeriod=600 Feb 19 19:08:00 crc kubenswrapper[4813]: E0219 19:08:00.467644 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:08:00 crc kubenswrapper[4813]: I0219 19:08:00.771345 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" exitCode=0 Feb 19 19:08:00 crc kubenswrapper[4813]: I0219 19:08:00.771760 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" 
event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c"} Feb 19 19:08:00 crc kubenswrapper[4813]: I0219 19:08:00.772075 4813 scope.go:117] "RemoveContainer" containerID="606780b8a9a2b55114d3f3041e15b2bd29017f1091089c1cf38daa6d09844911" Feb 19 19:08:00 crc kubenswrapper[4813]: I0219 19:08:00.772741 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:08:00 crc kubenswrapper[4813]: E0219 19:08:00.773274 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:08:01 crc kubenswrapper[4813]: I0219 19:08:01.987455 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:08:01 crc kubenswrapper[4813]: I0219 19:08:01.987881 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:08:02 crc kubenswrapper[4813]: I0219 19:08:02.051670 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:08:02 crc kubenswrapper[4813]: I0219 19:08:02.865721 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:08:02 crc kubenswrapper[4813]: I0219 19:08:02.935254 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-89phc"] Feb 19 19:08:04 crc kubenswrapper[4813]: I0219 19:08:04.812460 4813 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-89phc" podUID="f7ac04c2-1f2b-4030-b41d-fd28039968ce" containerName="registry-server" containerID="cri-o://3a800873532467e08903ec0f315a0b085ec7453a6c804b9e95ecb55efc0dc476" gracePeriod=2 Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.432675 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.555424 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ac04c2-1f2b-4030-b41d-fd28039968ce-catalog-content\") pod \"f7ac04c2-1f2b-4030-b41d-fd28039968ce\" (UID: \"f7ac04c2-1f2b-4030-b41d-fd28039968ce\") " Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.555511 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ac04c2-1f2b-4030-b41d-fd28039968ce-utilities\") pod \"f7ac04c2-1f2b-4030-b41d-fd28039968ce\" (UID: \"f7ac04c2-1f2b-4030-b41d-fd28039968ce\") " Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.555597 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwdtg\" (UniqueName: \"kubernetes.io/projected/f7ac04c2-1f2b-4030-b41d-fd28039968ce-kube-api-access-mwdtg\") pod \"f7ac04c2-1f2b-4030-b41d-fd28039968ce\" (UID: \"f7ac04c2-1f2b-4030-b41d-fd28039968ce\") " Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.557320 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7ac04c2-1f2b-4030-b41d-fd28039968ce-utilities" (OuterVolumeSpecName: "utilities") pod "f7ac04c2-1f2b-4030-b41d-fd28039968ce" (UID: "f7ac04c2-1f2b-4030-b41d-fd28039968ce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.568262 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7ac04c2-1f2b-4030-b41d-fd28039968ce-kube-api-access-mwdtg" (OuterVolumeSpecName: "kube-api-access-mwdtg") pod "f7ac04c2-1f2b-4030-b41d-fd28039968ce" (UID: "f7ac04c2-1f2b-4030-b41d-fd28039968ce"). InnerVolumeSpecName "kube-api-access-mwdtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.597413 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7ac04c2-1f2b-4030-b41d-fd28039968ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7ac04c2-1f2b-4030-b41d-fd28039968ce" (UID: "f7ac04c2-1f2b-4030-b41d-fd28039968ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.657783 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwdtg\" (UniqueName: \"kubernetes.io/projected/f7ac04c2-1f2b-4030-b41d-fd28039968ce-kube-api-access-mwdtg\") on node \"crc\" DevicePath \"\"" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.657873 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ac04c2-1f2b-4030-b41d-fd28039968ce-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.657894 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ac04c2-1f2b-4030-b41d-fd28039968ce-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.825744 4813 generic.go:334] "Generic (PLEG): container finished" podID="f7ac04c2-1f2b-4030-b41d-fd28039968ce" 
containerID="3a800873532467e08903ec0f315a0b085ec7453a6c804b9e95ecb55efc0dc476" exitCode=0 Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.825794 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89phc" event={"ID":"f7ac04c2-1f2b-4030-b41d-fd28039968ce","Type":"ContainerDied","Data":"3a800873532467e08903ec0f315a0b085ec7453a6c804b9e95ecb55efc0dc476"} Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.825838 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89phc" event={"ID":"f7ac04c2-1f2b-4030-b41d-fd28039968ce","Type":"ContainerDied","Data":"2b0ae19769a6a2197a29d1f310637a5532c7fe72e228445db5ecc7a631a6da0d"} Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.825858 4813 scope.go:117] "RemoveContainer" containerID="3a800873532467e08903ec0f315a0b085ec7453a6c804b9e95ecb55efc0dc476" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.825875 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89phc" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.860683 4813 scope.go:117] "RemoveContainer" containerID="e24054a6037535744cf8d0cdc6357fd1a9de48e5bb58d5a99617b636d34b9be4" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.875218 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-89phc"] Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.886522 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-89phc"] Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.890475 4813 scope.go:117] "RemoveContainer" containerID="a89f77acb3925cf814989f7f55305cbf52aaad653b512e3999050738eb3b63ec" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.925799 4813 scope.go:117] "RemoveContainer" containerID="3a800873532467e08903ec0f315a0b085ec7453a6c804b9e95ecb55efc0dc476" Feb 19 19:08:05 crc kubenswrapper[4813]: E0219 19:08:05.926760 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a800873532467e08903ec0f315a0b085ec7453a6c804b9e95ecb55efc0dc476\": container with ID starting with 3a800873532467e08903ec0f315a0b085ec7453a6c804b9e95ecb55efc0dc476 not found: ID does not exist" containerID="3a800873532467e08903ec0f315a0b085ec7453a6c804b9e95ecb55efc0dc476" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.926877 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a800873532467e08903ec0f315a0b085ec7453a6c804b9e95ecb55efc0dc476"} err="failed to get container status \"3a800873532467e08903ec0f315a0b085ec7453a6c804b9e95ecb55efc0dc476\": rpc error: code = NotFound desc = could not find container \"3a800873532467e08903ec0f315a0b085ec7453a6c804b9e95ecb55efc0dc476\": container with ID starting with 3a800873532467e08903ec0f315a0b085ec7453a6c804b9e95ecb55efc0dc476 not found: 
ID does not exist" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.927150 4813 scope.go:117] "RemoveContainer" containerID="e24054a6037535744cf8d0cdc6357fd1a9de48e5bb58d5a99617b636d34b9be4" Feb 19 19:08:05 crc kubenswrapper[4813]: E0219 19:08:05.927707 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e24054a6037535744cf8d0cdc6357fd1a9de48e5bb58d5a99617b636d34b9be4\": container with ID starting with e24054a6037535744cf8d0cdc6357fd1a9de48e5bb58d5a99617b636d34b9be4 not found: ID does not exist" containerID="e24054a6037535744cf8d0cdc6357fd1a9de48e5bb58d5a99617b636d34b9be4" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.927760 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e24054a6037535744cf8d0cdc6357fd1a9de48e5bb58d5a99617b636d34b9be4"} err="failed to get container status \"e24054a6037535744cf8d0cdc6357fd1a9de48e5bb58d5a99617b636d34b9be4\": rpc error: code = NotFound desc = could not find container \"e24054a6037535744cf8d0cdc6357fd1a9de48e5bb58d5a99617b636d34b9be4\": container with ID starting with e24054a6037535744cf8d0cdc6357fd1a9de48e5bb58d5a99617b636d34b9be4 not found: ID does not exist" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.927791 4813 scope.go:117] "RemoveContainer" containerID="a89f77acb3925cf814989f7f55305cbf52aaad653b512e3999050738eb3b63ec" Feb 19 19:08:05 crc kubenswrapper[4813]: E0219 19:08:05.928357 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a89f77acb3925cf814989f7f55305cbf52aaad653b512e3999050738eb3b63ec\": container with ID starting with a89f77acb3925cf814989f7f55305cbf52aaad653b512e3999050738eb3b63ec not found: ID does not exist" containerID="a89f77acb3925cf814989f7f55305cbf52aaad653b512e3999050738eb3b63ec" Feb 19 19:08:05 crc kubenswrapper[4813]: I0219 19:08:05.928399 4813 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a89f77acb3925cf814989f7f55305cbf52aaad653b512e3999050738eb3b63ec"} err="failed to get container status \"a89f77acb3925cf814989f7f55305cbf52aaad653b512e3999050738eb3b63ec\": rpc error: code = NotFound desc = could not find container \"a89f77acb3925cf814989f7f55305cbf52aaad653b512e3999050738eb3b63ec\": container with ID starting with a89f77acb3925cf814989f7f55305cbf52aaad653b512e3999050738eb3b63ec not found: ID does not exist" Feb 19 19:08:07 crc kubenswrapper[4813]: I0219 19:08:07.483009 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7ac04c2-1f2b-4030-b41d-fd28039968ce" path="/var/lib/kubelet/pods/f7ac04c2-1f2b-4030-b41d-fd28039968ce/volumes" Feb 19 19:08:14 crc kubenswrapper[4813]: I0219 19:08:14.472089 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:08:14 crc kubenswrapper[4813]: E0219 19:08:14.473242 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:08:25 crc kubenswrapper[4813]: I0219 19:08:25.471827 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:08:25 crc kubenswrapper[4813]: E0219 19:08:25.473030 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:08:36 crc kubenswrapper[4813]: I0219 19:08:36.473002 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:08:36 crc kubenswrapper[4813]: E0219 19:08:36.474402 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:08:47 crc kubenswrapper[4813]: I0219 19:08:47.472613 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:08:47 crc kubenswrapper[4813]: E0219 19:08:47.473811 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:09:00 crc kubenswrapper[4813]: I0219 19:09:00.471745 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:09:00 crc kubenswrapper[4813]: E0219 19:09:00.473101 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:09:15 crc kubenswrapper[4813]: I0219 19:09:15.472137 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:09:15 crc kubenswrapper[4813]: E0219 19:09:15.473151 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:09:30 crc kubenswrapper[4813]: I0219 19:09:30.474394 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:09:30 crc kubenswrapper[4813]: E0219 19:09:30.475400 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:09:41 crc kubenswrapper[4813]: I0219 19:09:41.477551 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:09:41 crc kubenswrapper[4813]: E0219 19:09:41.478470 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:09:53 crc kubenswrapper[4813]: I0219 19:09:53.472076 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:09:53 crc kubenswrapper[4813]: E0219 19:09:53.473056 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:10:06 crc kubenswrapper[4813]: I0219 19:10:06.471773 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:10:06 crc kubenswrapper[4813]: E0219 19:10:06.472499 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:10:21 crc kubenswrapper[4813]: I0219 19:10:21.476349 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:10:21 crc kubenswrapper[4813]: E0219 19:10:21.477431 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:10:36 crc kubenswrapper[4813]: I0219 19:10:36.471859 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:10:36 crc kubenswrapper[4813]: E0219 19:10:36.472764 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:10:47 crc kubenswrapper[4813]: I0219 19:10:47.471907 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:10:47 crc kubenswrapper[4813]: E0219 19:10:47.473161 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:11:01 crc kubenswrapper[4813]: I0219 19:11:01.479745 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:11:01 crc kubenswrapper[4813]: E0219 19:11:01.480794 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:11:07 crc kubenswrapper[4813]: I0219 19:11:07.908983 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q2582"] Feb 19 19:11:07 crc kubenswrapper[4813]: E0219 19:11:07.909833 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ac04c2-1f2b-4030-b41d-fd28039968ce" containerName="registry-server" Feb 19 19:11:07 crc kubenswrapper[4813]: I0219 19:11:07.909848 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ac04c2-1f2b-4030-b41d-fd28039968ce" containerName="registry-server" Feb 19 19:11:07 crc kubenswrapper[4813]: E0219 19:11:07.909861 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ac04c2-1f2b-4030-b41d-fd28039968ce" containerName="extract-utilities" Feb 19 19:11:07 crc kubenswrapper[4813]: I0219 19:11:07.909869 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ac04c2-1f2b-4030-b41d-fd28039968ce" containerName="extract-utilities" Feb 19 19:11:07 crc kubenswrapper[4813]: E0219 19:11:07.909900 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ac04c2-1f2b-4030-b41d-fd28039968ce" containerName="extract-content" Feb 19 19:11:07 crc kubenswrapper[4813]: I0219 19:11:07.909908 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ac04c2-1f2b-4030-b41d-fd28039968ce" containerName="extract-content" Feb 19 19:11:07 crc kubenswrapper[4813]: I0219 19:11:07.910119 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7ac04c2-1f2b-4030-b41d-fd28039968ce" containerName="registry-server" Feb 19 19:11:07 crc kubenswrapper[4813]: I0219 19:11:07.911315 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:07 crc kubenswrapper[4813]: I0219 19:11:07.933725 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q2582"] Feb 19 19:11:08 crc kubenswrapper[4813]: I0219 19:11:08.021277 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkdzx\" (UniqueName: \"kubernetes.io/projected/7f048841-1206-44e5-8d57-006863d40ad6-kube-api-access-gkdzx\") pod \"community-operators-q2582\" (UID: \"7f048841-1206-44e5-8d57-006863d40ad6\") " pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:08 crc kubenswrapper[4813]: I0219 19:11:08.021385 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f048841-1206-44e5-8d57-006863d40ad6-catalog-content\") pod \"community-operators-q2582\" (UID: \"7f048841-1206-44e5-8d57-006863d40ad6\") " pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:08 crc kubenswrapper[4813]: I0219 19:11:08.021454 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f048841-1206-44e5-8d57-006863d40ad6-utilities\") pod \"community-operators-q2582\" (UID: \"7f048841-1206-44e5-8d57-006863d40ad6\") " pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:08 crc kubenswrapper[4813]: I0219 19:11:08.122348 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f048841-1206-44e5-8d57-006863d40ad6-utilities\") pod \"community-operators-q2582\" (UID: \"7f048841-1206-44e5-8d57-006863d40ad6\") " pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:08 crc kubenswrapper[4813]: I0219 19:11:08.122408 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gkdzx\" (UniqueName: \"kubernetes.io/projected/7f048841-1206-44e5-8d57-006863d40ad6-kube-api-access-gkdzx\") pod \"community-operators-q2582\" (UID: \"7f048841-1206-44e5-8d57-006863d40ad6\") " pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:08 crc kubenswrapper[4813]: I0219 19:11:08.122462 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f048841-1206-44e5-8d57-006863d40ad6-catalog-content\") pod \"community-operators-q2582\" (UID: \"7f048841-1206-44e5-8d57-006863d40ad6\") " pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:08 crc kubenswrapper[4813]: I0219 19:11:08.122931 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f048841-1206-44e5-8d57-006863d40ad6-catalog-content\") pod \"community-operators-q2582\" (UID: \"7f048841-1206-44e5-8d57-006863d40ad6\") " pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:08 crc kubenswrapper[4813]: I0219 19:11:08.123187 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f048841-1206-44e5-8d57-006863d40ad6-utilities\") pod \"community-operators-q2582\" (UID: \"7f048841-1206-44e5-8d57-006863d40ad6\") " pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:08 crc kubenswrapper[4813]: I0219 19:11:08.152336 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkdzx\" (UniqueName: \"kubernetes.io/projected/7f048841-1206-44e5-8d57-006863d40ad6-kube-api-access-gkdzx\") pod \"community-operators-q2582\" (UID: \"7f048841-1206-44e5-8d57-006863d40ad6\") " pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:08 crc kubenswrapper[4813]: I0219 19:11:08.236880 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:08 crc kubenswrapper[4813]: I0219 19:11:08.670504 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q2582"] Feb 19 19:11:09 crc kubenswrapper[4813]: I0219 19:11:09.456032 4813 generic.go:334] "Generic (PLEG): container finished" podID="7f048841-1206-44e5-8d57-006863d40ad6" containerID="0677ed7a3803e69c0e8e78af77f919247178378e577a6bdb50168d16b1867874" exitCode=0 Feb 19 19:11:09 crc kubenswrapper[4813]: I0219 19:11:09.456257 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2582" event={"ID":"7f048841-1206-44e5-8d57-006863d40ad6","Type":"ContainerDied","Data":"0677ed7a3803e69c0e8e78af77f919247178378e577a6bdb50168d16b1867874"} Feb 19 19:11:09 crc kubenswrapper[4813]: I0219 19:11:09.456456 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2582" event={"ID":"7f048841-1206-44e5-8d57-006863d40ad6","Type":"ContainerStarted","Data":"fd0ffc5618717b5434d6e9b6418a92e221a9843486ee3a106026140d4e310338"} Feb 19 19:11:10 crc kubenswrapper[4813]: I0219 19:11:10.469328 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2582" event={"ID":"7f048841-1206-44e5-8d57-006863d40ad6","Type":"ContainerStarted","Data":"fb255d753b9326eb68ebf72bc08eb39edc3ed6d3b4845a8fb945e39f8bc46973"} Feb 19 19:11:11 crc kubenswrapper[4813]: I0219 19:11:11.484152 4813 generic.go:334] "Generic (PLEG): container finished" podID="7f048841-1206-44e5-8d57-006863d40ad6" containerID="fb255d753b9326eb68ebf72bc08eb39edc3ed6d3b4845a8fb945e39f8bc46973" exitCode=0 Feb 19 19:11:11 crc kubenswrapper[4813]: I0219 19:11:11.495076 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2582" 
event={"ID":"7f048841-1206-44e5-8d57-006863d40ad6","Type":"ContainerDied","Data":"fb255d753b9326eb68ebf72bc08eb39edc3ed6d3b4845a8fb945e39f8bc46973"} Feb 19 19:11:12 crc kubenswrapper[4813]: I0219 19:11:12.493430 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2582" event={"ID":"7f048841-1206-44e5-8d57-006863d40ad6","Type":"ContainerStarted","Data":"821b35a0edc1ed68e109806ac7ae3d1959cf19ba49dc33dfdc3e7d7f66095509"} Feb 19 19:11:12 crc kubenswrapper[4813]: I0219 19:11:12.515100 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q2582" podStartSLOduration=3.079112167 podStartE2EDuration="5.515073802s" podCreationTimestamp="2026-02-19 19:11:07 +0000 UTC" firstStartedPulling="2026-02-19 19:11:09.458441965 +0000 UTC m=+2488.683882546" lastFinishedPulling="2026-02-19 19:11:11.89440363 +0000 UTC m=+2491.119844181" observedRunningTime="2026-02-19 19:11:12.511847672 +0000 UTC m=+2491.737288243" watchObservedRunningTime="2026-02-19 19:11:12.515073802 +0000 UTC m=+2491.740514363" Feb 19 19:11:13 crc kubenswrapper[4813]: I0219 19:11:13.473193 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:11:13 crc kubenswrapper[4813]: E0219 19:11:13.473412 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:11:18 crc kubenswrapper[4813]: I0219 19:11:18.237124 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:18 crc 
kubenswrapper[4813]: I0219 19:11:18.238016 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:18 crc kubenswrapper[4813]: I0219 19:11:18.316420 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:18 crc kubenswrapper[4813]: I0219 19:11:18.605147 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:18 crc kubenswrapper[4813]: I0219 19:11:18.682981 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q2582"] Feb 19 19:11:20 crc kubenswrapper[4813]: I0219 19:11:20.575885 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q2582" podUID="7f048841-1206-44e5-8d57-006863d40ad6" containerName="registry-server" containerID="cri-o://821b35a0edc1ed68e109806ac7ae3d1959cf19ba49dc33dfdc3e7d7f66095509" gracePeriod=2 Feb 19 19:11:21 crc kubenswrapper[4813]: I0219 19:11:21.586341 4813 generic.go:334] "Generic (PLEG): container finished" podID="7f048841-1206-44e5-8d57-006863d40ad6" containerID="821b35a0edc1ed68e109806ac7ae3d1959cf19ba49dc33dfdc3e7d7f66095509" exitCode=0 Feb 19 19:11:21 crc kubenswrapper[4813]: I0219 19:11:21.586419 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2582" event={"ID":"7f048841-1206-44e5-8d57-006863d40ad6","Type":"ContainerDied","Data":"821b35a0edc1ed68e109806ac7ae3d1959cf19ba49dc33dfdc3e7d7f66095509"} Feb 19 19:11:21 crc kubenswrapper[4813]: I0219 19:11:21.586711 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2582" event={"ID":"7f048841-1206-44e5-8d57-006863d40ad6","Type":"ContainerDied","Data":"fd0ffc5618717b5434d6e9b6418a92e221a9843486ee3a106026140d4e310338"} Feb 19 
19:11:21 crc kubenswrapper[4813]: I0219 19:11:21.586729 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd0ffc5618717b5434d6e9b6418a92e221a9843486ee3a106026140d4e310338" Feb 19 19:11:21 crc kubenswrapper[4813]: I0219 19:11:21.601101 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:21 crc kubenswrapper[4813]: I0219 19:11:21.729377 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkdzx\" (UniqueName: \"kubernetes.io/projected/7f048841-1206-44e5-8d57-006863d40ad6-kube-api-access-gkdzx\") pod \"7f048841-1206-44e5-8d57-006863d40ad6\" (UID: \"7f048841-1206-44e5-8d57-006863d40ad6\") " Feb 19 19:11:21 crc kubenswrapper[4813]: I0219 19:11:21.729538 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f048841-1206-44e5-8d57-006863d40ad6-catalog-content\") pod \"7f048841-1206-44e5-8d57-006863d40ad6\" (UID: \"7f048841-1206-44e5-8d57-006863d40ad6\") " Feb 19 19:11:21 crc kubenswrapper[4813]: I0219 19:11:21.729606 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f048841-1206-44e5-8d57-006863d40ad6-utilities\") pod \"7f048841-1206-44e5-8d57-006863d40ad6\" (UID: \"7f048841-1206-44e5-8d57-006863d40ad6\") " Feb 19 19:11:21 crc kubenswrapper[4813]: I0219 19:11:21.731032 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f048841-1206-44e5-8d57-006863d40ad6-utilities" (OuterVolumeSpecName: "utilities") pod "7f048841-1206-44e5-8d57-006863d40ad6" (UID: "7f048841-1206-44e5-8d57-006863d40ad6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:11:21 crc kubenswrapper[4813]: I0219 19:11:21.735443 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f048841-1206-44e5-8d57-006863d40ad6-kube-api-access-gkdzx" (OuterVolumeSpecName: "kube-api-access-gkdzx") pod "7f048841-1206-44e5-8d57-006863d40ad6" (UID: "7f048841-1206-44e5-8d57-006863d40ad6"). InnerVolumeSpecName "kube-api-access-gkdzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:11:21 crc kubenswrapper[4813]: I0219 19:11:21.781468 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f048841-1206-44e5-8d57-006863d40ad6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f048841-1206-44e5-8d57-006863d40ad6" (UID: "7f048841-1206-44e5-8d57-006863d40ad6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:11:21 crc kubenswrapper[4813]: I0219 19:11:21.831373 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f048841-1206-44e5-8d57-006863d40ad6-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:11:21 crc kubenswrapper[4813]: I0219 19:11:21.831411 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkdzx\" (UniqueName: \"kubernetes.io/projected/7f048841-1206-44e5-8d57-006863d40ad6-kube-api-access-gkdzx\") on node \"crc\" DevicePath \"\"" Feb 19 19:11:21 crc kubenswrapper[4813]: I0219 19:11:21.831421 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f048841-1206-44e5-8d57-006863d40ad6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:11:22 crc kubenswrapper[4813]: I0219 19:11:22.594849 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q2582" Feb 19 19:11:22 crc kubenswrapper[4813]: I0219 19:11:22.649143 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q2582"] Feb 19 19:11:22 crc kubenswrapper[4813]: I0219 19:11:22.656179 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q2582"] Feb 19 19:11:23 crc kubenswrapper[4813]: I0219 19:11:23.483234 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f048841-1206-44e5-8d57-006863d40ad6" path="/var/lib/kubelet/pods/7f048841-1206-44e5-8d57-006863d40ad6/volumes" Feb 19 19:11:23 crc kubenswrapper[4813]: I0219 19:11:23.963967 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pjdbp"] Feb 19 19:11:23 crc kubenswrapper[4813]: E0219 19:11:23.964280 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f048841-1206-44e5-8d57-006863d40ad6" containerName="registry-server" Feb 19 19:11:23 crc kubenswrapper[4813]: I0219 19:11:23.964295 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f048841-1206-44e5-8d57-006863d40ad6" containerName="registry-server" Feb 19 19:11:23 crc kubenswrapper[4813]: E0219 19:11:23.964311 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f048841-1206-44e5-8d57-006863d40ad6" containerName="extract-content" Feb 19 19:11:23 crc kubenswrapper[4813]: I0219 19:11:23.964318 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f048841-1206-44e5-8d57-006863d40ad6" containerName="extract-content" Feb 19 19:11:23 crc kubenswrapper[4813]: E0219 19:11:23.964344 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f048841-1206-44e5-8d57-006863d40ad6" containerName="extract-utilities" Feb 19 19:11:23 crc kubenswrapper[4813]: I0219 19:11:23.964353 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7f048841-1206-44e5-8d57-006863d40ad6" containerName="extract-utilities" Feb 19 19:11:23 crc kubenswrapper[4813]: I0219 19:11:23.964510 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f048841-1206-44e5-8d57-006863d40ad6" containerName="registry-server" Feb 19 19:11:23 crc kubenswrapper[4813]: I0219 19:11:23.965616 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:23 crc kubenswrapper[4813]: I0219 19:11:23.984332 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pjdbp"] Feb 19 19:11:24 crc kubenswrapper[4813]: I0219 19:11:24.065230 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r4sc\" (UniqueName: \"kubernetes.io/projected/a1995c69-4a83-44cb-86d3-da37437d4b3b-kube-api-access-4r4sc\") pod \"certified-operators-pjdbp\" (UID: \"a1995c69-4a83-44cb-86d3-da37437d4b3b\") " pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:24 crc kubenswrapper[4813]: I0219 19:11:24.065298 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1995c69-4a83-44cb-86d3-da37437d4b3b-utilities\") pod \"certified-operators-pjdbp\" (UID: \"a1995c69-4a83-44cb-86d3-da37437d4b3b\") " pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:24 crc kubenswrapper[4813]: I0219 19:11:24.065372 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1995c69-4a83-44cb-86d3-da37437d4b3b-catalog-content\") pod \"certified-operators-pjdbp\" (UID: \"a1995c69-4a83-44cb-86d3-da37437d4b3b\") " pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:24 crc kubenswrapper[4813]: I0219 19:11:24.166224 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4r4sc\" (UniqueName: \"kubernetes.io/projected/a1995c69-4a83-44cb-86d3-da37437d4b3b-kube-api-access-4r4sc\") pod \"certified-operators-pjdbp\" (UID: \"a1995c69-4a83-44cb-86d3-da37437d4b3b\") " pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:24 crc kubenswrapper[4813]: I0219 19:11:24.166301 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1995c69-4a83-44cb-86d3-da37437d4b3b-utilities\") pod \"certified-operators-pjdbp\" (UID: \"a1995c69-4a83-44cb-86d3-da37437d4b3b\") " pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:24 crc kubenswrapper[4813]: I0219 19:11:24.166366 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1995c69-4a83-44cb-86d3-da37437d4b3b-catalog-content\") pod \"certified-operators-pjdbp\" (UID: \"a1995c69-4a83-44cb-86d3-da37437d4b3b\") " pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:24 crc kubenswrapper[4813]: I0219 19:11:24.166883 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1995c69-4a83-44cb-86d3-da37437d4b3b-utilities\") pod \"certified-operators-pjdbp\" (UID: \"a1995c69-4a83-44cb-86d3-da37437d4b3b\") " pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:24 crc kubenswrapper[4813]: I0219 19:11:24.166933 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1995c69-4a83-44cb-86d3-da37437d4b3b-catalog-content\") pod \"certified-operators-pjdbp\" (UID: \"a1995c69-4a83-44cb-86d3-da37437d4b3b\") " pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:24 crc kubenswrapper[4813]: I0219 19:11:24.184175 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4r4sc\" (UniqueName: \"kubernetes.io/projected/a1995c69-4a83-44cb-86d3-da37437d4b3b-kube-api-access-4r4sc\") pod \"certified-operators-pjdbp\" (UID: \"a1995c69-4a83-44cb-86d3-da37437d4b3b\") " pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:24 crc kubenswrapper[4813]: I0219 19:11:24.280006 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:24 crc kubenswrapper[4813]: I0219 19:11:24.472080 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:11:24 crc kubenswrapper[4813]: E0219 19:11:24.472497 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:11:24 crc kubenswrapper[4813]: I0219 19:11:24.787311 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pjdbp"] Feb 19 19:11:25 crc kubenswrapper[4813]: I0219 19:11:25.641843 4813 generic.go:334] "Generic (PLEG): container finished" podID="a1995c69-4a83-44cb-86d3-da37437d4b3b" containerID="f81de6140d495418421428d661e5f63097acd795880b454553741e4a7e6556c5" exitCode=0 Feb 19 19:11:25 crc kubenswrapper[4813]: I0219 19:11:25.641943 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjdbp" event={"ID":"a1995c69-4a83-44cb-86d3-da37437d4b3b","Type":"ContainerDied","Data":"f81de6140d495418421428d661e5f63097acd795880b454553741e4a7e6556c5"} Feb 19 19:11:25 crc kubenswrapper[4813]: I0219 19:11:25.642123 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-pjdbp" event={"ID":"a1995c69-4a83-44cb-86d3-da37437d4b3b","Type":"ContainerStarted","Data":"6ead7fcb4b15b54d07ae018fb3ae4c045b5dcc6eed34664562464b475b2e6e14"} Feb 19 19:11:26 crc kubenswrapper[4813]: I0219 19:11:26.654222 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjdbp" event={"ID":"a1995c69-4a83-44cb-86d3-da37437d4b3b","Type":"ContainerStarted","Data":"6f6c9ca31c5313e12cbfd1b47852fc4f6b15c19e306fedc6a4fc9b1bb0a892c0"} Feb 19 19:11:27 crc kubenswrapper[4813]: I0219 19:11:27.667063 4813 generic.go:334] "Generic (PLEG): container finished" podID="a1995c69-4a83-44cb-86d3-da37437d4b3b" containerID="6f6c9ca31c5313e12cbfd1b47852fc4f6b15c19e306fedc6a4fc9b1bb0a892c0" exitCode=0 Feb 19 19:11:27 crc kubenswrapper[4813]: I0219 19:11:27.667129 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjdbp" event={"ID":"a1995c69-4a83-44cb-86d3-da37437d4b3b","Type":"ContainerDied","Data":"6f6c9ca31c5313e12cbfd1b47852fc4f6b15c19e306fedc6a4fc9b1bb0a892c0"} Feb 19 19:11:28 crc kubenswrapper[4813]: I0219 19:11:28.678346 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjdbp" event={"ID":"a1995c69-4a83-44cb-86d3-da37437d4b3b","Type":"ContainerStarted","Data":"962dae7a55040e90c1045affc6ac896b753b96bfbdb06374abeb5e4cc847b2d2"} Feb 19 19:11:28 crc kubenswrapper[4813]: I0219 19:11:28.722202 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pjdbp" podStartSLOduration=3.244614199 podStartE2EDuration="5.722173439s" podCreationTimestamp="2026-02-19 19:11:23 +0000 UTC" firstStartedPulling="2026-02-19 19:11:25.643607394 +0000 UTC m=+2504.869047935" lastFinishedPulling="2026-02-19 19:11:28.121166594 +0000 UTC m=+2507.346607175" observedRunningTime="2026-02-19 19:11:28.710507228 +0000 UTC m=+2507.935947799" 
watchObservedRunningTime="2026-02-19 19:11:28.722173439 +0000 UTC m=+2507.947613990" Feb 19 19:11:34 crc kubenswrapper[4813]: I0219 19:11:34.280269 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:34 crc kubenswrapper[4813]: I0219 19:11:34.280913 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:34 crc kubenswrapper[4813]: I0219 19:11:34.355225 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:34 crc kubenswrapper[4813]: I0219 19:11:34.792161 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:34 crc kubenswrapper[4813]: I0219 19:11:34.861914 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pjdbp"] Feb 19 19:11:35 crc kubenswrapper[4813]: I0219 19:11:35.473349 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:11:35 crc kubenswrapper[4813]: E0219 19:11:35.473828 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:11:36 crc kubenswrapper[4813]: I0219 19:11:36.745934 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pjdbp" podUID="a1995c69-4a83-44cb-86d3-da37437d4b3b" containerName="registry-server" 
containerID="cri-o://962dae7a55040e90c1045affc6ac896b753b96bfbdb06374abeb5e4cc847b2d2" gracePeriod=2 Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.220650 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.366409 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1995c69-4a83-44cb-86d3-da37437d4b3b-utilities\") pod \"a1995c69-4a83-44cb-86d3-da37437d4b3b\" (UID: \"a1995c69-4a83-44cb-86d3-da37437d4b3b\") " Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.366581 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1995c69-4a83-44cb-86d3-da37437d4b3b-catalog-content\") pod \"a1995c69-4a83-44cb-86d3-da37437d4b3b\" (UID: \"a1995c69-4a83-44cb-86d3-da37437d4b3b\") " Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.366637 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r4sc\" (UniqueName: \"kubernetes.io/projected/a1995c69-4a83-44cb-86d3-da37437d4b3b-kube-api-access-4r4sc\") pod \"a1995c69-4a83-44cb-86d3-da37437d4b3b\" (UID: \"a1995c69-4a83-44cb-86d3-da37437d4b3b\") " Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.368644 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1995c69-4a83-44cb-86d3-da37437d4b3b-utilities" (OuterVolumeSpecName: "utilities") pod "a1995c69-4a83-44cb-86d3-da37437d4b3b" (UID: "a1995c69-4a83-44cb-86d3-da37437d4b3b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.377740 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1995c69-4a83-44cb-86d3-da37437d4b3b-kube-api-access-4r4sc" (OuterVolumeSpecName: "kube-api-access-4r4sc") pod "a1995c69-4a83-44cb-86d3-da37437d4b3b" (UID: "a1995c69-4a83-44cb-86d3-da37437d4b3b"). InnerVolumeSpecName "kube-api-access-4r4sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.468345 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r4sc\" (UniqueName: \"kubernetes.io/projected/a1995c69-4a83-44cb-86d3-da37437d4b3b-kube-api-access-4r4sc\") on node \"crc\" DevicePath \"\"" Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.468387 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1995c69-4a83-44cb-86d3-da37437d4b3b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.757180 4813 generic.go:334] "Generic (PLEG): container finished" podID="a1995c69-4a83-44cb-86d3-da37437d4b3b" containerID="962dae7a55040e90c1045affc6ac896b753b96bfbdb06374abeb5e4cc847b2d2" exitCode=0 Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.757222 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjdbp" event={"ID":"a1995c69-4a83-44cb-86d3-da37437d4b3b","Type":"ContainerDied","Data":"962dae7a55040e90c1045affc6ac896b753b96bfbdb06374abeb5e4cc847b2d2"} Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.757246 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjdbp" event={"ID":"a1995c69-4a83-44cb-86d3-da37437d4b3b","Type":"ContainerDied","Data":"6ead7fcb4b15b54d07ae018fb3ae4c045b5dcc6eed34664562464b475b2e6e14"} Feb 19 19:11:37 crc kubenswrapper[4813]: 
I0219 19:11:37.757263 4813 scope.go:117] "RemoveContainer" containerID="962dae7a55040e90c1045affc6ac896b753b96bfbdb06374abeb5e4cc847b2d2" Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.757289 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pjdbp" Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.787372 4813 scope.go:117] "RemoveContainer" containerID="6f6c9ca31c5313e12cbfd1b47852fc4f6b15c19e306fedc6a4fc9b1bb0a892c0" Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.816916 4813 scope.go:117] "RemoveContainer" containerID="f81de6140d495418421428d661e5f63097acd795880b454553741e4a7e6556c5" Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.839904 4813 scope.go:117] "RemoveContainer" containerID="962dae7a55040e90c1045affc6ac896b753b96bfbdb06374abeb5e4cc847b2d2" Feb 19 19:11:37 crc kubenswrapper[4813]: E0219 19:11:37.840503 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"962dae7a55040e90c1045affc6ac896b753b96bfbdb06374abeb5e4cc847b2d2\": container with ID starting with 962dae7a55040e90c1045affc6ac896b753b96bfbdb06374abeb5e4cc847b2d2 not found: ID does not exist" containerID="962dae7a55040e90c1045affc6ac896b753b96bfbdb06374abeb5e4cc847b2d2" Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.840540 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"962dae7a55040e90c1045affc6ac896b753b96bfbdb06374abeb5e4cc847b2d2"} err="failed to get container status \"962dae7a55040e90c1045affc6ac896b753b96bfbdb06374abeb5e4cc847b2d2\": rpc error: code = NotFound desc = could not find container \"962dae7a55040e90c1045affc6ac896b753b96bfbdb06374abeb5e4cc847b2d2\": container with ID starting with 962dae7a55040e90c1045affc6ac896b753b96bfbdb06374abeb5e4cc847b2d2 not found: ID does not exist" Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.840564 4813 
scope.go:117] "RemoveContainer" containerID="6f6c9ca31c5313e12cbfd1b47852fc4f6b15c19e306fedc6a4fc9b1bb0a892c0" Feb 19 19:11:37 crc kubenswrapper[4813]: E0219 19:11:37.841040 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f6c9ca31c5313e12cbfd1b47852fc4f6b15c19e306fedc6a4fc9b1bb0a892c0\": container with ID starting with 6f6c9ca31c5313e12cbfd1b47852fc4f6b15c19e306fedc6a4fc9b1bb0a892c0 not found: ID does not exist" containerID="6f6c9ca31c5313e12cbfd1b47852fc4f6b15c19e306fedc6a4fc9b1bb0a892c0" Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.841108 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f6c9ca31c5313e12cbfd1b47852fc4f6b15c19e306fedc6a4fc9b1bb0a892c0"} err="failed to get container status \"6f6c9ca31c5313e12cbfd1b47852fc4f6b15c19e306fedc6a4fc9b1bb0a892c0\": rpc error: code = NotFound desc = could not find container \"6f6c9ca31c5313e12cbfd1b47852fc4f6b15c19e306fedc6a4fc9b1bb0a892c0\": container with ID starting with 6f6c9ca31c5313e12cbfd1b47852fc4f6b15c19e306fedc6a4fc9b1bb0a892c0 not found: ID does not exist" Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.841147 4813 scope.go:117] "RemoveContainer" containerID="f81de6140d495418421428d661e5f63097acd795880b454553741e4a7e6556c5" Feb 19 19:11:37 crc kubenswrapper[4813]: E0219 19:11:37.841689 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f81de6140d495418421428d661e5f63097acd795880b454553741e4a7e6556c5\": container with ID starting with f81de6140d495418421428d661e5f63097acd795880b454553741e4a7e6556c5 not found: ID does not exist" containerID="f81de6140d495418421428d661e5f63097acd795880b454553741e4a7e6556c5" Feb 19 19:11:37 crc kubenswrapper[4813]: I0219 19:11:37.841719 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f81de6140d495418421428d661e5f63097acd795880b454553741e4a7e6556c5"} err="failed to get container status \"f81de6140d495418421428d661e5f63097acd795880b454553741e4a7e6556c5\": rpc error: code = NotFound desc = could not find container \"f81de6140d495418421428d661e5f63097acd795880b454553741e4a7e6556c5\": container with ID starting with f81de6140d495418421428d661e5f63097acd795880b454553741e4a7e6556c5 not found: ID does not exist" Feb 19 19:11:38 crc kubenswrapper[4813]: I0219 19:11:38.130044 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1995c69-4a83-44cb-86d3-da37437d4b3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1995c69-4a83-44cb-86d3-da37437d4b3b" (UID: "a1995c69-4a83-44cb-86d3-da37437d4b3b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:11:38 crc kubenswrapper[4813]: I0219 19:11:38.179858 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1995c69-4a83-44cb-86d3-da37437d4b3b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:11:38 crc kubenswrapper[4813]: I0219 19:11:38.410732 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pjdbp"] Feb 19 19:11:38 crc kubenswrapper[4813]: I0219 19:11:38.422340 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pjdbp"] Feb 19 19:11:39 crc kubenswrapper[4813]: I0219 19:11:39.488844 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1995c69-4a83-44cb-86d3-da37437d4b3b" path="/var/lib/kubelet/pods/a1995c69-4a83-44cb-86d3-da37437d4b3b/volumes" Feb 19 19:11:47 crc kubenswrapper[4813]: I0219 19:11:47.472266 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:11:47 crc kubenswrapper[4813]: E0219 
19:11:47.473460 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:12:02 crc kubenswrapper[4813]: I0219 19:12:02.472643 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:12:02 crc kubenswrapper[4813]: E0219 19:12:02.473783 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:12:15 crc kubenswrapper[4813]: I0219 19:12:15.472760 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:12:15 crc kubenswrapper[4813]: E0219 19:12:15.473489 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:12:26 crc kubenswrapper[4813]: I0219 19:12:26.471400 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:12:26 crc 
kubenswrapper[4813]: E0219 19:12:26.472372 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:12:37 crc kubenswrapper[4813]: I0219 19:12:37.471710 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:12:37 crc kubenswrapper[4813]: E0219 19:12:37.473203 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:12:50 crc kubenswrapper[4813]: I0219 19:12:50.474373 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:12:50 crc kubenswrapper[4813]: E0219 19:12:50.475547 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:13:02 crc kubenswrapper[4813]: I0219 19:13:02.471534 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 
19 19:13:03 crc kubenswrapper[4813]: I0219 19:13:03.523094 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"d4a4a27017153c49bbc7828a56970db751c1584d8f12cdffdf0767bd096d7035"} Feb 19 19:13:35 crc kubenswrapper[4813]: I0219 19:13:35.860001 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-46klh"] Feb 19 19:13:35 crc kubenswrapper[4813]: E0219 19:13:35.866267 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1995c69-4a83-44cb-86d3-da37437d4b3b" containerName="registry-server" Feb 19 19:13:35 crc kubenswrapper[4813]: I0219 19:13:35.866308 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1995c69-4a83-44cb-86d3-da37437d4b3b" containerName="registry-server" Feb 19 19:13:35 crc kubenswrapper[4813]: E0219 19:13:35.866322 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1995c69-4a83-44cb-86d3-da37437d4b3b" containerName="extract-content" Feb 19 19:13:35 crc kubenswrapper[4813]: I0219 19:13:35.866331 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1995c69-4a83-44cb-86d3-da37437d4b3b" containerName="extract-content" Feb 19 19:13:35 crc kubenswrapper[4813]: E0219 19:13:35.866362 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1995c69-4a83-44cb-86d3-da37437d4b3b" containerName="extract-utilities" Feb 19 19:13:35 crc kubenswrapper[4813]: I0219 19:13:35.866370 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1995c69-4a83-44cb-86d3-da37437d4b3b" containerName="extract-utilities" Feb 19 19:13:35 crc kubenswrapper[4813]: I0219 19:13:35.866607 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1995c69-4a83-44cb-86d3-da37437d4b3b" containerName="registry-server" Feb 19 19:13:35 crc kubenswrapper[4813]: I0219 19:13:35.867903 4813 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:35 crc kubenswrapper[4813]: I0219 19:13:35.871082 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-46klh"] Feb 19 19:13:35 crc kubenswrapper[4813]: I0219 19:13:35.962876 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-utilities\") pod \"redhat-operators-46klh\" (UID: \"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed\") " pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:35 crc kubenswrapper[4813]: I0219 19:13:35.963210 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bzmb\" (UniqueName: \"kubernetes.io/projected/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-kube-api-access-9bzmb\") pod \"redhat-operators-46klh\" (UID: \"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed\") " pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:35 crc kubenswrapper[4813]: I0219 19:13:35.963243 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-catalog-content\") pod \"redhat-operators-46klh\" (UID: \"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed\") " pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:36 crc kubenswrapper[4813]: I0219 19:13:36.064518 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-utilities\") pod \"redhat-operators-46klh\" (UID: \"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed\") " pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:36 crc kubenswrapper[4813]: I0219 19:13:36.064637 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9bzmb\" (UniqueName: \"kubernetes.io/projected/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-kube-api-access-9bzmb\") pod \"redhat-operators-46klh\" (UID: \"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed\") " pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:36 crc kubenswrapper[4813]: I0219 19:13:36.064720 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-catalog-content\") pod \"redhat-operators-46klh\" (UID: \"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed\") " pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:36 crc kubenswrapper[4813]: I0219 19:13:36.065503 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-utilities\") pod \"redhat-operators-46klh\" (UID: \"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed\") " pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:36 crc kubenswrapper[4813]: I0219 19:13:36.065541 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-catalog-content\") pod \"redhat-operators-46klh\" (UID: \"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed\") " pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:36 crc kubenswrapper[4813]: I0219 19:13:36.083460 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bzmb\" (UniqueName: \"kubernetes.io/projected/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-kube-api-access-9bzmb\") pod \"redhat-operators-46klh\" (UID: \"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed\") " pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:36 crc kubenswrapper[4813]: I0219 19:13:36.198294 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:36 crc kubenswrapper[4813]: I0219 19:13:36.436480 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-46klh"] Feb 19 19:13:36 crc kubenswrapper[4813]: I0219 19:13:36.789779 4813 generic.go:334] "Generic (PLEG): container finished" podID="8eaf6d0f-1f75-49b7-ab16-66199c0d38ed" containerID="27debdd617600310b033ac8be030c55695d05924747943a1b4e618829f2e3b15" exitCode=0 Feb 19 19:13:36 crc kubenswrapper[4813]: I0219 19:13:36.789850 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46klh" event={"ID":"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed","Type":"ContainerDied","Data":"27debdd617600310b033ac8be030c55695d05924747943a1b4e618829f2e3b15"} Feb 19 19:13:36 crc kubenswrapper[4813]: I0219 19:13:36.789931 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46klh" event={"ID":"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed","Type":"ContainerStarted","Data":"4675754eb2c4a2824c264564d97a458bacf15f88b7c2f3bc20e70d481ef55648"} Feb 19 19:13:36 crc kubenswrapper[4813]: I0219 19:13:36.791884 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 19:13:37 crc kubenswrapper[4813]: I0219 19:13:37.799660 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46klh" event={"ID":"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed","Type":"ContainerStarted","Data":"41451d88062bd623e108956ade8b2a444ce266df8c383e1bc7536f5fc51d5b03"} Feb 19 19:13:38 crc kubenswrapper[4813]: I0219 19:13:38.808329 4813 generic.go:334] "Generic (PLEG): container finished" podID="8eaf6d0f-1f75-49b7-ab16-66199c0d38ed" containerID="41451d88062bd623e108956ade8b2a444ce266df8c383e1bc7536f5fc51d5b03" exitCode=0 Feb 19 19:13:38 crc kubenswrapper[4813]: I0219 19:13:38.808390 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-46klh" event={"ID":"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed","Type":"ContainerDied","Data":"41451d88062bd623e108956ade8b2a444ce266df8c383e1bc7536f5fc51d5b03"} Feb 19 19:13:39 crc kubenswrapper[4813]: I0219 19:13:39.818168 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46klh" event={"ID":"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed","Type":"ContainerStarted","Data":"6aaf0273559565b25f104c052c05d65262149eb2b3dcca4bb056b720541e0bc3"} Feb 19 19:13:39 crc kubenswrapper[4813]: I0219 19:13:39.850095 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-46klh" podStartSLOduration=2.182828575 podStartE2EDuration="4.850077531s" podCreationTimestamp="2026-02-19 19:13:35 +0000 UTC" firstStartedPulling="2026-02-19 19:13:36.791559126 +0000 UTC m=+2636.016999677" lastFinishedPulling="2026-02-19 19:13:39.458808062 +0000 UTC m=+2638.684248633" observedRunningTime="2026-02-19 19:13:39.845870312 +0000 UTC m=+2639.071310873" watchObservedRunningTime="2026-02-19 19:13:39.850077531 +0000 UTC m=+2639.075518082" Feb 19 19:13:46 crc kubenswrapper[4813]: I0219 19:13:46.198505 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:46 crc kubenswrapper[4813]: I0219 19:13:46.198940 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:46 crc kubenswrapper[4813]: I0219 19:13:46.249451 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:46 crc kubenswrapper[4813]: I0219 19:13:46.942673 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:46 crc kubenswrapper[4813]: I0219 19:13:46.994696 4813 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-operators-46klh"] Feb 19 19:13:48 crc kubenswrapper[4813]: I0219 19:13:48.881074 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-46klh" podUID="8eaf6d0f-1f75-49b7-ab16-66199c0d38ed" containerName="registry-server" containerID="cri-o://6aaf0273559565b25f104c052c05d65262149eb2b3dcca4bb056b720541e0bc3" gracePeriod=2 Feb 19 19:13:49 crc kubenswrapper[4813]: I0219 19:13:49.893489 4813 generic.go:334] "Generic (PLEG): container finished" podID="8eaf6d0f-1f75-49b7-ab16-66199c0d38ed" containerID="6aaf0273559565b25f104c052c05d65262149eb2b3dcca4bb056b720541e0bc3" exitCode=0 Feb 19 19:13:49 crc kubenswrapper[4813]: I0219 19:13:49.893551 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46klh" event={"ID":"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed","Type":"ContainerDied","Data":"6aaf0273559565b25f104c052c05d65262149eb2b3dcca4bb056b720541e0bc3"} Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.435550 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.467605 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-utilities\") pod \"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed\" (UID: \"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed\") " Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.467694 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bzmb\" (UniqueName: \"kubernetes.io/projected/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-kube-api-access-9bzmb\") pod \"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed\" (UID: \"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed\") " Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.467735 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-catalog-content\") pod \"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed\" (UID: \"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed\") " Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.469399 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-utilities" (OuterVolumeSpecName: "utilities") pod "8eaf6d0f-1f75-49b7-ab16-66199c0d38ed" (UID: "8eaf6d0f-1f75-49b7-ab16-66199c0d38ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.482031 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-kube-api-access-9bzmb" (OuterVolumeSpecName: "kube-api-access-9bzmb") pod "8eaf6d0f-1f75-49b7-ab16-66199c0d38ed" (UID: "8eaf6d0f-1f75-49b7-ab16-66199c0d38ed"). InnerVolumeSpecName "kube-api-access-9bzmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.569581 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.569616 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bzmb\" (UniqueName: \"kubernetes.io/projected/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-kube-api-access-9bzmb\") on node \"crc\" DevicePath \"\"" Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.615060 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8eaf6d0f-1f75-49b7-ab16-66199c0d38ed" (UID: "8eaf6d0f-1f75-49b7-ab16-66199c0d38ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.670814 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.904254 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-46klh" event={"ID":"8eaf6d0f-1f75-49b7-ab16-66199c0d38ed","Type":"ContainerDied","Data":"4675754eb2c4a2824c264564d97a458bacf15f88b7c2f3bc20e70d481ef55648"} Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.904314 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-46klh" Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.905549 4813 scope.go:117] "RemoveContainer" containerID="6aaf0273559565b25f104c052c05d65262149eb2b3dcca4bb056b720541e0bc3" Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.926818 4813 scope.go:117] "RemoveContainer" containerID="41451d88062bd623e108956ade8b2a444ce266df8c383e1bc7536f5fc51d5b03" Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.946624 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-46klh"] Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.952519 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-46klh"] Feb 19 19:13:50 crc kubenswrapper[4813]: I0219 19:13:50.966906 4813 scope.go:117] "RemoveContainer" containerID="27debdd617600310b033ac8be030c55695d05924747943a1b4e618829f2e3b15" Feb 19 19:13:51 crc kubenswrapper[4813]: I0219 19:13:51.508680 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eaf6d0f-1f75-49b7-ab16-66199c0d38ed" path="/var/lib/kubelet/pods/8eaf6d0f-1f75-49b7-ab16-66199c0d38ed/volumes" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.138300 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p"] Feb 19 19:15:00 crc kubenswrapper[4813]: E0219 19:15:00.139026 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eaf6d0f-1f75-49b7-ab16-66199c0d38ed" containerName="extract-utilities" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.139040 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eaf6d0f-1f75-49b7-ab16-66199c0d38ed" containerName="extract-utilities" Feb 19 19:15:00 crc kubenswrapper[4813]: E0219 19:15:00.139057 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eaf6d0f-1f75-49b7-ab16-66199c0d38ed" containerName="registry-server" 
Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.139065 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eaf6d0f-1f75-49b7-ab16-66199c0d38ed" containerName="registry-server" Feb 19 19:15:00 crc kubenswrapper[4813]: E0219 19:15:00.139080 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eaf6d0f-1f75-49b7-ab16-66199c0d38ed" containerName="extract-content" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.139086 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eaf6d0f-1f75-49b7-ab16-66199c0d38ed" containerName="extract-content" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.139239 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eaf6d0f-1f75-49b7-ab16-66199c0d38ed" containerName="registry-server" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.139664 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.142187 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.144597 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.152587 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p"] Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.236997 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49ec683d-c7fc-445f-bf06-7880775e37ac-secret-volume\") pod \"collect-profiles-29525475-zkg2p\" (UID: \"49ec683d-c7fc-445f-bf06-7880775e37ac\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.237096 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49ec683d-c7fc-445f-bf06-7880775e37ac-config-volume\") pod \"collect-profiles-29525475-zkg2p\" (UID: \"49ec683d-c7fc-445f-bf06-7880775e37ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.237260 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4t96\" (UniqueName: \"kubernetes.io/projected/49ec683d-c7fc-445f-bf06-7880775e37ac-kube-api-access-z4t96\") pod \"collect-profiles-29525475-zkg2p\" (UID: \"49ec683d-c7fc-445f-bf06-7880775e37ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.339115 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49ec683d-c7fc-445f-bf06-7880775e37ac-config-volume\") pod \"collect-profiles-29525475-zkg2p\" (UID: \"49ec683d-c7fc-445f-bf06-7880775e37ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.339225 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4t96\" (UniqueName: \"kubernetes.io/projected/49ec683d-c7fc-445f-bf06-7880775e37ac-kube-api-access-z4t96\") pod \"collect-profiles-29525475-zkg2p\" (UID: \"49ec683d-c7fc-445f-bf06-7880775e37ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.339298 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/49ec683d-c7fc-445f-bf06-7880775e37ac-secret-volume\") pod \"collect-profiles-29525475-zkg2p\" (UID: \"49ec683d-c7fc-445f-bf06-7880775e37ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.340109 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49ec683d-c7fc-445f-bf06-7880775e37ac-config-volume\") pod \"collect-profiles-29525475-zkg2p\" (UID: \"49ec683d-c7fc-445f-bf06-7880775e37ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.359805 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49ec683d-c7fc-445f-bf06-7880775e37ac-secret-volume\") pod \"collect-profiles-29525475-zkg2p\" (UID: \"49ec683d-c7fc-445f-bf06-7880775e37ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.361794 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4t96\" (UniqueName: \"kubernetes.io/projected/49ec683d-c7fc-445f-bf06-7880775e37ac-kube-api-access-z4t96\") pod \"collect-profiles-29525475-zkg2p\" (UID: \"49ec683d-c7fc-445f-bf06-7880775e37ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.489217 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" Feb 19 19:15:00 crc kubenswrapper[4813]: I0219 19:15:00.943573 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p"] Feb 19 19:15:01 crc kubenswrapper[4813]: I0219 19:15:01.431636 4813 generic.go:334] "Generic (PLEG): container finished" podID="49ec683d-c7fc-445f-bf06-7880775e37ac" containerID="c07a761b9baedcbe79b65b5a3f1a6bf4248aae776a3f436625a13deb47ec85d4" exitCode=0 Feb 19 19:15:01 crc kubenswrapper[4813]: I0219 19:15:01.431711 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" event={"ID":"49ec683d-c7fc-445f-bf06-7880775e37ac","Type":"ContainerDied","Data":"c07a761b9baedcbe79b65b5a3f1a6bf4248aae776a3f436625a13deb47ec85d4"} Feb 19 19:15:01 crc kubenswrapper[4813]: I0219 19:15:01.431984 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" event={"ID":"49ec683d-c7fc-445f-bf06-7880775e37ac","Type":"ContainerStarted","Data":"9413084788d4f9071bea368ce9cb00c82fa82ce0c8fe1207d3a640f001ae0579"} Feb 19 19:15:02 crc kubenswrapper[4813]: I0219 19:15:02.743343 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" Feb 19 19:15:02 crc kubenswrapper[4813]: I0219 19:15:02.869883 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49ec683d-c7fc-445f-bf06-7880775e37ac-config-volume\") pod \"49ec683d-c7fc-445f-bf06-7880775e37ac\" (UID: \"49ec683d-c7fc-445f-bf06-7880775e37ac\") " Feb 19 19:15:02 crc kubenswrapper[4813]: I0219 19:15:02.869998 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4t96\" (UniqueName: \"kubernetes.io/projected/49ec683d-c7fc-445f-bf06-7880775e37ac-kube-api-access-z4t96\") pod \"49ec683d-c7fc-445f-bf06-7880775e37ac\" (UID: \"49ec683d-c7fc-445f-bf06-7880775e37ac\") " Feb 19 19:15:02 crc kubenswrapper[4813]: I0219 19:15:02.870055 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49ec683d-c7fc-445f-bf06-7880775e37ac-secret-volume\") pod \"49ec683d-c7fc-445f-bf06-7880775e37ac\" (UID: \"49ec683d-c7fc-445f-bf06-7880775e37ac\") " Feb 19 19:15:02 crc kubenswrapper[4813]: I0219 19:15:02.871698 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49ec683d-c7fc-445f-bf06-7880775e37ac-config-volume" (OuterVolumeSpecName: "config-volume") pod "49ec683d-c7fc-445f-bf06-7880775e37ac" (UID: "49ec683d-c7fc-445f-bf06-7880775e37ac"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:15:02 crc kubenswrapper[4813]: I0219 19:15:02.876828 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ec683d-c7fc-445f-bf06-7880775e37ac-kube-api-access-z4t96" (OuterVolumeSpecName: "kube-api-access-z4t96") pod "49ec683d-c7fc-445f-bf06-7880775e37ac" (UID: "49ec683d-c7fc-445f-bf06-7880775e37ac"). 
InnerVolumeSpecName "kube-api-access-z4t96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:15:02 crc kubenswrapper[4813]: I0219 19:15:02.877114 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ec683d-c7fc-445f-bf06-7880775e37ac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "49ec683d-c7fc-445f-bf06-7880775e37ac" (UID: "49ec683d-c7fc-445f-bf06-7880775e37ac"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:15:02 crc kubenswrapper[4813]: I0219 19:15:02.971375 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49ec683d-c7fc-445f-bf06-7880775e37ac-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 19:15:02 crc kubenswrapper[4813]: I0219 19:15:02.971420 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49ec683d-c7fc-445f-bf06-7880775e37ac-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 19:15:02 crc kubenswrapper[4813]: I0219 19:15:02.971435 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4t96\" (UniqueName: \"kubernetes.io/projected/49ec683d-c7fc-445f-bf06-7880775e37ac-kube-api-access-z4t96\") on node \"crc\" DevicePath \"\"" Feb 19 19:15:03 crc kubenswrapper[4813]: I0219 19:15:03.451858 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" event={"ID":"49ec683d-c7fc-445f-bf06-7880775e37ac","Type":"ContainerDied","Data":"9413084788d4f9071bea368ce9cb00c82fa82ce0c8fe1207d3a640f001ae0579"} Feb 19 19:15:03 crc kubenswrapper[4813]: I0219 19:15:03.452258 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9413084788d4f9071bea368ce9cb00c82fa82ce0c8fe1207d3a640f001ae0579" Feb 19 19:15:03 crc kubenswrapper[4813]: I0219 19:15:03.452040 4813 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p" Feb 19 19:15:03 crc kubenswrapper[4813]: I0219 19:15:03.838812 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh"] Feb 19 19:15:03 crc kubenswrapper[4813]: I0219 19:15:03.848054 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525430-fq4mh"] Feb 19 19:15:05 crc kubenswrapper[4813]: I0219 19:15:05.488029 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e63bf777-ed22-4aae-942e-74b8613ca4ce" path="/var/lib/kubelet/pods/e63bf777-ed22-4aae-942e-74b8613ca4ce/volumes" Feb 19 19:15:30 crc kubenswrapper[4813]: I0219 19:15:30.330341 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:15:30 crc kubenswrapper[4813]: I0219 19:15:30.332593 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:15:51 crc kubenswrapper[4813]: I0219 19:15:51.081368 4813 scope.go:117] "RemoveContainer" containerID="90cc49563f9a6346069d787c35781a06ed6619b0e0ec51277e1ac74f238b3c08" Feb 19 19:16:00 crc kubenswrapper[4813]: I0219 19:16:00.330265 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:16:00 crc kubenswrapper[4813]: I0219 19:16:00.330855 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:16:30 crc kubenswrapper[4813]: I0219 19:16:30.330604 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:16:30 crc kubenswrapper[4813]: I0219 19:16:30.331389 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:16:30 crc kubenswrapper[4813]: I0219 19:16:30.331440 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 19:16:30 crc kubenswrapper[4813]: I0219 19:16:30.335298 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4a4a27017153c49bbc7828a56970db751c1584d8f12cdffdf0767bd096d7035"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:16:30 crc kubenswrapper[4813]: I0219 19:16:30.335598 4813 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://d4a4a27017153c49bbc7828a56970db751c1584d8f12cdffdf0767bd096d7035" gracePeriod=600 Feb 19 19:16:31 crc kubenswrapper[4813]: I0219 19:16:31.160044 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="d4a4a27017153c49bbc7828a56970db751c1584d8f12cdffdf0767bd096d7035" exitCode=0 Feb 19 19:16:31 crc kubenswrapper[4813]: I0219 19:16:31.160158 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"d4a4a27017153c49bbc7828a56970db751c1584d8f12cdffdf0767bd096d7035"} Feb 19 19:16:31 crc kubenswrapper[4813]: I0219 19:16:31.161290 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7"} Feb 19 19:16:31 crc kubenswrapper[4813]: I0219 19:16:31.161346 4813 scope.go:117] "RemoveContainer" containerID="53ac915d867d2a41b6ddf091b7fd0de3401bfbf622b2473439f483389a45a74c" Feb 19 19:17:51 crc kubenswrapper[4813]: I0219 19:17:51.159947 4813 scope.go:117] "RemoveContainer" containerID="821b35a0edc1ed68e109806ac7ae3d1959cf19ba49dc33dfdc3e7d7f66095509" Feb 19 19:17:51 crc kubenswrapper[4813]: I0219 19:17:51.193978 4813 scope.go:117] "RemoveContainer" containerID="fb255d753b9326eb68ebf72bc08eb39edc3ed6d3b4845a8fb945e39f8bc46973" Feb 19 19:17:51 crc kubenswrapper[4813]: I0219 19:17:51.222815 4813 scope.go:117] "RemoveContainer" containerID="0677ed7a3803e69c0e8e78af77f919247178378e577a6bdb50168d16b1867874" Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.030379 4813 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-lql9b"] Feb 19 19:18:11 crc kubenswrapper[4813]: E0219 19:18:11.031469 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ec683d-c7fc-445f-bf06-7880775e37ac" containerName="collect-profiles" Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.031501 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ec683d-c7fc-445f-bf06-7880775e37ac" containerName="collect-profiles" Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.031948 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ec683d-c7fc-445f-bf06-7880775e37ac" containerName="collect-profiles" Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.035218 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.054205 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lql9b"] Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.129935 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-catalog-content\") pod \"redhat-marketplace-lql9b\" (UID: \"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5\") " pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.130014 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-utilities\") pod \"redhat-marketplace-lql9b\" (UID: \"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5\") " pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.130148 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-mflxh\" (UniqueName: \"kubernetes.io/projected/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-kube-api-access-mflxh\") pod \"redhat-marketplace-lql9b\" (UID: \"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5\") " pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.231568 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mflxh\" (UniqueName: \"kubernetes.io/projected/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-kube-api-access-mflxh\") pod \"redhat-marketplace-lql9b\" (UID: \"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5\") " pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.231946 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-catalog-content\") pod \"redhat-marketplace-lql9b\" (UID: \"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5\") " pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.231996 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-utilities\") pod \"redhat-marketplace-lql9b\" (UID: \"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5\") " pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.232398 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-catalog-content\") pod \"redhat-marketplace-lql9b\" (UID: \"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5\") " pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.232484 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-utilities\") pod \"redhat-marketplace-lql9b\" (UID: \"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5\") " pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.252597 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mflxh\" (UniqueName: \"kubernetes.io/projected/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-kube-api-access-mflxh\") pod \"redhat-marketplace-lql9b\" (UID: \"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5\") " pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.364438 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:11 crc kubenswrapper[4813]: I0219 19:18:11.854746 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lql9b"] Feb 19 19:18:12 crc kubenswrapper[4813]: I0219 19:18:12.049927 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lql9b" event={"ID":"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5","Type":"ContainerStarted","Data":"a6402487fd7b31037880da758cf8f0967371699fd8f3b1f291bb5789e9c3ceab"} Feb 19 19:18:12 crc kubenswrapper[4813]: I0219 19:18:12.049994 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lql9b" event={"ID":"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5","Type":"ContainerStarted","Data":"30323ef1dbe9a333fd3cededd5c00f3c0970b5ca5231ab49cefa9895c17da9b5"} Feb 19 19:18:13 crc kubenswrapper[4813]: I0219 19:18:13.057485 4813 generic.go:334] "Generic (PLEG): container finished" podID="4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5" containerID="a6402487fd7b31037880da758cf8f0967371699fd8f3b1f291bb5789e9c3ceab" exitCode=0 Feb 19 19:18:13 crc kubenswrapper[4813]: I0219 19:18:13.057574 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lql9b" event={"ID":"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5","Type":"ContainerDied","Data":"a6402487fd7b31037880da758cf8f0967371699fd8f3b1f291bb5789e9c3ceab"} Feb 19 19:18:14 crc kubenswrapper[4813]: I0219 19:18:14.066894 4813 generic.go:334] "Generic (PLEG): container finished" podID="4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5" containerID="ababff3f65fa4cede8f6f8d4f7752a384626074e849386a01afa553353e2ac64" exitCode=0 Feb 19 19:18:14 crc kubenswrapper[4813]: I0219 19:18:14.066983 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lql9b" event={"ID":"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5","Type":"ContainerDied","Data":"ababff3f65fa4cede8f6f8d4f7752a384626074e849386a01afa553353e2ac64"} Feb 19 19:18:15 crc kubenswrapper[4813]: I0219 19:18:15.075455 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lql9b" event={"ID":"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5","Type":"ContainerStarted","Data":"8f8c93abc5ff2ff9817d53be23177be0fe63bfdd58871fdd6378e13be49c18b3"} Feb 19 19:18:15 crc kubenswrapper[4813]: I0219 19:18:15.099657 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lql9b" podStartSLOduration=3.707771671 podStartE2EDuration="5.099642562s" podCreationTimestamp="2026-02-19 19:18:10 +0000 UTC" firstStartedPulling="2026-02-19 19:18:13.060589824 +0000 UTC m=+2912.286030365" lastFinishedPulling="2026-02-19 19:18:14.452460715 +0000 UTC m=+2913.677901256" observedRunningTime="2026-02-19 19:18:15.097386313 +0000 UTC m=+2914.322826854" watchObservedRunningTime="2026-02-19 19:18:15.099642562 +0000 UTC m=+2914.325083093" Feb 19 19:18:21 crc kubenswrapper[4813]: I0219 19:18:21.365588 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:21 crc kubenswrapper[4813]: I0219 
19:18:21.366228 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:21 crc kubenswrapper[4813]: I0219 19:18:21.436910 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:22 crc kubenswrapper[4813]: I0219 19:18:22.179897 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:22 crc kubenswrapper[4813]: I0219 19:18:22.230750 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lql9b"] Feb 19 19:18:24 crc kubenswrapper[4813]: I0219 19:18:24.134987 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lql9b" podUID="4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5" containerName="registry-server" containerID="cri-o://8f8c93abc5ff2ff9817d53be23177be0fe63bfdd58871fdd6378e13be49c18b3" gracePeriod=2 Feb 19 19:18:24 crc kubenswrapper[4813]: I0219 19:18:24.585937 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:24 crc kubenswrapper[4813]: I0219 19:18:24.732743 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mflxh\" (UniqueName: \"kubernetes.io/projected/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-kube-api-access-mflxh\") pod \"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5\" (UID: \"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5\") " Feb 19 19:18:24 crc kubenswrapper[4813]: I0219 19:18:24.732804 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-catalog-content\") pod \"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5\" (UID: \"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5\") " Feb 19 19:18:24 crc kubenswrapper[4813]: I0219 19:18:24.733027 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-utilities\") pod \"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5\" (UID: \"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5\") " Feb 19 19:18:24 crc kubenswrapper[4813]: I0219 19:18:24.734438 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-utilities" (OuterVolumeSpecName: "utilities") pod "4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5" (UID: "4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:18:24 crc kubenswrapper[4813]: I0219 19:18:24.740380 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-kube-api-access-mflxh" (OuterVolumeSpecName: "kube-api-access-mflxh") pod "4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5" (UID: "4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5"). InnerVolumeSpecName "kube-api-access-mflxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:18:24 crc kubenswrapper[4813]: I0219 19:18:24.779043 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5" (UID: "4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:18:24 crc kubenswrapper[4813]: I0219 19:18:24.834780 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:18:24 crc kubenswrapper[4813]: I0219 19:18:24.834829 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mflxh\" (UniqueName: \"kubernetes.io/projected/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-kube-api-access-mflxh\") on node \"crc\" DevicePath \"\"" Feb 19 19:18:24 crc kubenswrapper[4813]: I0219 19:18:24.834844 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.146815 4813 generic.go:334] "Generic (PLEG): container finished" podID="4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5" containerID="8f8c93abc5ff2ff9817d53be23177be0fe63bfdd58871fdd6378e13be49c18b3" exitCode=0 Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.146897 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lql9b" Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.146989 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lql9b" event={"ID":"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5","Type":"ContainerDied","Data":"8f8c93abc5ff2ff9817d53be23177be0fe63bfdd58871fdd6378e13be49c18b3"} Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.148204 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lql9b" event={"ID":"4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5","Type":"ContainerDied","Data":"30323ef1dbe9a333fd3cededd5c00f3c0970b5ca5231ab49cefa9895c17da9b5"} Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.148250 4813 scope.go:117] "RemoveContainer" containerID="8f8c93abc5ff2ff9817d53be23177be0fe63bfdd58871fdd6378e13be49c18b3" Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.195624 4813 scope.go:117] "RemoveContainer" containerID="ababff3f65fa4cede8f6f8d4f7752a384626074e849386a01afa553353e2ac64" Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.202944 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lql9b"] Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.215734 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lql9b"] Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.229826 4813 scope.go:117] "RemoveContainer" containerID="a6402487fd7b31037880da758cf8f0967371699fd8f3b1f291bb5789e9c3ceab" Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.254107 4813 scope.go:117] "RemoveContainer" containerID="8f8c93abc5ff2ff9817d53be23177be0fe63bfdd58871fdd6378e13be49c18b3" Feb 19 19:18:25 crc kubenswrapper[4813]: E0219 19:18:25.254577 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8f8c93abc5ff2ff9817d53be23177be0fe63bfdd58871fdd6378e13be49c18b3\": container with ID starting with 8f8c93abc5ff2ff9817d53be23177be0fe63bfdd58871fdd6378e13be49c18b3 not found: ID does not exist" containerID="8f8c93abc5ff2ff9817d53be23177be0fe63bfdd58871fdd6378e13be49c18b3" Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.254632 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f8c93abc5ff2ff9817d53be23177be0fe63bfdd58871fdd6378e13be49c18b3"} err="failed to get container status \"8f8c93abc5ff2ff9817d53be23177be0fe63bfdd58871fdd6378e13be49c18b3\": rpc error: code = NotFound desc = could not find container \"8f8c93abc5ff2ff9817d53be23177be0fe63bfdd58871fdd6378e13be49c18b3\": container with ID starting with 8f8c93abc5ff2ff9817d53be23177be0fe63bfdd58871fdd6378e13be49c18b3 not found: ID does not exist" Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.254670 4813 scope.go:117] "RemoveContainer" containerID="ababff3f65fa4cede8f6f8d4f7752a384626074e849386a01afa553353e2ac64" Feb 19 19:18:25 crc kubenswrapper[4813]: E0219 19:18:25.255014 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ababff3f65fa4cede8f6f8d4f7752a384626074e849386a01afa553353e2ac64\": container with ID starting with ababff3f65fa4cede8f6f8d4f7752a384626074e849386a01afa553353e2ac64 not found: ID does not exist" containerID="ababff3f65fa4cede8f6f8d4f7752a384626074e849386a01afa553353e2ac64" Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.255075 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ababff3f65fa4cede8f6f8d4f7752a384626074e849386a01afa553353e2ac64"} err="failed to get container status \"ababff3f65fa4cede8f6f8d4f7752a384626074e849386a01afa553353e2ac64\": rpc error: code = NotFound desc = could not find container \"ababff3f65fa4cede8f6f8d4f7752a384626074e849386a01afa553353e2ac64\": container with ID 
starting with ababff3f65fa4cede8f6f8d4f7752a384626074e849386a01afa553353e2ac64 not found: ID does not exist" Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.255115 4813 scope.go:117] "RemoveContainer" containerID="a6402487fd7b31037880da758cf8f0967371699fd8f3b1f291bb5789e9c3ceab" Feb 19 19:18:25 crc kubenswrapper[4813]: E0219 19:18:25.255551 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6402487fd7b31037880da758cf8f0967371699fd8f3b1f291bb5789e9c3ceab\": container with ID starting with a6402487fd7b31037880da758cf8f0967371699fd8f3b1f291bb5789e9c3ceab not found: ID does not exist" containerID="a6402487fd7b31037880da758cf8f0967371699fd8f3b1f291bb5789e9c3ceab" Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.255585 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6402487fd7b31037880da758cf8f0967371699fd8f3b1f291bb5789e9c3ceab"} err="failed to get container status \"a6402487fd7b31037880da758cf8f0967371699fd8f3b1f291bb5789e9c3ceab\": rpc error: code = NotFound desc = could not find container \"a6402487fd7b31037880da758cf8f0967371699fd8f3b1f291bb5789e9c3ceab\": container with ID starting with a6402487fd7b31037880da758cf8f0967371699fd8f3b1f291bb5789e9c3ceab not found: ID does not exist" Feb 19 19:18:25 crc kubenswrapper[4813]: I0219 19:18:25.493062 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5" path="/var/lib/kubelet/pods/4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5/volumes" Feb 19 19:18:30 crc kubenswrapper[4813]: I0219 19:18:30.330400 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:18:30 crc kubenswrapper[4813]: I0219 
19:18:30.330860 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:19:00 crc kubenswrapper[4813]: I0219 19:19:00.329558 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:19:00 crc kubenswrapper[4813]: I0219 19:19:00.332131 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:19:30 crc kubenswrapper[4813]: I0219 19:19:30.329864 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:19:30 crc kubenswrapper[4813]: I0219 19:19:30.330622 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:19:30 crc kubenswrapper[4813]: I0219 19:19:30.330690 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 19:19:30 crc kubenswrapper[4813]: I0219 19:19:30.331606 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:19:30 crc kubenswrapper[4813]: I0219 19:19:30.331706 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" gracePeriod=600 Feb 19 19:19:30 crc kubenswrapper[4813]: E0219 19:19:30.542188 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:19:30 crc kubenswrapper[4813]: I0219 19:19:30.694291 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" exitCode=0 Feb 19 19:19:30 crc kubenswrapper[4813]: I0219 19:19:30.694338 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7"} Feb 19 19:19:30 crc 
kubenswrapper[4813]: I0219 19:19:30.694372 4813 scope.go:117] "RemoveContainer" containerID="d4a4a27017153c49bbc7828a56970db751c1584d8f12cdffdf0767bd096d7035" Feb 19 19:19:30 crc kubenswrapper[4813]: I0219 19:19:30.695238 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:19:30 crc kubenswrapper[4813]: E0219 19:19:30.695739 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:19:45 crc kubenswrapper[4813]: I0219 19:19:45.471620 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:19:45 crc kubenswrapper[4813]: E0219 19:19:45.472889 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:20:00 crc kubenswrapper[4813]: I0219 19:20:00.470974 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:20:00 crc kubenswrapper[4813]: E0219 19:20:00.471484 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:20:14 crc kubenswrapper[4813]: I0219 19:20:14.471977 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:20:14 crc kubenswrapper[4813]: E0219 19:20:14.472741 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:20:28 crc kubenswrapper[4813]: I0219 19:20:28.471350 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:20:28 crc kubenswrapper[4813]: E0219 19:20:28.472037 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:20:42 crc kubenswrapper[4813]: I0219 19:20:42.472020 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:20:42 crc kubenswrapper[4813]: E0219 19:20:42.473387 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:20:56 crc kubenswrapper[4813]: I0219 19:20:56.472525 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:20:56 crc kubenswrapper[4813]: E0219 19:20:56.473532 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:21:10 crc kubenswrapper[4813]: I0219 19:21:10.472636 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:21:10 crc kubenswrapper[4813]: E0219 19:21:10.473819 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:21:15 crc kubenswrapper[4813]: I0219 19:21:15.816913 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z4t7l"] Feb 19 19:21:15 crc kubenswrapper[4813]: E0219 19:21:15.817522 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5" containerName="extract-utilities" Feb 19 19:21:15 crc 
kubenswrapper[4813]: I0219 19:21:15.817536 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5" containerName="extract-utilities" Feb 19 19:21:15 crc kubenswrapper[4813]: E0219 19:21:15.817551 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5" containerName="registry-server" Feb 19 19:21:15 crc kubenswrapper[4813]: I0219 19:21:15.817558 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5" containerName="registry-server" Feb 19 19:21:15 crc kubenswrapper[4813]: E0219 19:21:15.817573 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5" containerName="extract-content" Feb 19 19:21:15 crc kubenswrapper[4813]: I0219 19:21:15.817582 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5" containerName="extract-content" Feb 19 19:21:15 crc kubenswrapper[4813]: I0219 19:21:15.817731 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bdfdc69-39ee-4dd7-8acd-0f9f79abeac5" containerName="registry-server" Feb 19 19:21:15 crc kubenswrapper[4813]: I0219 19:21:15.818875 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:15 crc kubenswrapper[4813]: I0219 19:21:15.828564 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z4t7l"] Feb 19 19:21:16 crc kubenswrapper[4813]: I0219 19:21:16.000501 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2b00141-ef28-4a9e-ac89-48fce222a382-utilities\") pod \"community-operators-z4t7l\" (UID: \"d2b00141-ef28-4a9e-ac89-48fce222a382\") " pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:16 crc kubenswrapper[4813]: I0219 19:21:16.000842 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2b00141-ef28-4a9e-ac89-48fce222a382-catalog-content\") pod \"community-operators-z4t7l\" (UID: \"d2b00141-ef28-4a9e-ac89-48fce222a382\") " pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:16 crc kubenswrapper[4813]: I0219 19:21:16.001002 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfvv7\" (UniqueName: \"kubernetes.io/projected/d2b00141-ef28-4a9e-ac89-48fce222a382-kube-api-access-wfvv7\") pod \"community-operators-z4t7l\" (UID: \"d2b00141-ef28-4a9e-ac89-48fce222a382\") " pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:16 crc kubenswrapper[4813]: I0219 19:21:16.102213 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfvv7\" (UniqueName: \"kubernetes.io/projected/d2b00141-ef28-4a9e-ac89-48fce222a382-kube-api-access-wfvv7\") pod \"community-operators-z4t7l\" (UID: \"d2b00141-ef28-4a9e-ac89-48fce222a382\") " pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:16 crc kubenswrapper[4813]: I0219 19:21:16.102274 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2b00141-ef28-4a9e-ac89-48fce222a382-utilities\") pod \"community-operators-z4t7l\" (UID: \"d2b00141-ef28-4a9e-ac89-48fce222a382\") " pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:16 crc kubenswrapper[4813]: I0219 19:21:16.102298 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2b00141-ef28-4a9e-ac89-48fce222a382-catalog-content\") pod \"community-operators-z4t7l\" (UID: \"d2b00141-ef28-4a9e-ac89-48fce222a382\") " pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:16 crc kubenswrapper[4813]: I0219 19:21:16.102793 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2b00141-ef28-4a9e-ac89-48fce222a382-catalog-content\") pod \"community-operators-z4t7l\" (UID: \"d2b00141-ef28-4a9e-ac89-48fce222a382\") " pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:16 crc kubenswrapper[4813]: I0219 19:21:16.103249 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2b00141-ef28-4a9e-ac89-48fce222a382-utilities\") pod \"community-operators-z4t7l\" (UID: \"d2b00141-ef28-4a9e-ac89-48fce222a382\") " pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:16 crc kubenswrapper[4813]: I0219 19:21:16.121917 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfvv7\" (UniqueName: \"kubernetes.io/projected/d2b00141-ef28-4a9e-ac89-48fce222a382-kube-api-access-wfvv7\") pod \"community-operators-z4t7l\" (UID: \"d2b00141-ef28-4a9e-ac89-48fce222a382\") " pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:16 crc kubenswrapper[4813]: I0219 19:21:16.158688 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:16 crc kubenswrapper[4813]: I0219 19:21:16.674668 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z4t7l"] Feb 19 19:21:17 crc kubenswrapper[4813]: I0219 19:21:17.544682 4813 generic.go:334] "Generic (PLEG): container finished" podID="d2b00141-ef28-4a9e-ac89-48fce222a382" containerID="7be26fe2e45f8243ad5dd5e8fdaa74c1451ecf2ed70d21d8835b259ff8f2d21c" exitCode=0 Feb 19 19:21:17 crc kubenswrapper[4813]: I0219 19:21:17.544778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4t7l" event={"ID":"d2b00141-ef28-4a9e-ac89-48fce222a382","Type":"ContainerDied","Data":"7be26fe2e45f8243ad5dd5e8fdaa74c1451ecf2ed70d21d8835b259ff8f2d21c"} Feb 19 19:21:17 crc kubenswrapper[4813]: I0219 19:21:17.545527 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4t7l" event={"ID":"d2b00141-ef28-4a9e-ac89-48fce222a382","Type":"ContainerStarted","Data":"78d12d407820c1019ad1b86dbd66268d5cf195abfb17de202f3754f69583cb8d"} Feb 19 19:21:17 crc kubenswrapper[4813]: I0219 19:21:17.548487 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 19:21:18 crc kubenswrapper[4813]: I0219 19:21:18.555389 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4t7l" event={"ID":"d2b00141-ef28-4a9e-ac89-48fce222a382","Type":"ContainerStarted","Data":"b3fa56cdaa1b8f63e598cbaa13c9026a507ce6aa5bbde279a02a4a90875eb5f6"} Feb 19 19:21:19 crc kubenswrapper[4813]: I0219 19:21:19.566198 4813 generic.go:334] "Generic (PLEG): container finished" podID="d2b00141-ef28-4a9e-ac89-48fce222a382" containerID="b3fa56cdaa1b8f63e598cbaa13c9026a507ce6aa5bbde279a02a4a90875eb5f6" exitCode=0 Feb 19 19:21:19 crc kubenswrapper[4813]: I0219 19:21:19.566249 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-z4t7l" event={"ID":"d2b00141-ef28-4a9e-ac89-48fce222a382","Type":"ContainerDied","Data":"b3fa56cdaa1b8f63e598cbaa13c9026a507ce6aa5bbde279a02a4a90875eb5f6"} Feb 19 19:21:21 crc kubenswrapper[4813]: I0219 19:21:21.584435 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4t7l" event={"ID":"d2b00141-ef28-4a9e-ac89-48fce222a382","Type":"ContainerStarted","Data":"a1dc728d55545bb723aa20d325dd89a40f6c9f637ea4b1f332552168bc7b47f2"} Feb 19 19:21:21 crc kubenswrapper[4813]: I0219 19:21:21.612121 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z4t7l" podStartSLOduration=3.293995623 podStartE2EDuration="6.612095982s" podCreationTimestamp="2026-02-19 19:21:15 +0000 UTC" firstStartedPulling="2026-02-19 19:21:17.54807221 +0000 UTC m=+3096.773512791" lastFinishedPulling="2026-02-19 19:21:20.866172599 +0000 UTC m=+3100.091613150" observedRunningTime="2026-02-19 19:21:21.602232018 +0000 UTC m=+3100.827672599" watchObservedRunningTime="2026-02-19 19:21:21.612095982 +0000 UTC m=+3100.837536553" Feb 19 19:21:25 crc kubenswrapper[4813]: I0219 19:21:25.471802 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:21:25 crc kubenswrapper[4813]: E0219 19:21:25.472529 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:21:26 crc kubenswrapper[4813]: I0219 19:21:26.159040 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:26 crc kubenswrapper[4813]: I0219 19:21:26.159281 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:26 crc kubenswrapper[4813]: I0219 19:21:26.211601 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:26 crc kubenswrapper[4813]: I0219 19:21:26.691131 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:26 crc kubenswrapper[4813]: I0219 19:21:26.739841 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z4t7l"] Feb 19 19:21:28 crc kubenswrapper[4813]: I0219 19:21:28.636245 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z4t7l" podUID="d2b00141-ef28-4a9e-ac89-48fce222a382" containerName="registry-server" containerID="cri-o://a1dc728d55545bb723aa20d325dd89a40f6c9f637ea4b1f332552168bc7b47f2" gracePeriod=2 Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.154443 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.299761 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2b00141-ef28-4a9e-ac89-48fce222a382-utilities\") pod \"d2b00141-ef28-4a9e-ac89-48fce222a382\" (UID: \"d2b00141-ef28-4a9e-ac89-48fce222a382\") " Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.299915 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfvv7\" (UniqueName: \"kubernetes.io/projected/d2b00141-ef28-4a9e-ac89-48fce222a382-kube-api-access-wfvv7\") pod \"d2b00141-ef28-4a9e-ac89-48fce222a382\" (UID: \"d2b00141-ef28-4a9e-ac89-48fce222a382\") " Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.299995 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2b00141-ef28-4a9e-ac89-48fce222a382-catalog-content\") pod \"d2b00141-ef28-4a9e-ac89-48fce222a382\" (UID: \"d2b00141-ef28-4a9e-ac89-48fce222a382\") " Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.300912 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2b00141-ef28-4a9e-ac89-48fce222a382-utilities" (OuterVolumeSpecName: "utilities") pod "d2b00141-ef28-4a9e-ac89-48fce222a382" (UID: "d2b00141-ef28-4a9e-ac89-48fce222a382"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.301164 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2b00141-ef28-4a9e-ac89-48fce222a382-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.307542 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2b00141-ef28-4a9e-ac89-48fce222a382-kube-api-access-wfvv7" (OuterVolumeSpecName: "kube-api-access-wfvv7") pod "d2b00141-ef28-4a9e-ac89-48fce222a382" (UID: "d2b00141-ef28-4a9e-ac89-48fce222a382"). InnerVolumeSpecName "kube-api-access-wfvv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.381744 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2b00141-ef28-4a9e-ac89-48fce222a382-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2b00141-ef28-4a9e-ac89-48fce222a382" (UID: "d2b00141-ef28-4a9e-ac89-48fce222a382"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.402875 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfvv7\" (UniqueName: \"kubernetes.io/projected/d2b00141-ef28-4a9e-ac89-48fce222a382-kube-api-access-wfvv7\") on node \"crc\" DevicePath \"\"" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.403307 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2b00141-ef28-4a9e-ac89-48fce222a382-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.647013 4813 generic.go:334] "Generic (PLEG): container finished" podID="d2b00141-ef28-4a9e-ac89-48fce222a382" containerID="a1dc728d55545bb723aa20d325dd89a40f6c9f637ea4b1f332552168bc7b47f2" exitCode=0 Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.647064 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4t7l" event={"ID":"d2b00141-ef28-4a9e-ac89-48fce222a382","Type":"ContainerDied","Data":"a1dc728d55545bb723aa20d325dd89a40f6c9f637ea4b1f332552168bc7b47f2"} Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.647172 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z4t7l" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.647697 4813 scope.go:117] "RemoveContainer" containerID="a1dc728d55545bb723aa20d325dd89a40f6c9f637ea4b1f332552168bc7b47f2" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.647682 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4t7l" event={"ID":"d2b00141-ef28-4a9e-ac89-48fce222a382","Type":"ContainerDied","Data":"78d12d407820c1019ad1b86dbd66268d5cf195abfb17de202f3754f69583cb8d"} Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.674432 4813 scope.go:117] "RemoveContainer" containerID="b3fa56cdaa1b8f63e598cbaa13c9026a507ce6aa5bbde279a02a4a90875eb5f6" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.677851 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z4t7l"] Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.688597 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z4t7l"] Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.698859 4813 scope.go:117] "RemoveContainer" containerID="7be26fe2e45f8243ad5dd5e8fdaa74c1451ecf2ed70d21d8835b259ff8f2d21c" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.724282 4813 scope.go:117] "RemoveContainer" containerID="a1dc728d55545bb723aa20d325dd89a40f6c9f637ea4b1f332552168bc7b47f2" Feb 19 19:21:29 crc kubenswrapper[4813]: E0219 19:21:29.724848 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1dc728d55545bb723aa20d325dd89a40f6c9f637ea4b1f332552168bc7b47f2\": container with ID starting with a1dc728d55545bb723aa20d325dd89a40f6c9f637ea4b1f332552168bc7b47f2 not found: ID does not exist" containerID="a1dc728d55545bb723aa20d325dd89a40f6c9f637ea4b1f332552168bc7b47f2" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.724971 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1dc728d55545bb723aa20d325dd89a40f6c9f637ea4b1f332552168bc7b47f2"} err="failed to get container status \"a1dc728d55545bb723aa20d325dd89a40f6c9f637ea4b1f332552168bc7b47f2\": rpc error: code = NotFound desc = could not find container \"a1dc728d55545bb723aa20d325dd89a40f6c9f637ea4b1f332552168bc7b47f2\": container with ID starting with a1dc728d55545bb723aa20d325dd89a40f6c9f637ea4b1f332552168bc7b47f2 not found: ID does not exist" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.725103 4813 scope.go:117] "RemoveContainer" containerID="b3fa56cdaa1b8f63e598cbaa13c9026a507ce6aa5bbde279a02a4a90875eb5f6" Feb 19 19:21:29 crc kubenswrapper[4813]: E0219 19:21:29.725589 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3fa56cdaa1b8f63e598cbaa13c9026a507ce6aa5bbde279a02a4a90875eb5f6\": container with ID starting with b3fa56cdaa1b8f63e598cbaa13c9026a507ce6aa5bbde279a02a4a90875eb5f6 not found: ID does not exist" containerID="b3fa56cdaa1b8f63e598cbaa13c9026a507ce6aa5bbde279a02a4a90875eb5f6" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.725628 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3fa56cdaa1b8f63e598cbaa13c9026a507ce6aa5bbde279a02a4a90875eb5f6"} err="failed to get container status \"b3fa56cdaa1b8f63e598cbaa13c9026a507ce6aa5bbde279a02a4a90875eb5f6\": rpc error: code = NotFound desc = could not find container \"b3fa56cdaa1b8f63e598cbaa13c9026a507ce6aa5bbde279a02a4a90875eb5f6\": container with ID starting with b3fa56cdaa1b8f63e598cbaa13c9026a507ce6aa5bbde279a02a4a90875eb5f6 not found: ID does not exist" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.725656 4813 scope.go:117] "RemoveContainer" containerID="7be26fe2e45f8243ad5dd5e8fdaa74c1451ecf2ed70d21d8835b259ff8f2d21c" Feb 19 19:21:29 crc kubenswrapper[4813]: E0219 
19:21:29.726160 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7be26fe2e45f8243ad5dd5e8fdaa74c1451ecf2ed70d21d8835b259ff8f2d21c\": container with ID starting with 7be26fe2e45f8243ad5dd5e8fdaa74c1451ecf2ed70d21d8835b259ff8f2d21c not found: ID does not exist" containerID="7be26fe2e45f8243ad5dd5e8fdaa74c1451ecf2ed70d21d8835b259ff8f2d21c" Feb 19 19:21:29 crc kubenswrapper[4813]: I0219 19:21:29.726258 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7be26fe2e45f8243ad5dd5e8fdaa74c1451ecf2ed70d21d8835b259ff8f2d21c"} err="failed to get container status \"7be26fe2e45f8243ad5dd5e8fdaa74c1451ecf2ed70d21d8835b259ff8f2d21c\": rpc error: code = NotFound desc = could not find container \"7be26fe2e45f8243ad5dd5e8fdaa74c1451ecf2ed70d21d8835b259ff8f2d21c\": container with ID starting with 7be26fe2e45f8243ad5dd5e8fdaa74c1451ecf2ed70d21d8835b259ff8f2d21c not found: ID does not exist" Feb 19 19:21:31 crc kubenswrapper[4813]: I0219 19:21:31.482758 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2b00141-ef28-4a9e-ac89-48fce222a382" path="/var/lib/kubelet/pods/d2b00141-ef28-4a9e-ac89-48fce222a382/volumes" Feb 19 19:21:37 crc kubenswrapper[4813]: I0219 19:21:37.472302 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:21:37 crc kubenswrapper[4813]: E0219 19:21:37.473314 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:21:48 crc kubenswrapper[4813]: I0219 19:21:48.472243 
4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:21:48 crc kubenswrapper[4813]: E0219 19:21:48.473151 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:22:02 crc kubenswrapper[4813]: I0219 19:22:02.472776 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:22:02 crc kubenswrapper[4813]: E0219 19:22:02.473926 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:22:17 crc kubenswrapper[4813]: I0219 19:22:17.471908 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:22:17 crc kubenswrapper[4813]: E0219 19:22:17.473011 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:22:29 crc kubenswrapper[4813]: I0219 
19:22:29.473515 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:22:29 crc kubenswrapper[4813]: E0219 19:22:29.475521 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:22:41 crc kubenswrapper[4813]: I0219 19:22:41.928269 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5m4dz"] Feb 19 19:22:41 crc kubenswrapper[4813]: E0219 19:22:41.929294 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b00141-ef28-4a9e-ac89-48fce222a382" containerName="extract-utilities" Feb 19 19:22:41 crc kubenswrapper[4813]: I0219 19:22:41.929325 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b00141-ef28-4a9e-ac89-48fce222a382" containerName="extract-utilities" Feb 19 19:22:41 crc kubenswrapper[4813]: E0219 19:22:41.929385 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b00141-ef28-4a9e-ac89-48fce222a382" containerName="extract-content" Feb 19 19:22:41 crc kubenswrapper[4813]: I0219 19:22:41.929400 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b00141-ef28-4a9e-ac89-48fce222a382" containerName="extract-content" Feb 19 19:22:41 crc kubenswrapper[4813]: E0219 19:22:41.929436 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b00141-ef28-4a9e-ac89-48fce222a382" containerName="registry-server" Feb 19 19:22:41 crc kubenswrapper[4813]: I0219 19:22:41.929454 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b00141-ef28-4a9e-ac89-48fce222a382" containerName="registry-server" Feb 19 
19:22:41 crc kubenswrapper[4813]: I0219 19:22:41.929737 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2b00141-ef28-4a9e-ac89-48fce222a382" containerName="registry-server" Feb 19 19:22:41 crc kubenswrapper[4813]: I0219 19:22:41.932694 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:41 crc kubenswrapper[4813]: I0219 19:22:41.944123 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5m4dz"] Feb 19 19:22:42 crc kubenswrapper[4813]: I0219 19:22:42.051744 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f7ps\" (UniqueName: \"kubernetes.io/projected/f9fb323e-ba35-4ba3-908f-2946761b3fdf-kube-api-access-2f7ps\") pod \"certified-operators-5m4dz\" (UID: \"f9fb323e-ba35-4ba3-908f-2946761b3fdf\") " pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:42 crc kubenswrapper[4813]: I0219 19:22:42.051791 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9fb323e-ba35-4ba3-908f-2946761b3fdf-utilities\") pod \"certified-operators-5m4dz\" (UID: \"f9fb323e-ba35-4ba3-908f-2946761b3fdf\") " pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:42 crc kubenswrapper[4813]: I0219 19:22:42.051834 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9fb323e-ba35-4ba3-908f-2946761b3fdf-catalog-content\") pod \"certified-operators-5m4dz\" (UID: \"f9fb323e-ba35-4ba3-908f-2946761b3fdf\") " pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:42 crc kubenswrapper[4813]: I0219 19:22:42.152661 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f7ps\" (UniqueName: 
\"kubernetes.io/projected/f9fb323e-ba35-4ba3-908f-2946761b3fdf-kube-api-access-2f7ps\") pod \"certified-operators-5m4dz\" (UID: \"f9fb323e-ba35-4ba3-908f-2946761b3fdf\") " pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:42 crc kubenswrapper[4813]: I0219 19:22:42.152800 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9fb323e-ba35-4ba3-908f-2946761b3fdf-utilities\") pod \"certified-operators-5m4dz\" (UID: \"f9fb323e-ba35-4ba3-908f-2946761b3fdf\") " pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:42 crc kubenswrapper[4813]: I0219 19:22:42.152838 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9fb323e-ba35-4ba3-908f-2946761b3fdf-catalog-content\") pod \"certified-operators-5m4dz\" (UID: \"f9fb323e-ba35-4ba3-908f-2946761b3fdf\") " pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:42 crc kubenswrapper[4813]: I0219 19:22:42.153351 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9fb323e-ba35-4ba3-908f-2946761b3fdf-utilities\") pod \"certified-operators-5m4dz\" (UID: \"f9fb323e-ba35-4ba3-908f-2946761b3fdf\") " pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:42 crc kubenswrapper[4813]: I0219 19:22:42.153428 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9fb323e-ba35-4ba3-908f-2946761b3fdf-catalog-content\") pod \"certified-operators-5m4dz\" (UID: \"f9fb323e-ba35-4ba3-908f-2946761b3fdf\") " pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:42 crc kubenswrapper[4813]: I0219 19:22:42.172286 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f7ps\" (UniqueName: 
\"kubernetes.io/projected/f9fb323e-ba35-4ba3-908f-2946761b3fdf-kube-api-access-2f7ps\") pod \"certified-operators-5m4dz\" (UID: \"f9fb323e-ba35-4ba3-908f-2946761b3fdf\") " pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:42 crc kubenswrapper[4813]: I0219 19:22:42.262004 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:42 crc kubenswrapper[4813]: I0219 19:22:42.800695 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5m4dz"] Feb 19 19:22:43 crc kubenswrapper[4813]: I0219 19:22:43.260594 4813 generic.go:334] "Generic (PLEG): container finished" podID="f9fb323e-ba35-4ba3-908f-2946761b3fdf" containerID="3ceeab8abb7fbd04fb8cf3c5966564f9769e9263b50da87c15ca43c5c8e06ac6" exitCode=0 Feb 19 19:22:43 crc kubenswrapper[4813]: I0219 19:22:43.260806 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m4dz" event={"ID":"f9fb323e-ba35-4ba3-908f-2946761b3fdf","Type":"ContainerDied","Data":"3ceeab8abb7fbd04fb8cf3c5966564f9769e9263b50da87c15ca43c5c8e06ac6"} Feb 19 19:22:43 crc kubenswrapper[4813]: I0219 19:22:43.261013 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m4dz" event={"ID":"f9fb323e-ba35-4ba3-908f-2946761b3fdf","Type":"ContainerStarted","Data":"dc9abf62ef3f00ff082a11734cba5a299f4ba243d45273169ba0415e19043dbd"} Feb 19 19:22:43 crc kubenswrapper[4813]: I0219 19:22:43.471447 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:22:43 crc kubenswrapper[4813]: E0219 19:22:43.471665 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:22:44 crc kubenswrapper[4813]: I0219 19:22:44.268903 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m4dz" event={"ID":"f9fb323e-ba35-4ba3-908f-2946761b3fdf","Type":"ContainerStarted","Data":"9a0a95bb86b1e8bb7c760b1ab8dbf88099552a3998c4dc2b2eab4fe245c64505"} Feb 19 19:22:45 crc kubenswrapper[4813]: I0219 19:22:45.277627 4813 generic.go:334] "Generic (PLEG): container finished" podID="f9fb323e-ba35-4ba3-908f-2946761b3fdf" containerID="9a0a95bb86b1e8bb7c760b1ab8dbf88099552a3998c4dc2b2eab4fe245c64505" exitCode=0 Feb 19 19:22:45 crc kubenswrapper[4813]: I0219 19:22:45.277686 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m4dz" event={"ID":"f9fb323e-ba35-4ba3-908f-2946761b3fdf","Type":"ContainerDied","Data":"9a0a95bb86b1e8bb7c760b1ab8dbf88099552a3998c4dc2b2eab4fe245c64505"} Feb 19 19:22:46 crc kubenswrapper[4813]: I0219 19:22:46.286370 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m4dz" event={"ID":"f9fb323e-ba35-4ba3-908f-2946761b3fdf","Type":"ContainerStarted","Data":"061993d62295cf199f2ee61d928cce1c38acaed8c3e5d1e162e9fb02d2abe305"} Feb 19 19:22:46 crc kubenswrapper[4813]: I0219 19:22:46.306908 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5m4dz" podStartSLOduration=2.79830012 podStartE2EDuration="5.306889274s" podCreationTimestamp="2026-02-19 19:22:41 +0000 UTC" firstStartedPulling="2026-02-19 19:22:43.262107108 +0000 UTC m=+3182.487547659" lastFinishedPulling="2026-02-19 19:22:45.770696272 +0000 UTC m=+3184.996136813" observedRunningTime="2026-02-19 19:22:46.305724299 +0000 UTC m=+3185.531164840" 
watchObservedRunningTime="2026-02-19 19:22:46.306889274 +0000 UTC m=+3185.532329815" Feb 19 19:22:52 crc kubenswrapper[4813]: I0219 19:22:52.262470 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:52 crc kubenswrapper[4813]: I0219 19:22:52.262813 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:52 crc kubenswrapper[4813]: I0219 19:22:52.318622 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:52 crc kubenswrapper[4813]: I0219 19:22:52.381017 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:52 crc kubenswrapper[4813]: I0219 19:22:52.559522 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5m4dz"] Feb 19 19:22:54 crc kubenswrapper[4813]: I0219 19:22:54.346455 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5m4dz" podUID="f9fb323e-ba35-4ba3-908f-2946761b3fdf" containerName="registry-server" containerID="cri-o://061993d62295cf199f2ee61d928cce1c38acaed8c3e5d1e162e9fb02d2abe305" gracePeriod=2 Feb 19 19:22:54 crc kubenswrapper[4813]: I0219 19:22:54.772387 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:54 crc kubenswrapper[4813]: I0219 19:22:54.840931 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9fb323e-ba35-4ba3-908f-2946761b3fdf-utilities\") pod \"f9fb323e-ba35-4ba3-908f-2946761b3fdf\" (UID: \"f9fb323e-ba35-4ba3-908f-2946761b3fdf\") " Feb 19 19:22:54 crc kubenswrapper[4813]: I0219 19:22:54.841132 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f7ps\" (UniqueName: \"kubernetes.io/projected/f9fb323e-ba35-4ba3-908f-2946761b3fdf-kube-api-access-2f7ps\") pod \"f9fb323e-ba35-4ba3-908f-2946761b3fdf\" (UID: \"f9fb323e-ba35-4ba3-908f-2946761b3fdf\") " Feb 19 19:22:54 crc kubenswrapper[4813]: I0219 19:22:54.841181 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9fb323e-ba35-4ba3-908f-2946761b3fdf-catalog-content\") pod \"f9fb323e-ba35-4ba3-908f-2946761b3fdf\" (UID: \"f9fb323e-ba35-4ba3-908f-2946761b3fdf\") " Feb 19 19:22:54 crc kubenswrapper[4813]: I0219 19:22:54.842043 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9fb323e-ba35-4ba3-908f-2946761b3fdf-utilities" (OuterVolumeSpecName: "utilities") pod "f9fb323e-ba35-4ba3-908f-2946761b3fdf" (UID: "f9fb323e-ba35-4ba3-908f-2946761b3fdf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:22:54 crc kubenswrapper[4813]: I0219 19:22:54.848929 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9fb323e-ba35-4ba3-908f-2946761b3fdf-kube-api-access-2f7ps" (OuterVolumeSpecName: "kube-api-access-2f7ps") pod "f9fb323e-ba35-4ba3-908f-2946761b3fdf" (UID: "f9fb323e-ba35-4ba3-908f-2946761b3fdf"). InnerVolumeSpecName "kube-api-access-2f7ps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:22:54 crc kubenswrapper[4813]: I0219 19:22:54.943477 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f7ps\" (UniqueName: \"kubernetes.io/projected/f9fb323e-ba35-4ba3-908f-2946761b3fdf-kube-api-access-2f7ps\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:54 crc kubenswrapper[4813]: I0219 19:22:54.943537 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9fb323e-ba35-4ba3-908f-2946761b3fdf-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.355566 4813 generic.go:334] "Generic (PLEG): container finished" podID="f9fb323e-ba35-4ba3-908f-2946761b3fdf" containerID="061993d62295cf199f2ee61d928cce1c38acaed8c3e5d1e162e9fb02d2abe305" exitCode=0 Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.355635 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5m4dz" Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.355669 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m4dz" event={"ID":"f9fb323e-ba35-4ba3-908f-2946761b3fdf","Type":"ContainerDied","Data":"061993d62295cf199f2ee61d928cce1c38acaed8c3e5d1e162e9fb02d2abe305"} Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.356075 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m4dz" event={"ID":"f9fb323e-ba35-4ba3-908f-2946761b3fdf","Type":"ContainerDied","Data":"dc9abf62ef3f00ff082a11734cba5a299f4ba243d45273169ba0415e19043dbd"} Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.356108 4813 scope.go:117] "RemoveContainer" containerID="061993d62295cf199f2ee61d928cce1c38acaed8c3e5d1e162e9fb02d2abe305" Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.375170 4813 scope.go:117] "RemoveContainer" 
containerID="9a0a95bb86b1e8bb7c760b1ab8dbf88099552a3998c4dc2b2eab4fe245c64505" Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.397452 4813 scope.go:117] "RemoveContainer" containerID="3ceeab8abb7fbd04fb8cf3c5966564f9769e9263b50da87c15ca43c5c8e06ac6" Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.418999 4813 scope.go:117] "RemoveContainer" containerID="061993d62295cf199f2ee61d928cce1c38acaed8c3e5d1e162e9fb02d2abe305" Feb 19 19:22:55 crc kubenswrapper[4813]: E0219 19:22:55.419429 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"061993d62295cf199f2ee61d928cce1c38acaed8c3e5d1e162e9fb02d2abe305\": container with ID starting with 061993d62295cf199f2ee61d928cce1c38acaed8c3e5d1e162e9fb02d2abe305 not found: ID does not exist" containerID="061993d62295cf199f2ee61d928cce1c38acaed8c3e5d1e162e9fb02d2abe305" Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.419465 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"061993d62295cf199f2ee61d928cce1c38acaed8c3e5d1e162e9fb02d2abe305"} err="failed to get container status \"061993d62295cf199f2ee61d928cce1c38acaed8c3e5d1e162e9fb02d2abe305\": rpc error: code = NotFound desc = could not find container \"061993d62295cf199f2ee61d928cce1c38acaed8c3e5d1e162e9fb02d2abe305\": container with ID starting with 061993d62295cf199f2ee61d928cce1c38acaed8c3e5d1e162e9fb02d2abe305 not found: ID does not exist" Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.419488 4813 scope.go:117] "RemoveContainer" containerID="9a0a95bb86b1e8bb7c760b1ab8dbf88099552a3998c4dc2b2eab4fe245c64505" Feb 19 19:22:55 crc kubenswrapper[4813]: E0219 19:22:55.419847 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a0a95bb86b1e8bb7c760b1ab8dbf88099552a3998c4dc2b2eab4fe245c64505\": container with ID starting with 
9a0a95bb86b1e8bb7c760b1ab8dbf88099552a3998c4dc2b2eab4fe245c64505 not found: ID does not exist" containerID="9a0a95bb86b1e8bb7c760b1ab8dbf88099552a3998c4dc2b2eab4fe245c64505" Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.419897 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a0a95bb86b1e8bb7c760b1ab8dbf88099552a3998c4dc2b2eab4fe245c64505"} err="failed to get container status \"9a0a95bb86b1e8bb7c760b1ab8dbf88099552a3998c4dc2b2eab4fe245c64505\": rpc error: code = NotFound desc = could not find container \"9a0a95bb86b1e8bb7c760b1ab8dbf88099552a3998c4dc2b2eab4fe245c64505\": container with ID starting with 9a0a95bb86b1e8bb7c760b1ab8dbf88099552a3998c4dc2b2eab4fe245c64505 not found: ID does not exist" Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.419931 4813 scope.go:117] "RemoveContainer" containerID="3ceeab8abb7fbd04fb8cf3c5966564f9769e9263b50da87c15ca43c5c8e06ac6" Feb 19 19:22:55 crc kubenswrapper[4813]: E0219 19:22:55.420295 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ceeab8abb7fbd04fb8cf3c5966564f9769e9263b50da87c15ca43c5c8e06ac6\": container with ID starting with 3ceeab8abb7fbd04fb8cf3c5966564f9769e9263b50da87c15ca43c5c8e06ac6 not found: ID does not exist" containerID="3ceeab8abb7fbd04fb8cf3c5966564f9769e9263b50da87c15ca43c5c8e06ac6" Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.420334 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ceeab8abb7fbd04fb8cf3c5966564f9769e9263b50da87c15ca43c5c8e06ac6"} err="failed to get container status \"3ceeab8abb7fbd04fb8cf3c5966564f9769e9263b50da87c15ca43c5c8e06ac6\": rpc error: code = NotFound desc = could not find container \"3ceeab8abb7fbd04fb8cf3c5966564f9769e9263b50da87c15ca43c5c8e06ac6\": container with ID starting with 3ceeab8abb7fbd04fb8cf3c5966564f9769e9263b50da87c15ca43c5c8e06ac6 not found: ID does not 
exist" Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.471603 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:22:55 crc kubenswrapper[4813]: E0219 19:22:55.471931 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.610858 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9fb323e-ba35-4ba3-908f-2946761b3fdf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9fb323e-ba35-4ba3-908f-2946761b3fdf" (UID: "f9fb323e-ba35-4ba3-908f-2946761b3fdf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.658659 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9fb323e-ba35-4ba3-908f-2946761b3fdf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.690516 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5m4dz"] Feb 19 19:22:55 crc kubenswrapper[4813]: I0219 19:22:55.697590 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5m4dz"] Feb 19 19:22:57 crc kubenswrapper[4813]: I0219 19:22:57.482457 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9fb323e-ba35-4ba3-908f-2946761b3fdf" path="/var/lib/kubelet/pods/f9fb323e-ba35-4ba3-908f-2946761b3fdf/volumes" Feb 19 19:23:10 crc kubenswrapper[4813]: I0219 19:23:10.471759 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:23:10 crc kubenswrapper[4813]: E0219 19:23:10.472484 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:23:23 crc kubenswrapper[4813]: I0219 19:23:23.473734 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7" Feb 19 19:23:23 crc kubenswrapper[4813]: E0219 19:23:23.474723 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:23:34 crc kubenswrapper[4813]: I0219 19:23:34.473156 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7"
Feb 19 19:23:34 crc kubenswrapper[4813]: E0219 19:23:34.474313 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:23:47 crc kubenswrapper[4813]: I0219 19:23:47.472421 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7"
Feb 19 19:23:47 crc kubenswrapper[4813]: E0219 19:23:47.474104 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:23:58 crc kubenswrapper[4813]: I0219 19:23:58.472248 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7"
Feb 19 19:23:58 crc kubenswrapper[4813]: E0219 19:23:58.473712 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.511767 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fz4qt"]
Feb 19 19:24:04 crc kubenswrapper[4813]: E0219 19:24:04.512647 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9fb323e-ba35-4ba3-908f-2946761b3fdf" containerName="registry-server"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.512664 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9fb323e-ba35-4ba3-908f-2946761b3fdf" containerName="registry-server"
Feb 19 19:24:04 crc kubenswrapper[4813]: E0219 19:24:04.512679 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9fb323e-ba35-4ba3-908f-2946761b3fdf" containerName="extract-content"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.512687 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9fb323e-ba35-4ba3-908f-2946761b3fdf" containerName="extract-content"
Feb 19 19:24:04 crc kubenswrapper[4813]: E0219 19:24:04.512716 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9fb323e-ba35-4ba3-908f-2946761b3fdf" containerName="extract-utilities"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.512727 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9fb323e-ba35-4ba3-908f-2946761b3fdf" containerName="extract-utilities"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.512925 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9fb323e-ba35-4ba3-908f-2946761b3fdf" containerName="registry-server"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.514522 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.531379 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fz4qt"]
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.672711 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-utilities\") pod \"redhat-operators-fz4qt\" (UID: \"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b\") " pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.672869 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-catalog-content\") pod \"redhat-operators-fz4qt\" (UID: \"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b\") " pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.672991 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncb57\" (UniqueName: \"kubernetes.io/projected/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-kube-api-access-ncb57\") pod \"redhat-operators-fz4qt\" (UID: \"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b\") " pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.774884 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-catalog-content\") pod \"redhat-operators-fz4qt\" (UID: \"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b\") " pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.775031 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncb57\" (UniqueName: \"kubernetes.io/projected/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-kube-api-access-ncb57\") pod \"redhat-operators-fz4qt\" (UID: \"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b\") " pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.775110 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-utilities\") pod \"redhat-operators-fz4qt\" (UID: \"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b\") " pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.775534 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-catalog-content\") pod \"redhat-operators-fz4qt\" (UID: \"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b\") " pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.775594 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-utilities\") pod \"redhat-operators-fz4qt\" (UID: \"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b\") " pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.792800 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncb57\" (UniqueName: \"kubernetes.io/projected/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-kube-api-access-ncb57\") pod \"redhat-operators-fz4qt\" (UID: \"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b\") " pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:04 crc kubenswrapper[4813]: I0219 19:24:04.840322 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:05 crc kubenswrapper[4813]: I0219 19:24:05.258392 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fz4qt"]
Feb 19 19:24:05 crc kubenswrapper[4813]: I0219 19:24:05.328624 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz4qt" event={"ID":"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b","Type":"ContainerStarted","Data":"2fd31c763a75d934b180d36a5044fceb6e0f9ab6e68e09b10b12f333d8ffb7f3"}
Feb 19 19:24:06 crc kubenswrapper[4813]: I0219 19:24:06.337656 4813 generic.go:334] "Generic (PLEG): container finished" podID="8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" containerID="cfdb2ca2fa867b85f6e0443eb89e11eee0270edf40cc16ab9be4b9106ca1aaaa" exitCode=0
Feb 19 19:24:06 crc kubenswrapper[4813]: I0219 19:24:06.337779 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz4qt" event={"ID":"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b","Type":"ContainerDied","Data":"cfdb2ca2fa867b85f6e0443eb89e11eee0270edf40cc16ab9be4b9106ca1aaaa"}
Feb 19 19:24:07 crc kubenswrapper[4813]: I0219 19:24:07.349795 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz4qt" event={"ID":"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b","Type":"ContainerStarted","Data":"157a767861a33f4fc58fbe458a4c70b34390c201b439bab5904e59a6a76180f0"}
Feb 19 19:24:08 crc kubenswrapper[4813]: I0219 19:24:08.359810 4813 generic.go:334] "Generic (PLEG): container finished" podID="8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" containerID="157a767861a33f4fc58fbe458a4c70b34390c201b439bab5904e59a6a76180f0" exitCode=0
Feb 19 19:24:08 crc kubenswrapper[4813]: I0219 19:24:08.359857 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz4qt" event={"ID":"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b","Type":"ContainerDied","Data":"157a767861a33f4fc58fbe458a4c70b34390c201b439bab5904e59a6a76180f0"}
Feb 19 19:24:09 crc kubenswrapper[4813]: I0219 19:24:09.372542 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz4qt" event={"ID":"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b","Type":"ContainerStarted","Data":"90bf974c2f6b669b56051aedfe757c195f32692464020e825f52b708a6a3fa41"}
Feb 19 19:24:09 crc kubenswrapper[4813]: I0219 19:24:09.392124 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fz4qt" podStartSLOduration=2.897394546 podStartE2EDuration="5.392100592s" podCreationTimestamp="2026-02-19 19:24:04 +0000 UTC" firstStartedPulling="2026-02-19 19:24:06.341159135 +0000 UTC m=+3265.566599716" lastFinishedPulling="2026-02-19 19:24:08.835865181 +0000 UTC m=+3268.061305762" observedRunningTime="2026-02-19 19:24:09.391569346 +0000 UTC m=+3268.617009887" watchObservedRunningTime="2026-02-19 19:24:09.392100592 +0000 UTC m=+3268.617541173"
Feb 19 19:24:10 crc kubenswrapper[4813]: I0219 19:24:10.471894 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7"
Feb 19 19:24:10 crc kubenswrapper[4813]: E0219 19:24:10.472150 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:24:14 crc kubenswrapper[4813]: I0219 19:24:14.841227 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:14 crc kubenswrapper[4813]: I0219 19:24:14.841562 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:15 crc kubenswrapper[4813]: I0219 19:24:15.888076 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fz4qt" podUID="8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" containerName="registry-server" probeResult="failure" output=<
Feb 19 19:24:15 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s
Feb 19 19:24:15 crc kubenswrapper[4813]: >
Feb 19 19:24:24 crc kubenswrapper[4813]: I0219 19:24:24.472250 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7"
Feb 19 19:24:24 crc kubenswrapper[4813]: E0219 19:24:24.473430 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:24:24 crc kubenswrapper[4813]: I0219 19:24:24.904295 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:24 crc kubenswrapper[4813]: I0219 19:24:24.963298 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:25 crc kubenswrapper[4813]: I0219 19:24:25.144233 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fz4qt"]
Feb 19 19:24:26 crc kubenswrapper[4813]: I0219 19:24:26.547346 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fz4qt" podUID="8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" containerName="registry-server" containerID="cri-o://90bf974c2f6b669b56051aedfe757c195f32692464020e825f52b708a6a3fa41" gracePeriod=2
Feb 19 19:24:26 crc kubenswrapper[4813]: I0219 19:24:26.981077 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.142340 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-catalog-content\") pod \"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b\" (UID: \"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b\") "
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.142499 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncb57\" (UniqueName: \"kubernetes.io/projected/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-kube-api-access-ncb57\") pod \"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b\" (UID: \"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b\") "
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.142566 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-utilities\") pod \"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b\" (UID: \"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b\") "
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.143676 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-utilities" (OuterVolumeSpecName: "utilities") pod "8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" (UID: "8045e1a7-6d0b-46a5-9fd5-429cbe03d83b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.147602 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-kube-api-access-ncb57" (OuterVolumeSpecName: "kube-api-access-ncb57") pod "8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" (UID: "8045e1a7-6d0b-46a5-9fd5-429cbe03d83b"). InnerVolumeSpecName "kube-api-access-ncb57". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.243809 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.243847 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncb57\" (UniqueName: \"kubernetes.io/projected/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-kube-api-access-ncb57\") on node \"crc\" DevicePath \"\""
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.312442 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" (UID: "8045e1a7-6d0b-46a5-9fd5-429cbe03d83b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.345217 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.557484 4813 generic.go:334] "Generic (PLEG): container finished" podID="8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" containerID="90bf974c2f6b669b56051aedfe757c195f32692464020e825f52b708a6a3fa41" exitCode=0
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.557540 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz4qt" event={"ID":"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b","Type":"ContainerDied","Data":"90bf974c2f6b669b56051aedfe757c195f32692464020e825f52b708a6a3fa41"}
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.557547 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fz4qt"
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.557577 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz4qt" event={"ID":"8045e1a7-6d0b-46a5-9fd5-429cbe03d83b","Type":"ContainerDied","Data":"2fd31c763a75d934b180d36a5044fceb6e0f9ab6e68e09b10b12f333d8ffb7f3"}
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.557603 4813 scope.go:117] "RemoveContainer" containerID="90bf974c2f6b669b56051aedfe757c195f32692464020e825f52b708a6a3fa41"
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.593308 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fz4qt"]
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.594930 4813 scope.go:117] "RemoveContainer" containerID="157a767861a33f4fc58fbe458a4c70b34390c201b439bab5904e59a6a76180f0"
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.599919 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fz4qt"]
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.614402 4813 scope.go:117] "RemoveContainer" containerID="cfdb2ca2fa867b85f6e0443eb89e11eee0270edf40cc16ab9be4b9106ca1aaaa"
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.639238 4813 scope.go:117] "RemoveContainer" containerID="90bf974c2f6b669b56051aedfe757c195f32692464020e825f52b708a6a3fa41"
Feb 19 19:24:27 crc kubenswrapper[4813]: E0219 19:24:27.639788 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90bf974c2f6b669b56051aedfe757c195f32692464020e825f52b708a6a3fa41\": container with ID starting with 90bf974c2f6b669b56051aedfe757c195f32692464020e825f52b708a6a3fa41 not found: ID does not exist" containerID="90bf974c2f6b669b56051aedfe757c195f32692464020e825f52b708a6a3fa41"
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.639837 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90bf974c2f6b669b56051aedfe757c195f32692464020e825f52b708a6a3fa41"} err="failed to get container status \"90bf974c2f6b669b56051aedfe757c195f32692464020e825f52b708a6a3fa41\": rpc error: code = NotFound desc = could not find container \"90bf974c2f6b669b56051aedfe757c195f32692464020e825f52b708a6a3fa41\": container with ID starting with 90bf974c2f6b669b56051aedfe757c195f32692464020e825f52b708a6a3fa41 not found: ID does not exist"
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.639867 4813 scope.go:117] "RemoveContainer" containerID="157a767861a33f4fc58fbe458a4c70b34390c201b439bab5904e59a6a76180f0"
Feb 19 19:24:27 crc kubenswrapper[4813]: E0219 19:24:27.640304 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"157a767861a33f4fc58fbe458a4c70b34390c201b439bab5904e59a6a76180f0\": container with ID starting with 157a767861a33f4fc58fbe458a4c70b34390c201b439bab5904e59a6a76180f0 not found: ID does not exist" containerID="157a767861a33f4fc58fbe458a4c70b34390c201b439bab5904e59a6a76180f0"
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.640326 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"157a767861a33f4fc58fbe458a4c70b34390c201b439bab5904e59a6a76180f0"} err="failed to get container status \"157a767861a33f4fc58fbe458a4c70b34390c201b439bab5904e59a6a76180f0\": rpc error: code = NotFound desc = could not find container \"157a767861a33f4fc58fbe458a4c70b34390c201b439bab5904e59a6a76180f0\": container with ID starting with 157a767861a33f4fc58fbe458a4c70b34390c201b439bab5904e59a6a76180f0 not found: ID does not exist"
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.640340 4813 scope.go:117] "RemoveContainer" containerID="cfdb2ca2fa867b85f6e0443eb89e11eee0270edf40cc16ab9be4b9106ca1aaaa"
Feb 19 19:24:27 crc kubenswrapper[4813]: E0219 19:24:27.640562 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfdb2ca2fa867b85f6e0443eb89e11eee0270edf40cc16ab9be4b9106ca1aaaa\": container with ID starting with cfdb2ca2fa867b85f6e0443eb89e11eee0270edf40cc16ab9be4b9106ca1aaaa not found: ID does not exist" containerID="cfdb2ca2fa867b85f6e0443eb89e11eee0270edf40cc16ab9be4b9106ca1aaaa"
Feb 19 19:24:27 crc kubenswrapper[4813]: I0219 19:24:27.640585 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfdb2ca2fa867b85f6e0443eb89e11eee0270edf40cc16ab9be4b9106ca1aaaa"} err="failed to get container status \"cfdb2ca2fa867b85f6e0443eb89e11eee0270edf40cc16ab9be4b9106ca1aaaa\": rpc error: code = NotFound desc = could not find container \"cfdb2ca2fa867b85f6e0443eb89e11eee0270edf40cc16ab9be4b9106ca1aaaa\": container with ID starting with cfdb2ca2fa867b85f6e0443eb89e11eee0270edf40cc16ab9be4b9106ca1aaaa not found: ID does not exist"
Feb 19 19:24:29 crc kubenswrapper[4813]: I0219 19:24:29.487296 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" path="/var/lib/kubelet/pods/8045e1a7-6d0b-46a5-9fd5-429cbe03d83b/volumes"
Feb 19 19:24:36 crc kubenswrapper[4813]: I0219 19:24:36.472591 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7"
Feb 19 19:24:37 crc kubenswrapper[4813]: I0219 19:24:37.666975 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"b0e049e1f75fc0273497500652ead6bd143c1c44c544a9754ed2183b653760dd"}
Feb 19 19:27:00 crc kubenswrapper[4813]: I0219 19:27:00.330242 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 19:27:00 crc kubenswrapper[4813]: I0219 19:27:00.331064 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 19:27:30 crc kubenswrapper[4813]: I0219 19:27:30.329796 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 19:27:30 crc kubenswrapper[4813]: I0219 19:27:30.330458 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 19:28:00 crc kubenswrapper[4813]: I0219 19:28:00.329334 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 19:28:00 crc kubenswrapper[4813]: I0219 19:28:00.329838 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 19:28:00 crc kubenswrapper[4813]: I0219 19:28:00.329881 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm"
Feb 19 19:28:00 crc kubenswrapper[4813]: I0219 19:28:00.330562 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0e049e1f75fc0273497500652ead6bd143c1c44c544a9754ed2183b653760dd"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 19:28:00 crc kubenswrapper[4813]: I0219 19:28:00.330617 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://b0e049e1f75fc0273497500652ead6bd143c1c44c544a9754ed2183b653760dd" gracePeriod=600
Feb 19 19:28:01 crc kubenswrapper[4813]: I0219 19:28:01.316492 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="b0e049e1f75fc0273497500652ead6bd143c1c44c544a9754ed2183b653760dd" exitCode=0
Feb 19 19:28:01 crc kubenswrapper[4813]: I0219 19:28:01.316615 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"b0e049e1f75fc0273497500652ead6bd143c1c44c544a9754ed2183b653760dd"}
Feb 19 19:28:01 crc kubenswrapper[4813]: I0219 19:28:01.317226 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9"}
Feb 19 19:28:01 crc kubenswrapper[4813]: I0219 19:28:01.317258 4813 scope.go:117] "RemoveContainer" containerID="b774c43bef336a65d730790127bfeb43bdd55642c8eadde1ac791fa717f811e7"
Feb 19 19:29:29 crc kubenswrapper[4813]: I0219 19:29:29.834698 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-44hdg"]
Feb 19 19:29:29 crc kubenswrapper[4813]: E0219 19:29:29.835426 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" containerName="extract-content"
Feb 19 19:29:29 crc kubenswrapper[4813]: I0219 19:29:29.835440 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" containerName="extract-content"
Feb 19 19:29:29 crc kubenswrapper[4813]: E0219 19:29:29.835464 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" containerName="extract-utilities"
Feb 19 19:29:29 crc kubenswrapper[4813]: I0219 19:29:29.835471 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" containerName="extract-utilities"
Feb 19 19:29:29 crc kubenswrapper[4813]: E0219 19:29:29.835482 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" containerName="registry-server"
Feb 19 19:29:29 crc kubenswrapper[4813]: I0219 19:29:29.835492 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" containerName="registry-server"
Feb 19 19:29:29 crc kubenswrapper[4813]: I0219 19:29:29.835617 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8045e1a7-6d0b-46a5-9fd5-429cbe03d83b" containerName="registry-server"
Feb 19 19:29:29 crc kubenswrapper[4813]: I0219 19:29:29.836501 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:29 crc kubenswrapper[4813]: I0219 19:29:29.856225 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44hdg"]
Feb 19 19:29:29 crc kubenswrapper[4813]: I0219 19:29:29.930713 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d86fa475-8ae5-4e34-b917-c14057bcfbca-utilities\") pod \"redhat-marketplace-44hdg\" (UID: \"d86fa475-8ae5-4e34-b917-c14057bcfbca\") " pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:29 crc kubenswrapper[4813]: I0219 19:29:29.930771 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh9x5\" (UniqueName: \"kubernetes.io/projected/d86fa475-8ae5-4e34-b917-c14057bcfbca-kube-api-access-bh9x5\") pod \"redhat-marketplace-44hdg\" (UID: \"d86fa475-8ae5-4e34-b917-c14057bcfbca\") " pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:29 crc kubenswrapper[4813]: I0219 19:29:29.930835 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d86fa475-8ae5-4e34-b917-c14057bcfbca-catalog-content\") pod \"redhat-marketplace-44hdg\" (UID: \"d86fa475-8ae5-4e34-b917-c14057bcfbca\") " pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:30 crc kubenswrapper[4813]: I0219 19:29:30.032120 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d86fa475-8ae5-4e34-b917-c14057bcfbca-catalog-content\") pod \"redhat-marketplace-44hdg\" (UID: \"d86fa475-8ae5-4e34-b917-c14057bcfbca\") " pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:30 crc kubenswrapper[4813]: I0219 19:29:30.032227 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d86fa475-8ae5-4e34-b917-c14057bcfbca-utilities\") pod \"redhat-marketplace-44hdg\" (UID: \"d86fa475-8ae5-4e34-b917-c14057bcfbca\") " pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:30 crc kubenswrapper[4813]: I0219 19:29:30.032267 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh9x5\" (UniqueName: \"kubernetes.io/projected/d86fa475-8ae5-4e34-b917-c14057bcfbca-kube-api-access-bh9x5\") pod \"redhat-marketplace-44hdg\" (UID: \"d86fa475-8ae5-4e34-b917-c14057bcfbca\") " pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:30 crc kubenswrapper[4813]: I0219 19:29:30.032853 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d86fa475-8ae5-4e34-b917-c14057bcfbca-utilities\") pod \"redhat-marketplace-44hdg\" (UID: \"d86fa475-8ae5-4e34-b917-c14057bcfbca\") " pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:30 crc kubenswrapper[4813]: I0219 19:29:30.032857 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d86fa475-8ae5-4e34-b917-c14057bcfbca-catalog-content\") pod \"redhat-marketplace-44hdg\" (UID: \"d86fa475-8ae5-4e34-b917-c14057bcfbca\") " pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:30 crc kubenswrapper[4813]: I0219 19:29:30.061461 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh9x5\" (UniqueName: \"kubernetes.io/projected/d86fa475-8ae5-4e34-b917-c14057bcfbca-kube-api-access-bh9x5\") pod \"redhat-marketplace-44hdg\" (UID: \"d86fa475-8ae5-4e34-b917-c14057bcfbca\") " pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:30 crc kubenswrapper[4813]: I0219 19:29:30.166396 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:30 crc kubenswrapper[4813]: I0219 19:29:30.397864 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44hdg"]
Feb 19 19:29:31 crc kubenswrapper[4813]: I0219 19:29:31.102994 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44hdg" event={"ID":"d86fa475-8ae5-4e34-b917-c14057bcfbca","Type":"ContainerStarted","Data":"e00d085305493a88a770f553b94ff945957f94966291efb68a52bcc77aa1dcd9"}
Feb 19 19:29:32 crc kubenswrapper[4813]: I0219 19:29:32.112144 4813 generic.go:334] "Generic (PLEG): container finished" podID="d86fa475-8ae5-4e34-b917-c14057bcfbca" containerID="94fe81119fc00d21d878dc497c420d3dc07238cfffaf5d5cd78f4d23873f0c13" exitCode=0
Feb 19 19:29:32 crc kubenswrapper[4813]: I0219 19:29:32.112203 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44hdg" event={"ID":"d86fa475-8ae5-4e34-b917-c14057bcfbca","Type":"ContainerDied","Data":"94fe81119fc00d21d878dc497c420d3dc07238cfffaf5d5cd78f4d23873f0c13"}
Feb 19 19:29:32 crc kubenswrapper[4813]: I0219 19:29:32.114795 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 19:29:33 crc kubenswrapper[4813]: I0219 19:29:33.133427 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44hdg" event={"ID":"d86fa475-8ae5-4e34-b917-c14057bcfbca","Type":"ContainerStarted","Data":"a5b4416bc004c0ab821baadf70756c551c01dbd5a007dbce422e19743e8253e6"}
Feb 19 19:29:34 crc kubenswrapper[4813]: I0219 19:29:34.147770 4813 generic.go:334] "Generic (PLEG): container finished" podID="d86fa475-8ae5-4e34-b917-c14057bcfbca" containerID="a5b4416bc004c0ab821baadf70756c551c01dbd5a007dbce422e19743e8253e6" exitCode=0
Feb 19 19:29:34 crc kubenswrapper[4813]: I0219 19:29:34.147838 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44hdg" event={"ID":"d86fa475-8ae5-4e34-b917-c14057bcfbca","Type":"ContainerDied","Data":"a5b4416bc004c0ab821baadf70756c551c01dbd5a007dbce422e19743e8253e6"}
Feb 19 19:29:35 crc kubenswrapper[4813]: I0219 19:29:35.156375 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44hdg" event={"ID":"d86fa475-8ae5-4e34-b917-c14057bcfbca","Type":"ContainerStarted","Data":"b438919a271c547fc9736827e2ae609f936fa191ac252b6f85f9d066e305b322"}
Feb 19 19:29:35 crc kubenswrapper[4813]: I0219 19:29:35.184376 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-44hdg" podStartSLOduration=3.725428983 podStartE2EDuration="6.184356842s" podCreationTimestamp="2026-02-19 19:29:29 +0000 UTC" firstStartedPulling="2026-02-19 19:29:32.114546748 +0000 UTC m=+3591.339987289" lastFinishedPulling="2026-02-19 19:29:34.573474607 +0000 UTC m=+3593.798915148" observedRunningTime="2026-02-19 19:29:35.179180542 +0000 UTC m=+3594.404621123" watchObservedRunningTime="2026-02-19 19:29:35.184356842 +0000 UTC m=+3594.409797393"
Feb 19 19:29:40 crc kubenswrapper[4813]: I0219 19:29:40.166838 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:40 crc kubenswrapper[4813]: I0219 19:29:40.167454 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:40 crc kubenswrapper[4813]: I0219 19:29:40.231062 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:40 crc kubenswrapper[4813]: I0219 19:29:40.282823 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:40 crc kubenswrapper[4813]: I0219 19:29:40.479270 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-44hdg"]
Feb 19 19:29:42 crc kubenswrapper[4813]: I0219 19:29:42.209116 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-44hdg" podUID="d86fa475-8ae5-4e34-b917-c14057bcfbca" containerName="registry-server" containerID="cri-o://b438919a271c547fc9736827e2ae609f936fa191ac252b6f85f9d066e305b322" gracePeriod=2
Feb 19 19:29:42 crc kubenswrapper[4813]: I0219 19:29:42.554290 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44hdg"
Feb 19 19:29:42 crc kubenswrapper[4813]: I0219 19:29:42.711581 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh9x5\" (UniqueName: \"kubernetes.io/projected/d86fa475-8ae5-4e34-b917-c14057bcfbca-kube-api-access-bh9x5\") pod \"d86fa475-8ae5-4e34-b917-c14057bcfbca\" (UID: \"d86fa475-8ae5-4e34-b917-c14057bcfbca\") "
Feb 19 19:29:42 crc kubenswrapper[4813]: I0219 19:29:42.712045 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d86fa475-8ae5-4e34-b917-c14057bcfbca-utilities\") pod \"d86fa475-8ae5-4e34-b917-c14057bcfbca\" (UID: \"d86fa475-8ae5-4e34-b917-c14057bcfbca\") "
Feb 19 19:29:42 crc kubenswrapper[4813]: I0219 19:29:42.712176 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d86fa475-8ae5-4e34-b917-c14057bcfbca-catalog-content\") pod \"d86fa475-8ae5-4e34-b917-c14057bcfbca\" (UID: \"d86fa475-8ae5-4e34-b917-c14057bcfbca\") "
Feb 19 19:29:42 crc kubenswrapper[4813]: I0219 19:29:42.713467 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d86fa475-8ae5-4e34-b917-c14057bcfbca-utilities" (OuterVolumeSpecName: "utilities") pod "d86fa475-8ae5-4e34-b917-c14057bcfbca" (UID: "d86fa475-8ae5-4e34-b917-c14057bcfbca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:29:42 crc kubenswrapper[4813]: I0219 19:29:42.721266 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d86fa475-8ae5-4e34-b917-c14057bcfbca-kube-api-access-bh9x5" (OuterVolumeSpecName: "kube-api-access-bh9x5") pod "d86fa475-8ae5-4e34-b917-c14057bcfbca" (UID: "d86fa475-8ae5-4e34-b917-c14057bcfbca"). InnerVolumeSpecName "kube-api-access-bh9x5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:29:42 crc kubenswrapper[4813]: I0219 19:29:42.756720 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d86fa475-8ae5-4e34-b917-c14057bcfbca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d86fa475-8ae5-4e34-b917-c14057bcfbca" (UID: "d86fa475-8ae5-4e34-b917-c14057bcfbca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:29:42 crc kubenswrapper[4813]: I0219 19:29:42.813911 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh9x5\" (UniqueName: \"kubernetes.io/projected/d86fa475-8ae5-4e34-b917-c14057bcfbca-kube-api-access-bh9x5\") on node \"crc\" DevicePath \"\"" Feb 19 19:29:42 crc kubenswrapper[4813]: I0219 19:29:42.813982 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d86fa475-8ae5-4e34-b917-c14057bcfbca-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:29:42 crc kubenswrapper[4813]: I0219 19:29:42.813995 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d86fa475-8ae5-4e34-b917-c14057bcfbca-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.217805 4813 generic.go:334] "Generic (PLEG): container finished" podID="d86fa475-8ae5-4e34-b917-c14057bcfbca" containerID="b438919a271c547fc9736827e2ae609f936fa191ac252b6f85f9d066e305b322" exitCode=0 Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.217895 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44hdg" Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.217966 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44hdg" event={"ID":"d86fa475-8ae5-4e34-b917-c14057bcfbca","Type":"ContainerDied","Data":"b438919a271c547fc9736827e2ae609f936fa191ac252b6f85f9d066e305b322"} Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.219095 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44hdg" event={"ID":"d86fa475-8ae5-4e34-b917-c14057bcfbca","Type":"ContainerDied","Data":"e00d085305493a88a770f553b94ff945957f94966291efb68a52bcc77aa1dcd9"} Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.219121 4813 scope.go:117] "RemoveContainer" containerID="b438919a271c547fc9736827e2ae609f936fa191ac252b6f85f9d066e305b322" Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.237897 4813 scope.go:117] "RemoveContainer" containerID="a5b4416bc004c0ab821baadf70756c551c01dbd5a007dbce422e19743e8253e6" Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.256106 4813 scope.go:117] "RemoveContainer" containerID="94fe81119fc00d21d878dc497c420d3dc07238cfffaf5d5cd78f4d23873f0c13" Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.265510 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-44hdg"] Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.275345 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-44hdg"] Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.286124 4813 scope.go:117] "RemoveContainer" containerID="b438919a271c547fc9736827e2ae609f936fa191ac252b6f85f9d066e305b322" Feb 19 19:29:43 crc kubenswrapper[4813]: E0219 19:29:43.286668 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b438919a271c547fc9736827e2ae609f936fa191ac252b6f85f9d066e305b322\": container with ID starting with b438919a271c547fc9736827e2ae609f936fa191ac252b6f85f9d066e305b322 not found: ID does not exist" containerID="b438919a271c547fc9736827e2ae609f936fa191ac252b6f85f9d066e305b322" Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.286699 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b438919a271c547fc9736827e2ae609f936fa191ac252b6f85f9d066e305b322"} err="failed to get container status \"b438919a271c547fc9736827e2ae609f936fa191ac252b6f85f9d066e305b322\": rpc error: code = NotFound desc = could not find container \"b438919a271c547fc9736827e2ae609f936fa191ac252b6f85f9d066e305b322\": container with ID starting with b438919a271c547fc9736827e2ae609f936fa191ac252b6f85f9d066e305b322 not found: ID does not exist" Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.286718 4813 scope.go:117] "RemoveContainer" containerID="a5b4416bc004c0ab821baadf70756c551c01dbd5a007dbce422e19743e8253e6" Feb 19 19:29:43 crc kubenswrapper[4813]: E0219 19:29:43.287126 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5b4416bc004c0ab821baadf70756c551c01dbd5a007dbce422e19743e8253e6\": container with ID starting with a5b4416bc004c0ab821baadf70756c551c01dbd5a007dbce422e19743e8253e6 not found: ID does not exist" containerID="a5b4416bc004c0ab821baadf70756c551c01dbd5a007dbce422e19743e8253e6" Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.287149 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5b4416bc004c0ab821baadf70756c551c01dbd5a007dbce422e19743e8253e6"} err="failed to get container status \"a5b4416bc004c0ab821baadf70756c551c01dbd5a007dbce422e19743e8253e6\": rpc error: code = NotFound desc = could not find container \"a5b4416bc004c0ab821baadf70756c551c01dbd5a007dbce422e19743e8253e6\": container with ID 
starting with a5b4416bc004c0ab821baadf70756c551c01dbd5a007dbce422e19743e8253e6 not found: ID does not exist" Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.287163 4813 scope.go:117] "RemoveContainer" containerID="94fe81119fc00d21d878dc497c420d3dc07238cfffaf5d5cd78f4d23873f0c13" Feb 19 19:29:43 crc kubenswrapper[4813]: E0219 19:29:43.287751 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94fe81119fc00d21d878dc497c420d3dc07238cfffaf5d5cd78f4d23873f0c13\": container with ID starting with 94fe81119fc00d21d878dc497c420d3dc07238cfffaf5d5cd78f4d23873f0c13 not found: ID does not exist" containerID="94fe81119fc00d21d878dc497c420d3dc07238cfffaf5d5cd78f4d23873f0c13" Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.287782 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94fe81119fc00d21d878dc497c420d3dc07238cfffaf5d5cd78f4d23873f0c13"} err="failed to get container status \"94fe81119fc00d21d878dc497c420d3dc07238cfffaf5d5cd78f4d23873f0c13\": rpc error: code = NotFound desc = could not find container \"94fe81119fc00d21d878dc497c420d3dc07238cfffaf5d5cd78f4d23873f0c13\": container with ID starting with 94fe81119fc00d21d878dc497c420d3dc07238cfffaf5d5cd78f4d23873f0c13 not found: ID does not exist" Feb 19 19:29:43 crc kubenswrapper[4813]: I0219 19:29:43.483538 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d86fa475-8ae5-4e34-b917-c14057bcfbca" path="/var/lib/kubelet/pods/d86fa475-8ae5-4e34-b917-c14057bcfbca/volumes" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.173699 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs"] Feb 19 19:30:00 crc kubenswrapper[4813]: E0219 19:30:00.174669 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d86fa475-8ae5-4e34-b917-c14057bcfbca" containerName="extract-utilities" Feb 
19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.174686 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d86fa475-8ae5-4e34-b917-c14057bcfbca" containerName="extract-utilities" Feb 19 19:30:00 crc kubenswrapper[4813]: E0219 19:30:00.174701 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d86fa475-8ae5-4e34-b917-c14057bcfbca" containerName="registry-server" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.174710 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d86fa475-8ae5-4e34-b917-c14057bcfbca" containerName="registry-server" Feb 19 19:30:00 crc kubenswrapper[4813]: E0219 19:30:00.174754 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d86fa475-8ae5-4e34-b917-c14057bcfbca" containerName="extract-content" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.174765 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d86fa475-8ae5-4e34-b917-c14057bcfbca" containerName="extract-content" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.174971 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d86fa475-8ae5-4e34-b917-c14057bcfbca" containerName="registry-server" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.175529 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.177929 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.179553 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.200159 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs"] Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.330476 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.330595 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.363228 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbf0f104-f84a-4537-8205-9791f33f7be0-secret-volume\") pod \"collect-profiles-29525490-4n9xs\" (UID: \"dbf0f104-f84a-4537-8205-9791f33f7be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.363438 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbf0f104-f84a-4537-8205-9791f33f7be0-config-volume\") pod \"collect-profiles-29525490-4n9xs\" (UID: \"dbf0f104-f84a-4537-8205-9791f33f7be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.363500 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkhs9\" (UniqueName: \"kubernetes.io/projected/dbf0f104-f84a-4537-8205-9791f33f7be0-kube-api-access-bkhs9\") pod \"collect-profiles-29525490-4n9xs\" (UID: \"dbf0f104-f84a-4537-8205-9791f33f7be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.464417 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbf0f104-f84a-4537-8205-9791f33f7be0-secret-volume\") pod \"collect-profiles-29525490-4n9xs\" (UID: \"dbf0f104-f84a-4537-8205-9791f33f7be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.464551 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbf0f104-f84a-4537-8205-9791f33f7be0-config-volume\") pod \"collect-profiles-29525490-4n9xs\" (UID: \"dbf0f104-f84a-4537-8205-9791f33f7be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.464598 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkhs9\" (UniqueName: \"kubernetes.io/projected/dbf0f104-f84a-4537-8205-9791f33f7be0-kube-api-access-bkhs9\") pod \"collect-profiles-29525490-4n9xs\" (UID: \"dbf0f104-f84a-4537-8205-9791f33f7be0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.465665 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbf0f104-f84a-4537-8205-9791f33f7be0-config-volume\") pod \"collect-profiles-29525490-4n9xs\" (UID: \"dbf0f104-f84a-4537-8205-9791f33f7be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.472507 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbf0f104-f84a-4537-8205-9791f33f7be0-secret-volume\") pod \"collect-profiles-29525490-4n9xs\" (UID: \"dbf0f104-f84a-4537-8205-9791f33f7be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.502333 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkhs9\" (UniqueName: \"kubernetes.io/projected/dbf0f104-f84a-4537-8205-9791f33f7be0-kube-api-access-bkhs9\") pod \"collect-profiles-29525490-4n9xs\" (UID: \"dbf0f104-f84a-4537-8205-9791f33f7be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.510245 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" Feb 19 19:30:00 crc kubenswrapper[4813]: I0219 19:30:00.931805 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs"] Feb 19 19:30:01 crc kubenswrapper[4813]: I0219 19:30:01.360402 4813 generic.go:334] "Generic (PLEG): container finished" podID="dbf0f104-f84a-4537-8205-9791f33f7be0" containerID="c0e01636cd151dc9aec63da82175597cbfbb90bb1ff9da7551709ecdcbb57fc1" exitCode=0 Feb 19 19:30:01 crc kubenswrapper[4813]: I0219 19:30:01.360471 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" event={"ID":"dbf0f104-f84a-4537-8205-9791f33f7be0","Type":"ContainerDied","Data":"c0e01636cd151dc9aec63da82175597cbfbb90bb1ff9da7551709ecdcbb57fc1"} Feb 19 19:30:01 crc kubenswrapper[4813]: I0219 19:30:01.360514 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" event={"ID":"dbf0f104-f84a-4537-8205-9791f33f7be0","Type":"ContainerStarted","Data":"18b674ab0efbc571a938448461e0616858a619db7e9d731bfc3eea393049b874"} Feb 19 19:30:02 crc kubenswrapper[4813]: I0219 19:30:02.700582 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" Feb 19 19:30:02 crc kubenswrapper[4813]: I0219 19:30:02.804000 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbf0f104-f84a-4537-8205-9791f33f7be0-secret-volume\") pod \"dbf0f104-f84a-4537-8205-9791f33f7be0\" (UID: \"dbf0f104-f84a-4537-8205-9791f33f7be0\") " Feb 19 19:30:02 crc kubenswrapper[4813]: I0219 19:30:02.804090 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkhs9\" (UniqueName: \"kubernetes.io/projected/dbf0f104-f84a-4537-8205-9791f33f7be0-kube-api-access-bkhs9\") pod \"dbf0f104-f84a-4537-8205-9791f33f7be0\" (UID: \"dbf0f104-f84a-4537-8205-9791f33f7be0\") " Feb 19 19:30:02 crc kubenswrapper[4813]: I0219 19:30:02.804145 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbf0f104-f84a-4537-8205-9791f33f7be0-config-volume\") pod \"dbf0f104-f84a-4537-8205-9791f33f7be0\" (UID: \"dbf0f104-f84a-4537-8205-9791f33f7be0\") " Feb 19 19:30:02 crc kubenswrapper[4813]: I0219 19:30:02.805144 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf0f104-f84a-4537-8205-9791f33f7be0-config-volume" (OuterVolumeSpecName: "config-volume") pod "dbf0f104-f84a-4537-8205-9791f33f7be0" (UID: "dbf0f104-f84a-4537-8205-9791f33f7be0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:30:02 crc kubenswrapper[4813]: I0219 19:30:02.811425 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf0f104-f84a-4537-8205-9791f33f7be0-kube-api-access-bkhs9" (OuterVolumeSpecName: "kube-api-access-bkhs9") pod "dbf0f104-f84a-4537-8205-9791f33f7be0" (UID: "dbf0f104-f84a-4537-8205-9791f33f7be0"). 
InnerVolumeSpecName "kube-api-access-bkhs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:30:02 crc kubenswrapper[4813]: I0219 19:30:02.817753 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf0f104-f84a-4537-8205-9791f33f7be0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dbf0f104-f84a-4537-8205-9791f33f7be0" (UID: "dbf0f104-f84a-4537-8205-9791f33f7be0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:30:02 crc kubenswrapper[4813]: I0219 19:30:02.906316 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkhs9\" (UniqueName: \"kubernetes.io/projected/dbf0f104-f84a-4537-8205-9791f33f7be0-kube-api-access-bkhs9\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:02 crc kubenswrapper[4813]: I0219 19:30:02.906370 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbf0f104-f84a-4537-8205-9791f33f7be0-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:02 crc kubenswrapper[4813]: I0219 19:30:02.906390 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbf0f104-f84a-4537-8205-9791f33f7be0-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4813]: I0219 19:30:03.374904 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" event={"ID":"dbf0f104-f84a-4537-8205-9791f33f7be0","Type":"ContainerDied","Data":"18b674ab0efbc571a938448461e0616858a619db7e9d731bfc3eea393049b874"} Feb 19 19:30:03 crc kubenswrapper[4813]: I0219 19:30:03.374942 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18b674ab0efbc571a938448461e0616858a619db7e9d731bfc3eea393049b874" Feb 19 19:30:03 crc kubenswrapper[4813]: I0219 19:30:03.375019 4813 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs" Feb 19 19:30:03 crc kubenswrapper[4813]: I0219 19:30:03.802879 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s"] Feb 19 19:30:03 crc kubenswrapper[4813]: I0219 19:30:03.809301 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525445-cd67s"] Feb 19 19:30:05 crc kubenswrapper[4813]: I0219 19:30:05.500302 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00cbc745-db4e-48c3-a8b1-21561917a0eb" path="/var/lib/kubelet/pods/00cbc745-db4e-48c3-a8b1-21561917a0eb/volumes" Feb 19 19:30:30 crc kubenswrapper[4813]: I0219 19:30:30.726177 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:30:30 crc kubenswrapper[4813]: I0219 19:30:30.726724 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:30:51 crc kubenswrapper[4813]: I0219 19:30:51.576751 4813 scope.go:117] "RemoveContainer" containerID="5d02044cdc86dae23dcd27c93a04ac77447a8420276c23f63364b126070ffe18" Feb 19 19:31:00 crc kubenswrapper[4813]: I0219 19:31:00.329533 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:31:00 crc kubenswrapper[4813]: I0219 19:31:00.330403 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:31:00 crc kubenswrapper[4813]: I0219 19:31:00.330477 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 19:31:00 crc kubenswrapper[4813]: I0219 19:31:00.331394 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:31:00 crc kubenswrapper[4813]: I0219 19:31:00.331495 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" gracePeriod=600 Feb 19 19:31:01 crc kubenswrapper[4813]: I0219 19:31:01.016253 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" exitCode=0 Feb 19 19:31:01 crc kubenswrapper[4813]: I0219 19:31:01.016338 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" 
event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9"} Feb 19 19:31:01 crc kubenswrapper[4813]: I0219 19:31:01.016723 4813 scope.go:117] "RemoveContainer" containerID="b0e049e1f75fc0273497500652ead6bd143c1c44c544a9754ed2183b653760dd" Feb 19 19:31:01 crc kubenswrapper[4813]: E0219 19:31:01.034265 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:31:02 crc kubenswrapper[4813]: I0219 19:31:02.024852 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:31:02 crc kubenswrapper[4813]: E0219 19:31:02.025321 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:31:14 crc kubenswrapper[4813]: I0219 19:31:14.472024 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:31:14 crc kubenswrapper[4813]: E0219 19:31:14.472876 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:31:28 crc kubenswrapper[4813]: I0219 19:31:28.471285 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:31:28 crc kubenswrapper[4813]: E0219 19:31:28.471975 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:31:43 crc kubenswrapper[4813]: I0219 19:31:43.471728 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:31:43 crc kubenswrapper[4813]: E0219 19:31:43.472239 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:31:55 crc kubenswrapper[4813]: I0219 19:31:55.471825 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:31:55 crc kubenswrapper[4813]: E0219 19:31:55.472623 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:32:06 crc kubenswrapper[4813]: I0219 19:32:06.471329 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:32:06 crc kubenswrapper[4813]: E0219 19:32:06.473256 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:32:19 crc kubenswrapper[4813]: I0219 19:32:19.471778 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:32:19 crc kubenswrapper[4813]: E0219 19:32:19.473232 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:32:30 crc kubenswrapper[4813]: I0219 19:32:30.471131 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:32:30 crc kubenswrapper[4813]: E0219 19:32:30.471843 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:32:45 crc kubenswrapper[4813]: I0219 19:32:45.473182 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:32:45 crc kubenswrapper[4813]: E0219 19:32:45.476232 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:32:56 crc kubenswrapper[4813]: I0219 19:32:56.471909 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:32:56 crc kubenswrapper[4813]: E0219 19:32:56.472916 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:33:11 crc kubenswrapper[4813]: I0219 19:33:11.480350 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:33:11 crc kubenswrapper[4813]: E0219 19:33:11.481180 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:33:17 crc kubenswrapper[4813]: I0219 19:33:17.266238 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4vjml"] Feb 19 19:33:17 crc kubenswrapper[4813]: E0219 19:33:17.267047 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf0f104-f84a-4537-8205-9791f33f7be0" containerName="collect-profiles" Feb 19 19:33:17 crc kubenswrapper[4813]: I0219 19:33:17.267060 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf0f104-f84a-4537-8205-9791f33f7be0" containerName="collect-profiles" Feb 19 19:33:17 crc kubenswrapper[4813]: I0219 19:33:17.267254 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbf0f104-f84a-4537-8205-9791f33f7be0" containerName="collect-profiles" Feb 19 19:33:17 crc kubenswrapper[4813]: I0219 19:33:17.268452 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:17 crc kubenswrapper[4813]: I0219 19:33:17.283043 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4vjml"] Feb 19 19:33:17 crc kubenswrapper[4813]: I0219 19:33:17.412432 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6k5z\" (UniqueName: \"kubernetes.io/projected/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-kube-api-access-j6k5z\") pod \"certified-operators-4vjml\" (UID: \"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925\") " pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:17 crc kubenswrapper[4813]: I0219 19:33:17.412503 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-catalog-content\") pod \"certified-operators-4vjml\" (UID: \"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925\") " pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:17 crc kubenswrapper[4813]: I0219 19:33:17.412716 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-utilities\") pod \"certified-operators-4vjml\" (UID: \"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925\") " pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:17 crc kubenswrapper[4813]: I0219 19:33:17.513916 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-utilities\") pod \"certified-operators-4vjml\" (UID: \"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925\") " pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:17 crc kubenswrapper[4813]: I0219 19:33:17.514042 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j6k5z\" (UniqueName: \"kubernetes.io/projected/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-kube-api-access-j6k5z\") pod \"certified-operators-4vjml\" (UID: \"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925\") " pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:17 crc kubenswrapper[4813]: I0219 19:33:17.514092 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-catalog-content\") pod \"certified-operators-4vjml\" (UID: \"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925\") " pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:17 crc kubenswrapper[4813]: I0219 19:33:17.514491 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-utilities\") pod \"certified-operators-4vjml\" (UID: \"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925\") " pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:17 crc kubenswrapper[4813]: I0219 19:33:17.514509 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-catalog-content\") pod \"certified-operators-4vjml\" (UID: \"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925\") " pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:17 crc kubenswrapper[4813]: I0219 19:33:17.537918 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6k5z\" (UniqueName: \"kubernetes.io/projected/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-kube-api-access-j6k5z\") pod \"certified-operators-4vjml\" (UID: \"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925\") " pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:17 crc kubenswrapper[4813]: I0219 19:33:17.596748 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:18 crc kubenswrapper[4813]: I0219 19:33:18.100437 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4vjml"] Feb 19 19:33:18 crc kubenswrapper[4813]: I0219 19:33:18.123153 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vjml" event={"ID":"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925","Type":"ContainerStarted","Data":"6f9dce06afcd1d2880e68b0168733715fbc5a6301985a4dbecf662ff97083902"} Feb 19 19:33:19 crc kubenswrapper[4813]: I0219 19:33:19.129709 4813 generic.go:334] "Generic (PLEG): container finished" podID="29b95c10-c2b0-44ee-b70e-f3f3c2aaf925" containerID="7c701a55f598fba99ef410b02904639fea879e91b01de9c4a545c418056fddd1" exitCode=0 Feb 19 19:33:19 crc kubenswrapper[4813]: I0219 19:33:19.129750 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vjml" event={"ID":"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925","Type":"ContainerDied","Data":"7c701a55f598fba99ef410b02904639fea879e91b01de9c4a545c418056fddd1"} Feb 19 19:33:21 crc kubenswrapper[4813]: I0219 19:33:21.150295 4813 generic.go:334] "Generic (PLEG): container finished" podID="29b95c10-c2b0-44ee-b70e-f3f3c2aaf925" containerID="c3e552dc8f6904b2b8888b551184f0bb458b1b4075891e8f4d4ebb2ccd71391b" exitCode=0 Feb 19 19:33:21 crc kubenswrapper[4813]: I0219 19:33:21.150344 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vjml" event={"ID":"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925","Type":"ContainerDied","Data":"c3e552dc8f6904b2b8888b551184f0bb458b1b4075891e8f4d4ebb2ccd71391b"} Feb 19 19:33:22 crc kubenswrapper[4813]: I0219 19:33:22.159389 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vjml" 
event={"ID":"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925","Type":"ContainerStarted","Data":"14aca3590862db81eb71f50a73ce65274e03f6cd1688f67850384aba168c7fde"} Feb 19 19:33:22 crc kubenswrapper[4813]: I0219 19:33:22.188341 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4vjml" podStartSLOduration=2.7882626999999998 podStartE2EDuration="5.188319745s" podCreationTimestamp="2026-02-19 19:33:17 +0000 UTC" firstStartedPulling="2026-02-19 19:33:19.131138791 +0000 UTC m=+3818.356579332" lastFinishedPulling="2026-02-19 19:33:21.531195836 +0000 UTC m=+3820.756636377" observedRunningTime="2026-02-19 19:33:22.182584617 +0000 UTC m=+3821.408025198" watchObservedRunningTime="2026-02-19 19:33:22.188319745 +0000 UTC m=+3821.413760306" Feb 19 19:33:26 crc kubenswrapper[4813]: I0219 19:33:26.471368 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:33:26 crc kubenswrapper[4813]: E0219 19:33:26.471871 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:33:27 crc kubenswrapper[4813]: I0219 19:33:27.597986 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:27 crc kubenswrapper[4813]: I0219 19:33:27.598041 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:27 crc kubenswrapper[4813]: I0219 19:33:27.641780 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:28 crc kubenswrapper[4813]: I0219 19:33:28.256694 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:28 crc kubenswrapper[4813]: I0219 19:33:28.314028 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4vjml"] Feb 19 19:33:30 crc kubenswrapper[4813]: I0219 19:33:30.216717 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4vjml" podUID="29b95c10-c2b0-44ee-b70e-f3f3c2aaf925" containerName="registry-server" containerID="cri-o://14aca3590862db81eb71f50a73ce65274e03f6cd1688f67850384aba168c7fde" gracePeriod=2 Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.128774 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.224332 4813 generic.go:334] "Generic (PLEG): container finished" podID="29b95c10-c2b0-44ee-b70e-f3f3c2aaf925" containerID="14aca3590862db81eb71f50a73ce65274e03f6cd1688f67850384aba168c7fde" exitCode=0 Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.224375 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vjml" event={"ID":"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925","Type":"ContainerDied","Data":"14aca3590862db81eb71f50a73ce65274e03f6cd1688f67850384aba168c7fde"} Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.224403 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vjml" event={"ID":"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925","Type":"ContainerDied","Data":"6f9dce06afcd1d2880e68b0168733715fbc5a6301985a4dbecf662ff97083902"} Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.224412 4813 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-4vjml" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.224421 4813 scope.go:117] "RemoveContainer" containerID="14aca3590862db81eb71f50a73ce65274e03f6cd1688f67850384aba168c7fde" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.227378 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-utilities\") pod \"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925\" (UID: \"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925\") " Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.227445 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-catalog-content\") pod \"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925\" (UID: \"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925\") " Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.227576 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6k5z\" (UniqueName: \"kubernetes.io/projected/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-kube-api-access-j6k5z\") pod \"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925\" (UID: \"29b95c10-c2b0-44ee-b70e-f3f3c2aaf925\") " Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.228346 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-utilities" (OuterVolumeSpecName: "utilities") pod "29b95c10-c2b0-44ee-b70e-f3f3c2aaf925" (UID: "29b95c10-c2b0-44ee-b70e-f3f3c2aaf925"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.231691 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-kube-api-access-j6k5z" (OuterVolumeSpecName: "kube-api-access-j6k5z") pod "29b95c10-c2b0-44ee-b70e-f3f3c2aaf925" (UID: "29b95c10-c2b0-44ee-b70e-f3f3c2aaf925"). InnerVolumeSpecName "kube-api-access-j6k5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.241637 4813 scope.go:117] "RemoveContainer" containerID="c3e552dc8f6904b2b8888b551184f0bb458b1b4075891e8f4d4ebb2ccd71391b" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.271749 4813 scope.go:117] "RemoveContainer" containerID="7c701a55f598fba99ef410b02904639fea879e91b01de9c4a545c418056fddd1" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.291605 4813 scope.go:117] "RemoveContainer" containerID="14aca3590862db81eb71f50a73ce65274e03f6cd1688f67850384aba168c7fde" Feb 19 19:33:31 crc kubenswrapper[4813]: E0219 19:33:31.292256 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14aca3590862db81eb71f50a73ce65274e03f6cd1688f67850384aba168c7fde\": container with ID starting with 14aca3590862db81eb71f50a73ce65274e03f6cd1688f67850384aba168c7fde not found: ID does not exist" containerID="14aca3590862db81eb71f50a73ce65274e03f6cd1688f67850384aba168c7fde" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.292288 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14aca3590862db81eb71f50a73ce65274e03f6cd1688f67850384aba168c7fde"} err="failed to get container status \"14aca3590862db81eb71f50a73ce65274e03f6cd1688f67850384aba168c7fde\": rpc error: code = NotFound desc = could not find container \"14aca3590862db81eb71f50a73ce65274e03f6cd1688f67850384aba168c7fde\": container 
with ID starting with 14aca3590862db81eb71f50a73ce65274e03f6cd1688f67850384aba168c7fde not found: ID does not exist" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.292308 4813 scope.go:117] "RemoveContainer" containerID="c3e552dc8f6904b2b8888b551184f0bb458b1b4075891e8f4d4ebb2ccd71391b" Feb 19 19:33:31 crc kubenswrapper[4813]: E0219 19:33:31.292622 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3e552dc8f6904b2b8888b551184f0bb458b1b4075891e8f4d4ebb2ccd71391b\": container with ID starting with c3e552dc8f6904b2b8888b551184f0bb458b1b4075891e8f4d4ebb2ccd71391b not found: ID does not exist" containerID="c3e552dc8f6904b2b8888b551184f0bb458b1b4075891e8f4d4ebb2ccd71391b" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.292645 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3e552dc8f6904b2b8888b551184f0bb458b1b4075891e8f4d4ebb2ccd71391b"} err="failed to get container status \"c3e552dc8f6904b2b8888b551184f0bb458b1b4075891e8f4d4ebb2ccd71391b\": rpc error: code = NotFound desc = could not find container \"c3e552dc8f6904b2b8888b551184f0bb458b1b4075891e8f4d4ebb2ccd71391b\": container with ID starting with c3e552dc8f6904b2b8888b551184f0bb458b1b4075891e8f4d4ebb2ccd71391b not found: ID does not exist" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.292660 4813 scope.go:117] "RemoveContainer" containerID="7c701a55f598fba99ef410b02904639fea879e91b01de9c4a545c418056fddd1" Feb 19 19:33:31 crc kubenswrapper[4813]: E0219 19:33:31.292903 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c701a55f598fba99ef410b02904639fea879e91b01de9c4a545c418056fddd1\": container with ID starting with 7c701a55f598fba99ef410b02904639fea879e91b01de9c4a545c418056fddd1 not found: ID does not exist" containerID="7c701a55f598fba99ef410b02904639fea879e91b01de9c4a545c418056fddd1" 
Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.292923 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c701a55f598fba99ef410b02904639fea879e91b01de9c4a545c418056fddd1"} err="failed to get container status \"7c701a55f598fba99ef410b02904639fea879e91b01de9c4a545c418056fddd1\": rpc error: code = NotFound desc = could not find container \"7c701a55f598fba99ef410b02904639fea879e91b01de9c4a545c418056fddd1\": container with ID starting with 7c701a55f598fba99ef410b02904639fea879e91b01de9c4a545c418056fddd1 not found: ID does not exist" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.328789 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6k5z\" (UniqueName: \"kubernetes.io/projected/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-kube-api-access-j6k5z\") on node \"crc\" DevicePath \"\"" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.328854 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.599229 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29b95c10-c2b0-44ee-b70e-f3f3c2aaf925" (UID: "29b95c10-c2b0-44ee-b70e-f3f3c2aaf925"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.633527 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.865495 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4vjml"] Feb 19 19:33:31 crc kubenswrapper[4813]: I0219 19:33:31.873327 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4vjml"] Feb 19 19:33:33 crc kubenswrapper[4813]: I0219 19:33:33.479340 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b95c10-c2b0-44ee-b70e-f3f3c2aaf925" path="/var/lib/kubelet/pods/29b95c10-c2b0-44ee-b70e-f3f3c2aaf925/volumes" Feb 19 19:33:37 crc kubenswrapper[4813]: I0219 19:33:37.471801 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:33:37 crc kubenswrapper[4813]: E0219 19:33:37.473892 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:33:49 crc kubenswrapper[4813]: I0219 19:33:49.472416 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:33:49 crc kubenswrapper[4813]: E0219 19:33:49.473134 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:34:01 crc kubenswrapper[4813]: I0219 19:34:01.474663 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:34:01 crc kubenswrapper[4813]: E0219 19:34:01.475376 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:34:12 crc kubenswrapper[4813]: I0219 19:34:12.471803 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:34:12 crc kubenswrapper[4813]: E0219 19:34:12.472900 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.435546 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-645t7"] Feb 19 19:34:19 crc kubenswrapper[4813]: E0219 19:34:19.436620 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b95c10-c2b0-44ee-b70e-f3f3c2aaf925" containerName="extract-utilities" Feb 19 
19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.436656 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b95c10-c2b0-44ee-b70e-f3f3c2aaf925" containerName="extract-utilities" Feb 19 19:34:19 crc kubenswrapper[4813]: E0219 19:34:19.436680 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b95c10-c2b0-44ee-b70e-f3f3c2aaf925" containerName="extract-content" Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.436690 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b95c10-c2b0-44ee-b70e-f3f3c2aaf925" containerName="extract-content" Feb 19 19:34:19 crc kubenswrapper[4813]: E0219 19:34:19.436703 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b95c10-c2b0-44ee-b70e-f3f3c2aaf925" containerName="registry-server" Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.436734 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b95c10-c2b0-44ee-b70e-f3f3c2aaf925" containerName="registry-server" Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.436926 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b95c10-c2b0-44ee-b70e-f3f3c2aaf925" containerName="registry-server" Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.438477 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-645t7" Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.451250 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-645t7"] Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.546553 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c876f3f-7fb7-4de6-9bae-9aeb27037387-utilities\") pod \"redhat-operators-645t7\" (UID: \"8c876f3f-7fb7-4de6-9bae-9aeb27037387\") " pod="openshift-marketplace/redhat-operators-645t7" Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.547289 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c876f3f-7fb7-4de6-9bae-9aeb27037387-catalog-content\") pod \"redhat-operators-645t7\" (UID: \"8c876f3f-7fb7-4de6-9bae-9aeb27037387\") " pod="openshift-marketplace/redhat-operators-645t7" Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.547376 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86rz6\" (UniqueName: \"kubernetes.io/projected/8c876f3f-7fb7-4de6-9bae-9aeb27037387-kube-api-access-86rz6\") pod \"redhat-operators-645t7\" (UID: \"8c876f3f-7fb7-4de6-9bae-9aeb27037387\") " pod="openshift-marketplace/redhat-operators-645t7" Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.648611 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c876f3f-7fb7-4de6-9bae-9aeb27037387-catalog-content\") pod \"redhat-operators-645t7\" (UID: \"8c876f3f-7fb7-4de6-9bae-9aeb27037387\") " pod="openshift-marketplace/redhat-operators-645t7" Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.648708 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-86rz6\" (UniqueName: \"kubernetes.io/projected/8c876f3f-7fb7-4de6-9bae-9aeb27037387-kube-api-access-86rz6\") pod \"redhat-operators-645t7\" (UID: \"8c876f3f-7fb7-4de6-9bae-9aeb27037387\") " pod="openshift-marketplace/redhat-operators-645t7" Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.648754 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c876f3f-7fb7-4de6-9bae-9aeb27037387-utilities\") pod \"redhat-operators-645t7\" (UID: \"8c876f3f-7fb7-4de6-9bae-9aeb27037387\") " pod="openshift-marketplace/redhat-operators-645t7" Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.649195 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c876f3f-7fb7-4de6-9bae-9aeb27037387-catalog-content\") pod \"redhat-operators-645t7\" (UID: \"8c876f3f-7fb7-4de6-9bae-9aeb27037387\") " pod="openshift-marketplace/redhat-operators-645t7" Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.649321 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c876f3f-7fb7-4de6-9bae-9aeb27037387-utilities\") pod \"redhat-operators-645t7\" (UID: \"8c876f3f-7fb7-4de6-9bae-9aeb27037387\") " pod="openshift-marketplace/redhat-operators-645t7" Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.672103 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86rz6\" (UniqueName: \"kubernetes.io/projected/8c876f3f-7fb7-4de6-9bae-9aeb27037387-kube-api-access-86rz6\") pod \"redhat-operators-645t7\" (UID: \"8c876f3f-7fb7-4de6-9bae-9aeb27037387\") " pod="openshift-marketplace/redhat-operators-645t7" Feb 19 19:34:19 crc kubenswrapper[4813]: I0219 19:34:19.762220 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-645t7"
Feb 19 19:34:20 crc kubenswrapper[4813]: I0219 19:34:20.202533 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-645t7"]
Feb 19 19:34:20 crc kubenswrapper[4813]: I0219 19:34:20.632141 4813 generic.go:334] "Generic (PLEG): container finished" podID="8c876f3f-7fb7-4de6-9bae-9aeb27037387" containerID="0ffae9fa985e1d160881f35e9b9cdf6f94aab1b8a86f6656b508b1cce03d6dcb" exitCode=0
Feb 19 19:34:20 crc kubenswrapper[4813]: I0219 19:34:20.632189 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-645t7" event={"ID":"8c876f3f-7fb7-4de6-9bae-9aeb27037387","Type":"ContainerDied","Data":"0ffae9fa985e1d160881f35e9b9cdf6f94aab1b8a86f6656b508b1cce03d6dcb"}
Feb 19 19:34:20 crc kubenswrapper[4813]: I0219 19:34:20.632481 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-645t7" event={"ID":"8c876f3f-7fb7-4de6-9bae-9aeb27037387","Type":"ContainerStarted","Data":"3539481cde74246cbe866c3f33a8f7d13415784e6e2e56ac22f8da5dbd06a478"}
Feb 19 19:34:22 crc kubenswrapper[4813]: I0219 19:34:22.649246 4813 generic.go:334] "Generic (PLEG): container finished" podID="8c876f3f-7fb7-4de6-9bae-9aeb27037387" containerID="5029863f6521237e6d951cf7f9b20b5622bd1cafcae2d997b583c8a5f8e91617" exitCode=0
Feb 19 19:34:22 crc kubenswrapper[4813]: I0219 19:34:22.649360 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-645t7" event={"ID":"8c876f3f-7fb7-4de6-9bae-9aeb27037387","Type":"ContainerDied","Data":"5029863f6521237e6d951cf7f9b20b5622bd1cafcae2d997b583c8a5f8e91617"}
Feb 19 19:34:23 crc kubenswrapper[4813]: I0219 19:34:23.664030 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-645t7" event={"ID":"8c876f3f-7fb7-4de6-9bae-9aeb27037387","Type":"ContainerStarted","Data":"6c9590e753b8d14e03a03699e126e8414aa8bb29e1801273019d8a39f0ecd417"}
Feb 19 19:34:23 crc kubenswrapper[4813]: I0219 19:34:23.692229 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-645t7" podStartSLOduration=2.272948849 podStartE2EDuration="4.692205299s" podCreationTimestamp="2026-02-19 19:34:19 +0000 UTC" firstStartedPulling="2026-02-19 19:34:20.633850006 +0000 UTC m=+3879.859290547" lastFinishedPulling="2026-02-19 19:34:23.053106416 +0000 UTC m=+3882.278546997" observedRunningTime="2026-02-19 19:34:23.690731023 +0000 UTC m=+3882.916171614" watchObservedRunningTime="2026-02-19 19:34:23.692205299 +0000 UTC m=+3882.917645851"
Feb 19 19:34:24 crc kubenswrapper[4813]: I0219 19:34:24.471985 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9"
Feb 19 19:34:24 crc kubenswrapper[4813]: E0219 19:34:24.472153 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:34:29 crc kubenswrapper[4813]: I0219 19:34:29.763088 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-645t7"
Feb 19 19:34:29 crc kubenswrapper[4813]: I0219 19:34:29.763689 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-645t7"
Feb 19 19:34:29 crc kubenswrapper[4813]: I0219 19:34:29.808028 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-645t7"
Feb 19 19:34:30 crc kubenswrapper[4813]: I0219 19:34:30.787840 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-645t7"
Feb 19 19:34:30 crc kubenswrapper[4813]: I0219 19:34:30.861119 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-645t7"]
Feb 19 19:34:32 crc kubenswrapper[4813]: I0219 19:34:32.734473 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-645t7" podUID="8c876f3f-7fb7-4de6-9bae-9aeb27037387" containerName="registry-server" containerID="cri-o://6c9590e753b8d14e03a03699e126e8414aa8bb29e1801273019d8a39f0ecd417" gracePeriod=2
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.593199 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-645t7"
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.643796 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c876f3f-7fb7-4de6-9bae-9aeb27037387-catalog-content\") pod \"8c876f3f-7fb7-4de6-9bae-9aeb27037387\" (UID: \"8c876f3f-7fb7-4de6-9bae-9aeb27037387\") "
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.644234 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c876f3f-7fb7-4de6-9bae-9aeb27037387-utilities\") pod \"8c876f3f-7fb7-4de6-9bae-9aeb27037387\" (UID: \"8c876f3f-7fb7-4de6-9bae-9aeb27037387\") "
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.644280 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86rz6\" (UniqueName: \"kubernetes.io/projected/8c876f3f-7fb7-4de6-9bae-9aeb27037387-kube-api-access-86rz6\") pod \"8c876f3f-7fb7-4de6-9bae-9aeb27037387\" (UID: \"8c876f3f-7fb7-4de6-9bae-9aeb27037387\") "
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.645763 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c876f3f-7fb7-4de6-9bae-9aeb27037387-utilities" (OuterVolumeSpecName: "utilities") pod "8c876f3f-7fb7-4de6-9bae-9aeb27037387" (UID: "8c876f3f-7fb7-4de6-9bae-9aeb27037387"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.651350 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c876f3f-7fb7-4de6-9bae-9aeb27037387-kube-api-access-86rz6" (OuterVolumeSpecName: "kube-api-access-86rz6") pod "8c876f3f-7fb7-4de6-9bae-9aeb27037387" (UID: "8c876f3f-7fb7-4de6-9bae-9aeb27037387"). InnerVolumeSpecName "kube-api-access-86rz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.744558 4813 generic.go:334] "Generic (PLEG): container finished" podID="8c876f3f-7fb7-4de6-9bae-9aeb27037387" containerID="6c9590e753b8d14e03a03699e126e8414aa8bb29e1801273019d8a39f0ecd417" exitCode=0
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.744611 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-645t7" event={"ID":"8c876f3f-7fb7-4de6-9bae-9aeb27037387","Type":"ContainerDied","Data":"6c9590e753b8d14e03a03699e126e8414aa8bb29e1801273019d8a39f0ecd417"}
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.744635 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-645t7" event={"ID":"8c876f3f-7fb7-4de6-9bae-9aeb27037387","Type":"ContainerDied","Data":"3539481cde74246cbe866c3f33a8f7d13415784e6e2e56ac22f8da5dbd06a478"}
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.744651 4813 scope.go:117] "RemoveContainer" containerID="6c9590e753b8d14e03a03699e126e8414aa8bb29e1801273019d8a39f0ecd417"
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.744749 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-645t7"
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.747662 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c876f3f-7fb7-4de6-9bae-9aeb27037387-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.747818 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86rz6\" (UniqueName: \"kubernetes.io/projected/8c876f3f-7fb7-4de6-9bae-9aeb27037387-kube-api-access-86rz6\") on node \"crc\" DevicePath \"\""
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.763791 4813 scope.go:117] "RemoveContainer" containerID="5029863f6521237e6d951cf7f9b20b5622bd1cafcae2d997b583c8a5f8e91617"
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.790325 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c876f3f-7fb7-4de6-9bae-9aeb27037387-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c876f3f-7fb7-4de6-9bae-9aeb27037387" (UID: "8c876f3f-7fb7-4de6-9bae-9aeb27037387"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.819830 4813 scope.go:117] "RemoveContainer" containerID="0ffae9fa985e1d160881f35e9b9cdf6f94aab1b8a86f6656b508b1cce03d6dcb"
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.836073 4813 scope.go:117] "RemoveContainer" containerID="6c9590e753b8d14e03a03699e126e8414aa8bb29e1801273019d8a39f0ecd417"
Feb 19 19:34:33 crc kubenswrapper[4813]: E0219 19:34:33.836456 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c9590e753b8d14e03a03699e126e8414aa8bb29e1801273019d8a39f0ecd417\": container with ID starting with 6c9590e753b8d14e03a03699e126e8414aa8bb29e1801273019d8a39f0ecd417 not found: ID does not exist" containerID="6c9590e753b8d14e03a03699e126e8414aa8bb29e1801273019d8a39f0ecd417"
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.836531 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c9590e753b8d14e03a03699e126e8414aa8bb29e1801273019d8a39f0ecd417"} err="failed to get container status \"6c9590e753b8d14e03a03699e126e8414aa8bb29e1801273019d8a39f0ecd417\": rpc error: code = NotFound desc = could not find container \"6c9590e753b8d14e03a03699e126e8414aa8bb29e1801273019d8a39f0ecd417\": container with ID starting with 6c9590e753b8d14e03a03699e126e8414aa8bb29e1801273019d8a39f0ecd417 not found: ID does not exist"
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.836559 4813 scope.go:117] "RemoveContainer" containerID="5029863f6521237e6d951cf7f9b20b5622bd1cafcae2d997b583c8a5f8e91617"
Feb 19 19:34:33 crc kubenswrapper[4813]: E0219 19:34:33.836802 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5029863f6521237e6d951cf7f9b20b5622bd1cafcae2d997b583c8a5f8e91617\": container with ID starting with 5029863f6521237e6d951cf7f9b20b5622bd1cafcae2d997b583c8a5f8e91617 not found: ID does not exist" containerID="5029863f6521237e6d951cf7f9b20b5622bd1cafcae2d997b583c8a5f8e91617"
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.836822 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5029863f6521237e6d951cf7f9b20b5622bd1cafcae2d997b583c8a5f8e91617"} err="failed to get container status \"5029863f6521237e6d951cf7f9b20b5622bd1cafcae2d997b583c8a5f8e91617\": rpc error: code = NotFound desc = could not find container \"5029863f6521237e6d951cf7f9b20b5622bd1cafcae2d997b583c8a5f8e91617\": container with ID starting with 5029863f6521237e6d951cf7f9b20b5622bd1cafcae2d997b583c8a5f8e91617 not found: ID does not exist"
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.836836 4813 scope.go:117] "RemoveContainer" containerID="0ffae9fa985e1d160881f35e9b9cdf6f94aab1b8a86f6656b508b1cce03d6dcb"
Feb 19 19:34:33 crc kubenswrapper[4813]: E0219 19:34:33.837070 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ffae9fa985e1d160881f35e9b9cdf6f94aab1b8a86f6656b508b1cce03d6dcb\": container with ID starting with 0ffae9fa985e1d160881f35e9b9cdf6f94aab1b8a86f6656b508b1cce03d6dcb not found: ID does not exist" containerID="0ffae9fa985e1d160881f35e9b9cdf6f94aab1b8a86f6656b508b1cce03d6dcb"
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.837096 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ffae9fa985e1d160881f35e9b9cdf6f94aab1b8a86f6656b508b1cce03d6dcb"} err="failed to get container status \"0ffae9fa985e1d160881f35e9b9cdf6f94aab1b8a86f6656b508b1cce03d6dcb\": rpc error: code = NotFound desc = could not find container \"0ffae9fa985e1d160881f35e9b9cdf6f94aab1b8a86f6656b508b1cce03d6dcb\": container with ID starting with 0ffae9fa985e1d160881f35e9b9cdf6f94aab1b8a86f6656b508b1cce03d6dcb not found: ID does not exist"
Feb 19 19:34:33 crc kubenswrapper[4813]: I0219 19:34:33.848753 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c876f3f-7fb7-4de6-9bae-9aeb27037387-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 19:34:34 crc kubenswrapper[4813]: I0219 19:34:34.095149 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-645t7"]
Feb 19 19:34:34 crc kubenswrapper[4813]: I0219 19:34:34.105782 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-645t7"]
Feb 19 19:34:35 crc kubenswrapper[4813]: I0219 19:34:35.470919 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9"
Feb 19 19:34:35 crc kubenswrapper[4813]: E0219 19:34:35.476422 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:34:35 crc kubenswrapper[4813]: I0219 19:34:35.492638 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c876f3f-7fb7-4de6-9bae-9aeb27037387" path="/var/lib/kubelet/pods/8c876f3f-7fb7-4de6-9bae-9aeb27037387/volumes"
Feb 19 19:34:48 crc kubenswrapper[4813]: I0219 19:34:48.472262 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9"
Feb 19 19:34:48 crc kubenswrapper[4813]: E0219 19:34:48.473324 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:35:03 crc kubenswrapper[4813]: I0219 19:35:03.471809 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9"
Feb 19 19:35:03 crc kubenswrapper[4813]: E0219 19:35:03.472897 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:35:17 crc kubenswrapper[4813]: I0219 19:35:17.471937 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9"
Feb 19 19:35:17 crc kubenswrapper[4813]: E0219 19:35:17.472811 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:35:29 crc kubenswrapper[4813]: I0219 19:35:29.471417 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9"
Feb 19 19:35:29 crc kubenswrapper[4813]: E0219 19:35:29.472085 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:35:41 crc kubenswrapper[4813]: I0219 19:35:41.480219 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9"
Feb 19 19:35:41 crc kubenswrapper[4813]: E0219 19:35:41.480835 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:35:55 crc kubenswrapper[4813]: I0219 19:35:55.470954 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9"
Feb 19 19:35:55 crc kubenswrapper[4813]: E0219 19:35:55.472330 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:36:09 crc kubenswrapper[4813]: I0219 19:36:09.471541 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9"
Feb 19 19:36:10 crc kubenswrapper[4813]: I0219 19:36:10.458791 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"be981c571a3e7b4efda5a64693ce84e4150127591218d14e4a62185fc2229a1c"}
Feb 19 19:38:30 crc kubenswrapper[4813]: I0219 19:38:30.330819 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 19:38:30 crc kubenswrapper[4813]: I0219 19:38:30.332360 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.373939 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k5gwv"]
Feb 19 19:38:34 crc kubenswrapper[4813]: E0219 19:38:34.374589 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c876f3f-7fb7-4de6-9bae-9aeb27037387" containerName="extract-utilities"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.374869 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c876f3f-7fb7-4de6-9bae-9aeb27037387" containerName="extract-utilities"
Feb 19 19:38:34 crc kubenswrapper[4813]: E0219 19:38:34.374888 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c876f3f-7fb7-4de6-9bae-9aeb27037387" containerName="extract-content"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.374896 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c876f3f-7fb7-4de6-9bae-9aeb27037387" containerName="extract-content"
Feb 19 19:38:34 crc kubenswrapper[4813]: E0219 19:38:34.374911 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c876f3f-7fb7-4de6-9bae-9aeb27037387" containerName="registry-server"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.374919 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c876f3f-7fb7-4de6-9bae-9aeb27037387" containerName="registry-server"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.375172 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c876f3f-7fb7-4de6-9bae-9aeb27037387" containerName="registry-server"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.376412 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.411067 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5gwv"]
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.485285 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkwdx\" (UniqueName: \"kubernetes.io/projected/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-kube-api-access-hkwdx\") pod \"community-operators-k5gwv\" (UID: \"16a23901-aa7d-4a96-ad71-e1a8e6dfe675\") " pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.485467 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-catalog-content\") pod \"community-operators-k5gwv\" (UID: \"16a23901-aa7d-4a96-ad71-e1a8e6dfe675\") " pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.485643 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-utilities\") pod \"community-operators-k5gwv\" (UID: \"16a23901-aa7d-4a96-ad71-e1a8e6dfe675\") " pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.586768 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-catalog-content\") pod \"community-operators-k5gwv\" (UID: \"16a23901-aa7d-4a96-ad71-e1a8e6dfe675\") " pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.586852 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-utilities\") pod \"community-operators-k5gwv\" (UID: \"16a23901-aa7d-4a96-ad71-e1a8e6dfe675\") " pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.586893 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkwdx\" (UniqueName: \"kubernetes.io/projected/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-kube-api-access-hkwdx\") pod \"community-operators-k5gwv\" (UID: \"16a23901-aa7d-4a96-ad71-e1a8e6dfe675\") " pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.587367 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-catalog-content\") pod \"community-operators-k5gwv\" (UID: \"16a23901-aa7d-4a96-ad71-e1a8e6dfe675\") " pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.587434 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-utilities\") pod \"community-operators-k5gwv\" (UID: \"16a23901-aa7d-4a96-ad71-e1a8e6dfe675\") " pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.605693 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkwdx\" (UniqueName: \"kubernetes.io/projected/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-kube-api-access-hkwdx\") pod \"community-operators-k5gwv\" (UID: \"16a23901-aa7d-4a96-ad71-e1a8e6dfe675\") " pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:34 crc kubenswrapper[4813]: I0219 19:38:34.702706 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:35 crc kubenswrapper[4813]: I0219 19:38:35.151857 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5gwv"]
Feb 19 19:38:35 crc kubenswrapper[4813]: I0219 19:38:35.637065 4813 generic.go:334] "Generic (PLEG): container finished" podID="16a23901-aa7d-4a96-ad71-e1a8e6dfe675" containerID="7eb721c2be2591a3e8d4cd7c7380764b9107fa6dc018a11124178e94ae08bb4c" exitCode=0
Feb 19 19:38:35 crc kubenswrapper[4813]: I0219 19:38:35.637106 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5gwv" event={"ID":"16a23901-aa7d-4a96-ad71-e1a8e6dfe675","Type":"ContainerDied","Data":"7eb721c2be2591a3e8d4cd7c7380764b9107fa6dc018a11124178e94ae08bb4c"}
Feb 19 19:38:35 crc kubenswrapper[4813]: I0219 19:38:35.637132 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5gwv" event={"ID":"16a23901-aa7d-4a96-ad71-e1a8e6dfe675","Type":"ContainerStarted","Data":"6cdeda6903c394d608a2be1e03fe15848b076806666cdd9fa64419d44f18b45d"}
Feb 19 19:38:35 crc kubenswrapper[4813]: I0219 19:38:35.642130 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 19:38:37 crc kubenswrapper[4813]: I0219 19:38:37.651305 4813 generic.go:334] "Generic (PLEG): container finished" podID="16a23901-aa7d-4a96-ad71-e1a8e6dfe675" containerID="9d7fb2cbc4e030e4a36392fd8ff4baffc6a39881d44abd6672e26b254176ced1" exitCode=0
Feb 19 19:38:37 crc kubenswrapper[4813]: I0219 19:38:37.651388 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5gwv" event={"ID":"16a23901-aa7d-4a96-ad71-e1a8e6dfe675","Type":"ContainerDied","Data":"9d7fb2cbc4e030e4a36392fd8ff4baffc6a39881d44abd6672e26b254176ced1"}
Feb 19 19:38:38 crc kubenswrapper[4813]: I0219 19:38:38.661623 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5gwv" event={"ID":"16a23901-aa7d-4a96-ad71-e1a8e6dfe675","Type":"ContainerStarted","Data":"69db5fda2ccb4a2cc60f53e697680a7cae30119543c575ac536cd80cd5c669f6"}
Feb 19 19:38:38 crc kubenswrapper[4813]: I0219 19:38:38.685036 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k5gwv" podStartSLOduration=2.27286109 podStartE2EDuration="4.685012035s" podCreationTimestamp="2026-02-19 19:38:34 +0000 UTC" firstStartedPulling="2026-02-19 19:38:35.641549898 +0000 UTC m=+4134.866990459" lastFinishedPulling="2026-02-19 19:38:38.053700863 +0000 UTC m=+4137.279141404" observedRunningTime="2026-02-19 19:38:38.680027411 +0000 UTC m=+4137.905467982" watchObservedRunningTime="2026-02-19 19:38:38.685012035 +0000 UTC m=+4137.910452576"
Feb 19 19:38:44 crc kubenswrapper[4813]: I0219 19:38:44.702884 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:44 crc kubenswrapper[4813]: I0219 19:38:44.703473 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:44 crc kubenswrapper[4813]: I0219 19:38:44.778825 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:44 crc kubenswrapper[4813]: I0219 19:38:44.840267 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:45 crc kubenswrapper[4813]: I0219 19:38:45.027157 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5gwv"]
Feb 19 19:38:46 crc kubenswrapper[4813]: I0219 19:38:46.734842 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k5gwv" podUID="16a23901-aa7d-4a96-ad71-e1a8e6dfe675" containerName="registry-server" containerID="cri-o://69db5fda2ccb4a2cc60f53e697680a7cae30119543c575ac536cd80cd5c669f6" gracePeriod=2
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.150185 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.190119 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-utilities\") pod \"16a23901-aa7d-4a96-ad71-e1a8e6dfe675\" (UID: \"16a23901-aa7d-4a96-ad71-e1a8e6dfe675\") "
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.190219 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkwdx\" (UniqueName: \"kubernetes.io/projected/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-kube-api-access-hkwdx\") pod \"16a23901-aa7d-4a96-ad71-e1a8e6dfe675\" (UID: \"16a23901-aa7d-4a96-ad71-e1a8e6dfe675\") "
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.190269 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-catalog-content\") pod \"16a23901-aa7d-4a96-ad71-e1a8e6dfe675\" (UID: \"16a23901-aa7d-4a96-ad71-e1a8e6dfe675\") "
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.197189 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-kube-api-access-hkwdx" (OuterVolumeSpecName: "kube-api-access-hkwdx") pod "16a23901-aa7d-4a96-ad71-e1a8e6dfe675" (UID: "16a23901-aa7d-4a96-ad71-e1a8e6dfe675"). InnerVolumeSpecName "kube-api-access-hkwdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.200124 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-utilities" (OuterVolumeSpecName: "utilities") pod "16a23901-aa7d-4a96-ad71-e1a8e6dfe675" (UID: "16a23901-aa7d-4a96-ad71-e1a8e6dfe675"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.292418 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.292456 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkwdx\" (UniqueName: \"kubernetes.io/projected/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-kube-api-access-hkwdx\") on node \"crc\" DevicePath \"\""
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.417309 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16a23901-aa7d-4a96-ad71-e1a8e6dfe675" (UID: "16a23901-aa7d-4a96-ad71-e1a8e6dfe675"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.495549 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16a23901-aa7d-4a96-ad71-e1a8e6dfe675-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.743675 4813 generic.go:334] "Generic (PLEG): container finished" podID="16a23901-aa7d-4a96-ad71-e1a8e6dfe675" containerID="69db5fda2ccb4a2cc60f53e697680a7cae30119543c575ac536cd80cd5c669f6" exitCode=0
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.743734 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k5gwv"
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.743718 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5gwv" event={"ID":"16a23901-aa7d-4a96-ad71-e1a8e6dfe675","Type":"ContainerDied","Data":"69db5fda2ccb4a2cc60f53e697680a7cae30119543c575ac536cd80cd5c669f6"}
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.743885 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5gwv" event={"ID":"16a23901-aa7d-4a96-ad71-e1a8e6dfe675","Type":"ContainerDied","Data":"6cdeda6903c394d608a2be1e03fe15848b076806666cdd9fa64419d44f18b45d"}
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.743909 4813 scope.go:117] "RemoveContainer" containerID="69db5fda2ccb4a2cc60f53e697680a7cae30119543c575ac536cd80cd5c669f6"
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.767939 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5gwv"]
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.773651 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k5gwv"]
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.780094 4813 scope.go:117] "RemoveContainer" containerID="9d7fb2cbc4e030e4a36392fd8ff4baffc6a39881d44abd6672e26b254176ced1"
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.799789 4813 scope.go:117] "RemoveContainer" containerID="7eb721c2be2591a3e8d4cd7c7380764b9107fa6dc018a11124178e94ae08bb4c"
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.823743 4813 scope.go:117] "RemoveContainer" containerID="69db5fda2ccb4a2cc60f53e697680a7cae30119543c575ac536cd80cd5c669f6"
Feb 19 19:38:47 crc kubenswrapper[4813]: E0219 19:38:47.824167 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69db5fda2ccb4a2cc60f53e697680a7cae30119543c575ac536cd80cd5c669f6\": container with ID starting with 69db5fda2ccb4a2cc60f53e697680a7cae30119543c575ac536cd80cd5c669f6 not found: ID does not exist" containerID="69db5fda2ccb4a2cc60f53e697680a7cae30119543c575ac536cd80cd5c669f6"
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.824216 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69db5fda2ccb4a2cc60f53e697680a7cae30119543c575ac536cd80cd5c669f6"} err="failed to get container status \"69db5fda2ccb4a2cc60f53e697680a7cae30119543c575ac536cd80cd5c669f6\": rpc error: code = NotFound desc = could not find container \"69db5fda2ccb4a2cc60f53e697680a7cae30119543c575ac536cd80cd5c669f6\": container with ID starting with 69db5fda2ccb4a2cc60f53e697680a7cae30119543c575ac536cd80cd5c669f6 not found: ID does not exist"
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.824253 4813 scope.go:117] "RemoveContainer" containerID="9d7fb2cbc4e030e4a36392fd8ff4baffc6a39881d44abd6672e26b254176ced1"
Feb 19 19:38:47 crc kubenswrapper[4813]: E0219 19:38:47.824713 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d7fb2cbc4e030e4a36392fd8ff4baffc6a39881d44abd6672e26b254176ced1\": container with ID starting with 9d7fb2cbc4e030e4a36392fd8ff4baffc6a39881d44abd6672e26b254176ced1 not found: ID does not exist" containerID="9d7fb2cbc4e030e4a36392fd8ff4baffc6a39881d44abd6672e26b254176ced1"
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.824758 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d7fb2cbc4e030e4a36392fd8ff4baffc6a39881d44abd6672e26b254176ced1"} err="failed to get container status \"9d7fb2cbc4e030e4a36392fd8ff4baffc6a39881d44abd6672e26b254176ced1\": rpc error: code = NotFound desc = could not find container \"9d7fb2cbc4e030e4a36392fd8ff4baffc6a39881d44abd6672e26b254176ced1\": container with ID starting with 9d7fb2cbc4e030e4a36392fd8ff4baffc6a39881d44abd6672e26b254176ced1 not found: ID does not exist"
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.824786 4813 scope.go:117] "RemoveContainer" containerID="7eb721c2be2591a3e8d4cd7c7380764b9107fa6dc018a11124178e94ae08bb4c"
Feb 19 19:38:47 crc kubenswrapper[4813]: E0219 19:38:47.825158 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eb721c2be2591a3e8d4cd7c7380764b9107fa6dc018a11124178e94ae08bb4c\": container with ID starting with 7eb721c2be2591a3e8d4cd7c7380764b9107fa6dc018a11124178e94ae08bb4c not found: ID does not exist" containerID="7eb721c2be2591a3e8d4cd7c7380764b9107fa6dc018a11124178e94ae08bb4c"
Feb 19 19:38:47 crc kubenswrapper[4813]: I0219 19:38:47.825200 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb721c2be2591a3e8d4cd7c7380764b9107fa6dc018a11124178e94ae08bb4c"} err="failed to get container status \"7eb721c2be2591a3e8d4cd7c7380764b9107fa6dc018a11124178e94ae08bb4c\": rpc error: code = NotFound desc = could not find container \"7eb721c2be2591a3e8d4cd7c7380764b9107fa6dc018a11124178e94ae08bb4c\": container with ID starting with 7eb721c2be2591a3e8d4cd7c7380764b9107fa6dc018a11124178e94ae08bb4c not
found: ID does not exist" Feb 19 19:38:49 crc kubenswrapper[4813]: I0219 19:38:49.482989 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16a23901-aa7d-4a96-ad71-e1a8e6dfe675" path="/var/lib/kubelet/pods/16a23901-aa7d-4a96-ad71-e1a8e6dfe675/volumes" Feb 19 19:39:00 crc kubenswrapper[4813]: I0219 19:39:00.329649 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:39:00 crc kubenswrapper[4813]: I0219 19:39:00.330378 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:39:30 crc kubenswrapper[4813]: I0219 19:39:30.329805 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:39:30 crc kubenswrapper[4813]: I0219 19:39:30.330361 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:39:30 crc kubenswrapper[4813]: I0219 19:39:30.330405 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 19:39:30 crc 
kubenswrapper[4813]: I0219 19:39:30.331175 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be981c571a3e7b4efda5a64693ce84e4150127591218d14e4a62185fc2229a1c"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:39:30 crc kubenswrapper[4813]: I0219 19:39:30.331230 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://be981c571a3e7b4efda5a64693ce84e4150127591218d14e4a62185fc2229a1c" gracePeriod=600 Feb 19 19:39:31 crc kubenswrapper[4813]: I0219 19:39:31.082670 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="be981c571a3e7b4efda5a64693ce84e4150127591218d14e4a62185fc2229a1c" exitCode=0 Feb 19 19:39:31 crc kubenswrapper[4813]: I0219 19:39:31.082725 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"be981c571a3e7b4efda5a64693ce84e4150127591218d14e4a62185fc2229a1c"} Feb 19 19:39:31 crc kubenswrapper[4813]: I0219 19:39:31.083153 4813 scope.go:117] "RemoveContainer" containerID="af6e4d1a386b8b6abce9ceaa7a5f6f9bd10098b61ae90ee53a61274fbda643f9" Feb 19 19:39:32 crc kubenswrapper[4813]: I0219 19:39:32.092972 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba"} Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.354622 4813 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2wn9p"] Feb 19 19:40:13 crc kubenswrapper[4813]: E0219 19:40:13.356109 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a23901-aa7d-4a96-ad71-e1a8e6dfe675" containerName="extract-utilities" Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.356145 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a23901-aa7d-4a96-ad71-e1a8e6dfe675" containerName="extract-utilities" Feb 19 19:40:13 crc kubenswrapper[4813]: E0219 19:40:13.356174 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a23901-aa7d-4a96-ad71-e1a8e6dfe675" containerName="registry-server" Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.356191 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a23901-aa7d-4a96-ad71-e1a8e6dfe675" containerName="registry-server" Feb 19 19:40:13 crc kubenswrapper[4813]: E0219 19:40:13.356231 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a23901-aa7d-4a96-ad71-e1a8e6dfe675" containerName="extract-content" Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.356249 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a23901-aa7d-4a96-ad71-e1a8e6dfe675" containerName="extract-content" Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.356563 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="16a23901-aa7d-4a96-ad71-e1a8e6dfe675" containerName="registry-server" Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.358851 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.368860 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wn9p"] Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.371910 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-catalog-content\") pod \"redhat-marketplace-2wn9p\" (UID: \"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5\") " pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.372059 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk57l\" (UniqueName: \"kubernetes.io/projected/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-kube-api-access-wk57l\") pod \"redhat-marketplace-2wn9p\" (UID: \"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5\") " pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.372260 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-utilities\") pod \"redhat-marketplace-2wn9p\" (UID: \"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5\") " pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.473256 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-utilities\") pod \"redhat-marketplace-2wn9p\" (UID: \"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5\") " pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.473319 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-catalog-content\") pod \"redhat-marketplace-2wn9p\" (UID: \"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5\") " pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.473358 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk57l\" (UniqueName: \"kubernetes.io/projected/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-kube-api-access-wk57l\") pod \"redhat-marketplace-2wn9p\" (UID: \"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5\") " pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.474397 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-catalog-content\") pod \"redhat-marketplace-2wn9p\" (UID: \"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5\") " pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.475184 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-utilities\") pod \"redhat-marketplace-2wn9p\" (UID: \"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5\") " pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.497460 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk57l\" (UniqueName: \"kubernetes.io/projected/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-kube-api-access-wk57l\") pod \"redhat-marketplace-2wn9p\" (UID: \"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5\") " pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:13 crc kubenswrapper[4813]: I0219 19:40:13.694194 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:14 crc kubenswrapper[4813]: I0219 19:40:14.111868 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wn9p"] Feb 19 19:40:14 crc kubenswrapper[4813]: I0219 19:40:14.402165 4813 generic.go:334] "Generic (PLEG): container finished" podID="1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5" containerID="804a00f40ba32f96f13e2909f249665b78010c01bec49ab7f673ca3b09c1e330" exitCode=0 Feb 19 19:40:14 crc kubenswrapper[4813]: I0219 19:40:14.402211 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wn9p" event={"ID":"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5","Type":"ContainerDied","Data":"804a00f40ba32f96f13e2909f249665b78010c01bec49ab7f673ca3b09c1e330"} Feb 19 19:40:14 crc kubenswrapper[4813]: I0219 19:40:14.402239 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wn9p" event={"ID":"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5","Type":"ContainerStarted","Data":"8a327e1c097f6f9300d7819df9588c2a0bb1fd5c998a065111f329778851645a"} Feb 19 19:40:15 crc kubenswrapper[4813]: I0219 19:40:15.417171 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wn9p" event={"ID":"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5","Type":"ContainerStarted","Data":"773b41b288bed066f2c52365fa19703388b4fa0ef0d6dc2301fa50ef1518cee7"} Feb 19 19:40:16 crc kubenswrapper[4813]: I0219 19:40:16.428446 4813 generic.go:334] "Generic (PLEG): container finished" podID="1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5" containerID="773b41b288bed066f2c52365fa19703388b4fa0ef0d6dc2301fa50ef1518cee7" exitCode=0 Feb 19 19:40:16 crc kubenswrapper[4813]: I0219 19:40:16.428506 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wn9p" 
event={"ID":"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5","Type":"ContainerDied","Data":"773b41b288bed066f2c52365fa19703388b4fa0ef0d6dc2301fa50ef1518cee7"} Feb 19 19:40:17 crc kubenswrapper[4813]: I0219 19:40:17.437122 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wn9p" event={"ID":"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5","Type":"ContainerStarted","Data":"04e97caf2a6e324b8d4725c5c302db2a2afcc0f487ac451a2735dfd40cc39b6c"} Feb 19 19:40:17 crc kubenswrapper[4813]: I0219 19:40:17.458748 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2wn9p" podStartSLOduration=1.8397439869999999 podStartE2EDuration="4.458728235s" podCreationTimestamp="2026-02-19 19:40:13 +0000 UTC" firstStartedPulling="2026-02-19 19:40:14.403885476 +0000 UTC m=+4233.629326017" lastFinishedPulling="2026-02-19 19:40:17.022869724 +0000 UTC m=+4236.248310265" observedRunningTime="2026-02-19 19:40:17.454251787 +0000 UTC m=+4236.679692338" watchObservedRunningTime="2026-02-19 19:40:17.458728235 +0000 UTC m=+4236.684168776" Feb 19 19:40:23 crc kubenswrapper[4813]: I0219 19:40:23.696244 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:23 crc kubenswrapper[4813]: I0219 19:40:23.697063 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:23 crc kubenswrapper[4813]: I0219 19:40:23.738272 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:24 crc kubenswrapper[4813]: I0219 19:40:24.527592 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:24 crc kubenswrapper[4813]: I0219 19:40:24.587845 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-2wn9p"] Feb 19 19:40:26 crc kubenswrapper[4813]: I0219 19:40:26.506022 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2wn9p" podUID="1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5" containerName="registry-server" containerID="cri-o://04e97caf2a6e324b8d4725c5c302db2a2afcc0f487ac451a2735dfd40cc39b6c" gracePeriod=2 Feb 19 19:40:26 crc kubenswrapper[4813]: I0219 19:40:26.971839 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.072969 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk57l\" (UniqueName: \"kubernetes.io/projected/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-kube-api-access-wk57l\") pod \"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5\" (UID: \"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5\") " Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.073143 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-catalog-content\") pod \"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5\" (UID: \"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5\") " Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.073176 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-utilities\") pod \"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5\" (UID: \"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5\") " Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.074421 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-utilities" (OuterVolumeSpecName: "utilities") pod "1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5" (UID: 
"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.079478 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-kube-api-access-wk57l" (OuterVolumeSpecName: "kube-api-access-wk57l") pod "1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5" (UID: "1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5"). InnerVolumeSpecName "kube-api-access-wk57l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.108880 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5" (UID: "1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.174276 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk57l\" (UniqueName: \"kubernetes.io/projected/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-kube-api-access-wk57l\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.174311 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.174323 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.513215 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5" containerID="04e97caf2a6e324b8d4725c5c302db2a2afcc0f487ac451a2735dfd40cc39b6c" exitCode=0 Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.513264 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wn9p" event={"ID":"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5","Type":"ContainerDied","Data":"04e97caf2a6e324b8d4725c5c302db2a2afcc0f487ac451a2735dfd40cc39b6c"} Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.513266 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2wn9p" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.513293 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wn9p" event={"ID":"1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5","Type":"ContainerDied","Data":"8a327e1c097f6f9300d7819df9588c2a0bb1fd5c998a065111f329778851645a"} Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.513315 4813 scope.go:117] "RemoveContainer" containerID="04e97caf2a6e324b8d4725c5c302db2a2afcc0f487ac451a2735dfd40cc39b6c" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.527724 4813 scope.go:117] "RemoveContainer" containerID="773b41b288bed066f2c52365fa19703388b4fa0ef0d6dc2301fa50ef1518cee7" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.544155 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wn9p"] Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.557862 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wn9p"] Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.751577 4813 scope.go:117] "RemoveContainer" containerID="804a00f40ba32f96f13e2909f249665b78010c01bec49ab7f673ca3b09c1e330" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.796622 4813 scope.go:117] "RemoveContainer" 
containerID="04e97caf2a6e324b8d4725c5c302db2a2afcc0f487ac451a2735dfd40cc39b6c" Feb 19 19:40:27 crc kubenswrapper[4813]: E0219 19:40:27.797140 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04e97caf2a6e324b8d4725c5c302db2a2afcc0f487ac451a2735dfd40cc39b6c\": container with ID starting with 04e97caf2a6e324b8d4725c5c302db2a2afcc0f487ac451a2735dfd40cc39b6c not found: ID does not exist" containerID="04e97caf2a6e324b8d4725c5c302db2a2afcc0f487ac451a2735dfd40cc39b6c" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.797173 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04e97caf2a6e324b8d4725c5c302db2a2afcc0f487ac451a2735dfd40cc39b6c"} err="failed to get container status \"04e97caf2a6e324b8d4725c5c302db2a2afcc0f487ac451a2735dfd40cc39b6c\": rpc error: code = NotFound desc = could not find container \"04e97caf2a6e324b8d4725c5c302db2a2afcc0f487ac451a2735dfd40cc39b6c\": container with ID starting with 04e97caf2a6e324b8d4725c5c302db2a2afcc0f487ac451a2735dfd40cc39b6c not found: ID does not exist" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.797195 4813 scope.go:117] "RemoveContainer" containerID="773b41b288bed066f2c52365fa19703388b4fa0ef0d6dc2301fa50ef1518cee7" Feb 19 19:40:27 crc kubenswrapper[4813]: E0219 19:40:27.797465 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"773b41b288bed066f2c52365fa19703388b4fa0ef0d6dc2301fa50ef1518cee7\": container with ID starting with 773b41b288bed066f2c52365fa19703388b4fa0ef0d6dc2301fa50ef1518cee7 not found: ID does not exist" containerID="773b41b288bed066f2c52365fa19703388b4fa0ef0d6dc2301fa50ef1518cee7" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.797487 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"773b41b288bed066f2c52365fa19703388b4fa0ef0d6dc2301fa50ef1518cee7"} err="failed to get container status \"773b41b288bed066f2c52365fa19703388b4fa0ef0d6dc2301fa50ef1518cee7\": rpc error: code = NotFound desc = could not find container \"773b41b288bed066f2c52365fa19703388b4fa0ef0d6dc2301fa50ef1518cee7\": container with ID starting with 773b41b288bed066f2c52365fa19703388b4fa0ef0d6dc2301fa50ef1518cee7 not found: ID does not exist" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.797499 4813 scope.go:117] "RemoveContainer" containerID="804a00f40ba32f96f13e2909f249665b78010c01bec49ab7f673ca3b09c1e330" Feb 19 19:40:27 crc kubenswrapper[4813]: E0219 19:40:27.798701 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"804a00f40ba32f96f13e2909f249665b78010c01bec49ab7f673ca3b09c1e330\": container with ID starting with 804a00f40ba32f96f13e2909f249665b78010c01bec49ab7f673ca3b09c1e330 not found: ID does not exist" containerID="804a00f40ba32f96f13e2909f249665b78010c01bec49ab7f673ca3b09c1e330" Feb 19 19:40:27 crc kubenswrapper[4813]: I0219 19:40:27.798736 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"804a00f40ba32f96f13e2909f249665b78010c01bec49ab7f673ca3b09c1e330"} err="failed to get container status \"804a00f40ba32f96f13e2909f249665b78010c01bec49ab7f673ca3b09c1e330\": rpc error: code = NotFound desc = could not find container \"804a00f40ba32f96f13e2909f249665b78010c01bec49ab7f673ca3b09c1e330\": container with ID starting with 804a00f40ba32f96f13e2909f249665b78010c01bec49ab7f673ca3b09c1e330 not found: ID does not exist" Feb 19 19:40:29 crc kubenswrapper[4813]: I0219 19:40:29.482548 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5" path="/var/lib/kubelet/pods/1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5/volumes" Feb 19 19:42:00 crc kubenswrapper[4813]: I0219 
19:42:00.330232 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:42:00 crc kubenswrapper[4813]: I0219 19:42:00.331003 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:42:30 crc kubenswrapper[4813]: I0219 19:42:30.330487 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:42:30 crc kubenswrapper[4813]: I0219 19:42:30.331195 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:43:00 crc kubenswrapper[4813]: I0219 19:43:00.329864 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:43:00 crc kubenswrapper[4813]: I0219 19:43:00.330632 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" 
podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:43:00 crc kubenswrapper[4813]: I0219 19:43:00.330698 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 19:43:00 crc kubenswrapper[4813]: I0219 19:43:00.331609 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:43:00 crc kubenswrapper[4813]: I0219 19:43:00.331705 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" gracePeriod=600 Feb 19 19:43:00 crc kubenswrapper[4813]: E0219 19:43:00.465447 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:43:00 crc kubenswrapper[4813]: I0219 19:43:00.752608 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" exitCode=0 Feb 19 
19:43:00 crc kubenswrapper[4813]: I0219 19:43:00.752657 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba"} Feb 19 19:43:00 crc kubenswrapper[4813]: I0219 19:43:00.752693 4813 scope.go:117] "RemoveContainer" containerID="be981c571a3e7b4efda5a64693ce84e4150127591218d14e4a62185fc2229a1c" Feb 19 19:43:00 crc kubenswrapper[4813]: I0219 19:43:00.754171 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:43:00 crc kubenswrapper[4813]: E0219 19:43:00.754652 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:43:13 crc kubenswrapper[4813]: I0219 19:43:13.495576 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:43:13 crc kubenswrapper[4813]: E0219 19:43:13.496922 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:43:14 crc kubenswrapper[4813]: I0219 19:43:14.888754 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["crc-storage/crc-storage-crc-dsrqn"] Feb 19 19:43:14 crc kubenswrapper[4813]: I0219 19:43:14.906057 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-dsrqn"] Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.037647 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-rbnsz"] Feb 19 19:43:15 crc kubenswrapper[4813]: E0219 19:43:15.038206 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5" containerName="extract-utilities" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.038239 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5" containerName="extract-utilities" Feb 19 19:43:15 crc kubenswrapper[4813]: E0219 19:43:15.038278 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5" containerName="extract-content" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.038292 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5" containerName="extract-content" Feb 19 19:43:15 crc kubenswrapper[4813]: E0219 19:43:15.038308 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5" containerName="registry-server" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.038325 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5" containerName="registry-server" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.038677 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3c3ff7-c0d6-45ae-af49-4ad73c09ecc5" containerName="registry-server" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.039435 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-rbnsz" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.042872 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.043129 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.043295 4813 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-scp5k" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.043601 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.076236 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-rbnsz"] Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.157749 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-crc-storage\") pod \"crc-storage-crc-rbnsz\" (UID: \"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b\") " pod="crc-storage/crc-storage-crc-rbnsz" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.157814 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-node-mnt\") pod \"crc-storage-crc-rbnsz\" (UID: \"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b\") " pod="crc-storage/crc-storage-crc-rbnsz" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.157967 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swzcj\" (UniqueName: \"kubernetes.io/projected/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-kube-api-access-swzcj\") pod \"crc-storage-crc-rbnsz\" (UID: 
\"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b\") " pod="crc-storage/crc-storage-crc-rbnsz" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.258685 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swzcj\" (UniqueName: \"kubernetes.io/projected/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-kube-api-access-swzcj\") pod \"crc-storage-crc-rbnsz\" (UID: \"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b\") " pod="crc-storage/crc-storage-crc-rbnsz" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.258999 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-crc-storage\") pod \"crc-storage-crc-rbnsz\" (UID: \"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b\") " pod="crc-storage/crc-storage-crc-rbnsz" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.259020 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-node-mnt\") pod \"crc-storage-crc-rbnsz\" (UID: \"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b\") " pod="crc-storage/crc-storage-crc-rbnsz" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.259282 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-node-mnt\") pod \"crc-storage-crc-rbnsz\" (UID: \"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b\") " pod="crc-storage/crc-storage-crc-rbnsz" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.260780 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-crc-storage\") pod \"crc-storage-crc-rbnsz\" (UID: \"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b\") " pod="crc-storage/crc-storage-crc-rbnsz" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.278318 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swzcj\" (UniqueName: \"kubernetes.io/projected/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-kube-api-access-swzcj\") pod \"crc-storage-crc-rbnsz\" (UID: \"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b\") " pod="crc-storage/crc-storage-crc-rbnsz" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.394899 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-rbnsz" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.488243 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9" path="/var/lib/kubelet/pods/7f2acde5-52ac-4a4a-9c3e-7e0cebb568d9/volumes" Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.817076 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-rbnsz"] Feb 19 19:43:15 crc kubenswrapper[4813]: I0219 19:43:15.884467 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rbnsz" event={"ID":"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b","Type":"ContainerStarted","Data":"d049a86bc216ae15fb97a8f0b165ac7a317ddbecb2103125d95c4e47c80a5886"} Feb 19 19:43:16 crc kubenswrapper[4813]: I0219 19:43:16.897586 4813 generic.go:334] "Generic (PLEG): container finished" podID="c32d11fb-a182-4a20-b33d-ea97eb9e2c0b" containerID="2ea0be7576895785ae1c0e130476667da1300603cbd9d18f56456c3f8e3b4e83" exitCode=0 Feb 19 19:43:16 crc kubenswrapper[4813]: I0219 19:43:16.897721 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rbnsz" event={"ID":"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b","Type":"ContainerDied","Data":"2ea0be7576895785ae1c0e130476667da1300603cbd9d18f56456c3f8e3b4e83"} Feb 19 19:43:18 crc kubenswrapper[4813]: I0219 19:43:18.243124 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-rbnsz" Feb 19 19:43:18 crc kubenswrapper[4813]: I0219 19:43:18.406473 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-crc-storage\") pod \"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b\" (UID: \"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b\") " Feb 19 19:43:18 crc kubenswrapper[4813]: I0219 19:43:18.406554 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swzcj\" (UniqueName: \"kubernetes.io/projected/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-kube-api-access-swzcj\") pod \"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b\" (UID: \"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b\") " Feb 19 19:43:18 crc kubenswrapper[4813]: I0219 19:43:18.406606 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-node-mnt\") pod \"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b\" (UID: \"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b\") " Feb 19 19:43:18 crc kubenswrapper[4813]: I0219 19:43:18.406848 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "c32d11fb-a182-4a20-b33d-ea97eb9e2c0b" (UID: "c32d11fb-a182-4a20-b33d-ea97eb9e2c0b"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:43:18 crc kubenswrapper[4813]: I0219 19:43:18.407230 4813 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:18 crc kubenswrapper[4813]: I0219 19:43:18.412237 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-kube-api-access-swzcj" (OuterVolumeSpecName: "kube-api-access-swzcj") pod "c32d11fb-a182-4a20-b33d-ea97eb9e2c0b" (UID: "c32d11fb-a182-4a20-b33d-ea97eb9e2c0b"). InnerVolumeSpecName "kube-api-access-swzcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:18 crc kubenswrapper[4813]: I0219 19:43:18.423732 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "c32d11fb-a182-4a20-b33d-ea97eb9e2c0b" (UID: "c32d11fb-a182-4a20-b33d-ea97eb9e2c0b"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:43:18 crc kubenswrapper[4813]: I0219 19:43:18.508814 4813 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:18 crc kubenswrapper[4813]: I0219 19:43:18.508869 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swzcj\" (UniqueName: \"kubernetes.io/projected/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b-kube-api-access-swzcj\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:18 crc kubenswrapper[4813]: I0219 19:43:18.917444 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rbnsz" event={"ID":"c32d11fb-a182-4a20-b33d-ea97eb9e2c0b","Type":"ContainerDied","Data":"d049a86bc216ae15fb97a8f0b165ac7a317ddbecb2103125d95c4e47c80a5886"} Feb 19 19:43:18 crc kubenswrapper[4813]: I0219 19:43:18.917491 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d049a86bc216ae15fb97a8f0b165ac7a317ddbecb2103125d95c4e47c80a5886" Feb 19 19:43:18 crc kubenswrapper[4813]: I0219 19:43:18.917524 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-rbnsz" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.518168 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-rbnsz"] Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.522741 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-rbnsz"] Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.645874 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-xd7md"] Feb 19 19:43:20 crc kubenswrapper[4813]: E0219 19:43:20.646253 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c32d11fb-a182-4a20-b33d-ea97eb9e2c0b" containerName="storage" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.646273 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c32d11fb-a182-4a20-b33d-ea97eb9e2c0b" containerName="storage" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.646440 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c32d11fb-a182-4a20-b33d-ea97eb9e2c0b" containerName="storage" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.647008 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-xd7md" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.649913 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.650131 4813 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-scp5k" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.650210 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.650246 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.686021 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-xd7md"] Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.742623 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-node-mnt\") pod \"crc-storage-crc-xd7md\" (UID: \"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1\") " pod="crc-storage/crc-storage-crc-xd7md" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.742746 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-crc-storage\") pod \"crc-storage-crc-xd7md\" (UID: \"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1\") " pod="crc-storage/crc-storage-crc-xd7md" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.742802 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svkzz\" (UniqueName: \"kubernetes.io/projected/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-kube-api-access-svkzz\") pod \"crc-storage-crc-xd7md\" (UID: 
\"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1\") " pod="crc-storage/crc-storage-crc-xd7md" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.844177 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svkzz\" (UniqueName: \"kubernetes.io/projected/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-kube-api-access-svkzz\") pod \"crc-storage-crc-xd7md\" (UID: \"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1\") " pod="crc-storage/crc-storage-crc-xd7md" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.844303 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-node-mnt\") pod \"crc-storage-crc-xd7md\" (UID: \"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1\") " pod="crc-storage/crc-storage-crc-xd7md" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.844341 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-crc-storage\") pod \"crc-storage-crc-xd7md\" (UID: \"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1\") " pod="crc-storage/crc-storage-crc-xd7md" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.844597 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-node-mnt\") pod \"crc-storage-crc-xd7md\" (UID: \"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1\") " pod="crc-storage/crc-storage-crc-xd7md" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.845251 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-crc-storage\") pod \"crc-storage-crc-xd7md\" (UID: \"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1\") " pod="crc-storage/crc-storage-crc-xd7md" Feb 19 19:43:20 crc kubenswrapper[4813]: I0219 19:43:20.864604 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svkzz\" (UniqueName: \"kubernetes.io/projected/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-kube-api-access-svkzz\") pod \"crc-storage-crc-xd7md\" (UID: \"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1\") " pod="crc-storage/crc-storage-crc-xd7md" Feb 19 19:43:21 crc kubenswrapper[4813]: I0219 19:43:21.010425 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xd7md" Feb 19 19:43:21 crc kubenswrapper[4813]: I0219 19:43:21.479772 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c32d11fb-a182-4a20-b33d-ea97eb9e2c0b" path="/var/lib/kubelet/pods/c32d11fb-a182-4a20-b33d-ea97eb9e2c0b/volumes" Feb 19 19:43:21 crc kubenswrapper[4813]: I0219 19:43:21.506315 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-xd7md"] Feb 19 19:43:21 crc kubenswrapper[4813]: I0219 19:43:21.936763 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xd7md" event={"ID":"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1","Type":"ContainerStarted","Data":"f002be3d7bba8bba416b8a99e61c48de2cd27200fafff912226951da43cb4289"} Feb 19 19:43:22 crc kubenswrapper[4813]: I0219 19:43:22.948493 4813 generic.go:334] "Generic (PLEG): container finished" podID="a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1" containerID="3464da787001b56444d4198ebe5053e172b0d2a96ea389c8cbcf8d74d3b21f34" exitCode=0 Feb 19 19:43:22 crc kubenswrapper[4813]: I0219 19:43:22.948883 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xd7md" event={"ID":"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1","Type":"ContainerDied","Data":"3464da787001b56444d4198ebe5053e172b0d2a96ea389c8cbcf8d74d3b21f34"} Feb 19 19:43:24 crc kubenswrapper[4813]: I0219 19:43:24.260069 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-xd7md" Feb 19 19:43:24 crc kubenswrapper[4813]: I0219 19:43:24.396088 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-crc-storage\") pod \"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1\" (UID: \"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1\") " Feb 19 19:43:24 crc kubenswrapper[4813]: I0219 19:43:24.396149 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-node-mnt\") pod \"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1\" (UID: \"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1\") " Feb 19 19:43:24 crc kubenswrapper[4813]: I0219 19:43:24.396205 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svkzz\" (UniqueName: \"kubernetes.io/projected/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-kube-api-access-svkzz\") pod \"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1\" (UID: \"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1\") " Feb 19 19:43:24 crc kubenswrapper[4813]: I0219 19:43:24.396289 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1" (UID: "a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:43:24 crc kubenswrapper[4813]: I0219 19:43:24.396509 4813 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:24 crc kubenswrapper[4813]: I0219 19:43:24.409202 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-kube-api-access-svkzz" (OuterVolumeSpecName: "kube-api-access-svkzz") pod "a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1" (UID: "a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1"). InnerVolumeSpecName "kube-api-access-svkzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:24 crc kubenswrapper[4813]: I0219 19:43:24.413523 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1" (UID: "a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:43:24 crc kubenswrapper[4813]: I0219 19:43:24.497693 4813 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:24 crc kubenswrapper[4813]: I0219 19:43:24.497739 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svkzz\" (UniqueName: \"kubernetes.io/projected/a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1-kube-api-access-svkzz\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:24 crc kubenswrapper[4813]: I0219 19:43:24.977613 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xd7md" event={"ID":"a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1","Type":"ContainerDied","Data":"f002be3d7bba8bba416b8a99e61c48de2cd27200fafff912226951da43cb4289"} Feb 19 19:43:24 crc kubenswrapper[4813]: I0219 19:43:24.977667 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f002be3d7bba8bba416b8a99e61c48de2cd27200fafff912226951da43cb4289" Feb 19 19:43:24 crc kubenswrapper[4813]: I0219 19:43:24.977679 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-xd7md" Feb 19 19:43:25 crc kubenswrapper[4813]: I0219 19:43:25.473286 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:43:25 crc kubenswrapper[4813]: E0219 19:43:25.473904 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:43:36 crc kubenswrapper[4813]: I0219 19:43:36.472202 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:43:36 crc kubenswrapper[4813]: E0219 19:43:36.473110 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:43:51 crc kubenswrapper[4813]: I0219 19:43:51.481450 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:43:51 crc kubenswrapper[4813]: E0219 19:43:51.482479 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:43:51 crc kubenswrapper[4813]: I0219 19:43:51.854391 4813 scope.go:117] "RemoveContainer" containerID="2f270f841e3c456fcd42132bf34c3a6b1e6eb36b9e99c6bea77031d1286d19cb" Feb 19 19:44:02 crc kubenswrapper[4813]: I0219 19:44:02.472222 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:44:02 crc kubenswrapper[4813]: E0219 19:44:02.473564 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:44:16 crc kubenswrapper[4813]: I0219 19:44:16.471615 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:44:16 crc kubenswrapper[4813]: E0219 19:44:16.472408 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:44:24 crc kubenswrapper[4813]: I0219 19:44:24.637487 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qw7d2"] Feb 19 19:44:24 crc kubenswrapper[4813]: E0219 19:44:24.638391 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1" 
containerName="storage" Feb 19 19:44:24 crc kubenswrapper[4813]: I0219 19:44:24.638405 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1" containerName="storage" Feb 19 19:44:24 crc kubenswrapper[4813]: I0219 19:44:24.638554 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a81ea4b2-c7f1-4d7a-baef-a7715f56d3d1" containerName="storage" Feb 19 19:44:24 crc kubenswrapper[4813]: I0219 19:44:24.639619 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:24 crc kubenswrapper[4813]: I0219 19:44:24.646296 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qw7d2"] Feb 19 19:44:24 crc kubenswrapper[4813]: I0219 19:44:24.677853 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c26f08-b363-49b7-8946-e60ac0d5e15c-catalog-content\") pod \"certified-operators-qw7d2\" (UID: \"a1c26f08-b363-49b7-8946-e60ac0d5e15c\") " pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:24 crc kubenswrapper[4813]: I0219 19:44:24.678058 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xst7d\" (UniqueName: \"kubernetes.io/projected/a1c26f08-b363-49b7-8946-e60ac0d5e15c-kube-api-access-xst7d\") pod \"certified-operators-qw7d2\" (UID: \"a1c26f08-b363-49b7-8946-e60ac0d5e15c\") " pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:24 crc kubenswrapper[4813]: I0219 19:44:24.678288 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c26f08-b363-49b7-8946-e60ac0d5e15c-utilities\") pod \"certified-operators-qw7d2\" (UID: \"a1c26f08-b363-49b7-8946-e60ac0d5e15c\") " 
pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:24 crc kubenswrapper[4813]: I0219 19:44:24.779464 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c26f08-b363-49b7-8946-e60ac0d5e15c-catalog-content\") pod \"certified-operators-qw7d2\" (UID: \"a1c26f08-b363-49b7-8946-e60ac0d5e15c\") " pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:24 crc kubenswrapper[4813]: I0219 19:44:24.779540 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xst7d\" (UniqueName: \"kubernetes.io/projected/a1c26f08-b363-49b7-8946-e60ac0d5e15c-kube-api-access-xst7d\") pod \"certified-operators-qw7d2\" (UID: \"a1c26f08-b363-49b7-8946-e60ac0d5e15c\") " pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:24 crc kubenswrapper[4813]: I0219 19:44:24.779599 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c26f08-b363-49b7-8946-e60ac0d5e15c-utilities\") pod \"certified-operators-qw7d2\" (UID: \"a1c26f08-b363-49b7-8946-e60ac0d5e15c\") " pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:24 crc kubenswrapper[4813]: I0219 19:44:24.779986 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c26f08-b363-49b7-8946-e60ac0d5e15c-catalog-content\") pod \"certified-operators-qw7d2\" (UID: \"a1c26f08-b363-49b7-8946-e60ac0d5e15c\") " pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:24 crc kubenswrapper[4813]: I0219 19:44:24.780413 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c26f08-b363-49b7-8946-e60ac0d5e15c-utilities\") pod \"certified-operators-qw7d2\" (UID: \"a1c26f08-b363-49b7-8946-e60ac0d5e15c\") " 
pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:24 crc kubenswrapper[4813]: I0219 19:44:24.946923 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xst7d\" (UniqueName: \"kubernetes.io/projected/a1c26f08-b363-49b7-8946-e60ac0d5e15c-kube-api-access-xst7d\") pod \"certified-operators-qw7d2\" (UID: \"a1c26f08-b363-49b7-8946-e60ac0d5e15c\") " pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:24 crc kubenswrapper[4813]: I0219 19:44:24.960501 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:25 crc kubenswrapper[4813]: I0219 19:44:25.233257 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qw7d2"] Feb 19 19:44:25 crc kubenswrapper[4813]: I0219 19:44:25.421662 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qw7d2" event={"ID":"a1c26f08-b363-49b7-8946-e60ac0d5e15c","Type":"ContainerStarted","Data":"15f36b21051831d060d53dd976479a0177fc29d6ff7ed73cc012a79121e5239f"} Feb 19 19:44:26 crc kubenswrapper[4813]: I0219 19:44:26.430620 4813 generic.go:334] "Generic (PLEG): container finished" podID="a1c26f08-b363-49b7-8946-e60ac0d5e15c" containerID="7314dd0682fb3abd2f8aa1007aeb7b8f2c30a976f3bd5b9abc5d6c3ebecbe9e7" exitCode=0 Feb 19 19:44:26 crc kubenswrapper[4813]: I0219 19:44:26.430944 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qw7d2" event={"ID":"a1c26f08-b363-49b7-8946-e60ac0d5e15c","Type":"ContainerDied","Data":"7314dd0682fb3abd2f8aa1007aeb7b8f2c30a976f3bd5b9abc5d6c3ebecbe9e7"} Feb 19 19:44:26 crc kubenswrapper[4813]: I0219 19:44:26.435795 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 19:44:27 crc kubenswrapper[4813]: I0219 19:44:27.437659 4813 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-qw7d2" event={"ID":"a1c26f08-b363-49b7-8946-e60ac0d5e15c","Type":"ContainerStarted","Data":"e815f3a8d66c686d3a8e447e2bdc808f1de138f6d4bb7fa906fec563d2f73912"} Feb 19 19:44:28 crc kubenswrapper[4813]: I0219 19:44:28.446260 4813 generic.go:334] "Generic (PLEG): container finished" podID="a1c26f08-b363-49b7-8946-e60ac0d5e15c" containerID="e815f3a8d66c686d3a8e447e2bdc808f1de138f6d4bb7fa906fec563d2f73912" exitCode=0 Feb 19 19:44:28 crc kubenswrapper[4813]: I0219 19:44:28.446326 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qw7d2" event={"ID":"a1c26f08-b363-49b7-8946-e60ac0d5e15c","Type":"ContainerDied","Data":"e815f3a8d66c686d3a8e447e2bdc808f1de138f6d4bb7fa906fec563d2f73912"} Feb 19 19:44:29 crc kubenswrapper[4813]: I0219 19:44:29.455566 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qw7d2" event={"ID":"a1c26f08-b363-49b7-8946-e60ac0d5e15c","Type":"ContainerStarted","Data":"934aaa07722d16e6410f5c55af40d311dffe3c18387bdc12351d416942218d0c"} Feb 19 19:44:29 crc kubenswrapper[4813]: I0219 19:44:29.484542 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qw7d2" podStartSLOduration=3.104118047 podStartE2EDuration="5.484519409s" podCreationTimestamp="2026-02-19 19:44:24 +0000 UTC" firstStartedPulling="2026-02-19 19:44:26.435463238 +0000 UTC m=+4485.660903779" lastFinishedPulling="2026-02-19 19:44:28.81586458 +0000 UTC m=+4488.041305141" observedRunningTime="2026-02-19 19:44:29.47318344 +0000 UTC m=+4488.698623981" watchObservedRunningTime="2026-02-19 19:44:29.484519409 +0000 UTC m=+4488.709959960" Feb 19 19:44:31 crc kubenswrapper[4813]: I0219 19:44:31.475780 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:44:31 crc kubenswrapper[4813]: E0219 19:44:31.476430 4813 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:44:34 crc kubenswrapper[4813]: I0219 19:44:34.961383 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:34 crc kubenswrapper[4813]: I0219 19:44:34.962529 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:35 crc kubenswrapper[4813]: I0219 19:44:35.003009 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:35 crc kubenswrapper[4813]: I0219 19:44:35.578724 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:35 crc kubenswrapper[4813]: I0219 19:44:35.623155 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qw7d2"] Feb 19 19:44:37 crc kubenswrapper[4813]: I0219 19:44:37.525310 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qw7d2" podUID="a1c26f08-b363-49b7-8946-e60ac0d5e15c" containerName="registry-server" containerID="cri-o://934aaa07722d16e6410f5c55af40d311dffe3c18387bdc12351d416942218d0c" gracePeriod=2 Feb 19 19:44:37 crc kubenswrapper[4813]: I0219 19:44:37.900474 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:37 crc kubenswrapper[4813]: I0219 19:44:37.972595 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xst7d\" (UniqueName: \"kubernetes.io/projected/a1c26f08-b363-49b7-8946-e60ac0d5e15c-kube-api-access-xst7d\") pod \"a1c26f08-b363-49b7-8946-e60ac0d5e15c\" (UID: \"a1c26f08-b363-49b7-8946-e60ac0d5e15c\") " Feb 19 19:44:37 crc kubenswrapper[4813]: I0219 19:44:37.972678 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c26f08-b363-49b7-8946-e60ac0d5e15c-utilities\") pod \"a1c26f08-b363-49b7-8946-e60ac0d5e15c\" (UID: \"a1c26f08-b363-49b7-8946-e60ac0d5e15c\") " Feb 19 19:44:37 crc kubenswrapper[4813]: I0219 19:44:37.972738 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c26f08-b363-49b7-8946-e60ac0d5e15c-catalog-content\") pod \"a1c26f08-b363-49b7-8946-e60ac0d5e15c\" (UID: \"a1c26f08-b363-49b7-8946-e60ac0d5e15c\") " Feb 19 19:44:37 crc kubenswrapper[4813]: I0219 19:44:37.973660 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1c26f08-b363-49b7-8946-e60ac0d5e15c-utilities" (OuterVolumeSpecName: "utilities") pod "a1c26f08-b363-49b7-8946-e60ac0d5e15c" (UID: "a1c26f08-b363-49b7-8946-e60ac0d5e15c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.025534 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1c26f08-b363-49b7-8946-e60ac0d5e15c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1c26f08-b363-49b7-8946-e60ac0d5e15c" (UID: "a1c26f08-b363-49b7-8946-e60ac0d5e15c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.074161 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c26f08-b363-49b7-8946-e60ac0d5e15c-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.074203 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c26f08-b363-49b7-8946-e60ac0d5e15c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.206282 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c26f08-b363-49b7-8946-e60ac0d5e15c-kube-api-access-xst7d" (OuterVolumeSpecName: "kube-api-access-xst7d") pod "a1c26f08-b363-49b7-8946-e60ac0d5e15c" (UID: "a1c26f08-b363-49b7-8946-e60ac0d5e15c"). InnerVolumeSpecName "kube-api-access-xst7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.276918 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xst7d\" (UniqueName: \"kubernetes.io/projected/a1c26f08-b363-49b7-8946-e60ac0d5e15c-kube-api-access-xst7d\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.535075 4813 generic.go:334] "Generic (PLEG): container finished" podID="a1c26f08-b363-49b7-8946-e60ac0d5e15c" containerID="934aaa07722d16e6410f5c55af40d311dffe3c18387bdc12351d416942218d0c" exitCode=0 Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.535152 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qw7d2" event={"ID":"a1c26f08-b363-49b7-8946-e60ac0d5e15c","Type":"ContainerDied","Data":"934aaa07722d16e6410f5c55af40d311dffe3c18387bdc12351d416942218d0c"} Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.535215 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qw7d2" event={"ID":"a1c26f08-b363-49b7-8946-e60ac0d5e15c","Type":"ContainerDied","Data":"15f36b21051831d060d53dd976479a0177fc29d6ff7ed73cc012a79121e5239f"} Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.535223 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qw7d2" Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.535237 4813 scope.go:117] "RemoveContainer" containerID="934aaa07722d16e6410f5c55af40d311dffe3c18387bdc12351d416942218d0c" Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.560174 4813 scope.go:117] "RemoveContainer" containerID="e815f3a8d66c686d3a8e447e2bdc808f1de138f6d4bb7fa906fec563d2f73912" Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.565815 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qw7d2"] Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.571019 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qw7d2"] Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.601727 4813 scope.go:117] "RemoveContainer" containerID="7314dd0682fb3abd2f8aa1007aeb7b8f2c30a976f3bd5b9abc5d6c3ebecbe9e7" Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.618593 4813 scope.go:117] "RemoveContainer" containerID="934aaa07722d16e6410f5c55af40d311dffe3c18387bdc12351d416942218d0c" Feb 19 19:44:38 crc kubenswrapper[4813]: E0219 19:44:38.619208 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"934aaa07722d16e6410f5c55af40d311dffe3c18387bdc12351d416942218d0c\": container with ID starting with 934aaa07722d16e6410f5c55af40d311dffe3c18387bdc12351d416942218d0c not found: ID does not exist" containerID="934aaa07722d16e6410f5c55af40d311dffe3c18387bdc12351d416942218d0c" Feb 19 19:44:38 crc 
kubenswrapper[4813]: I0219 19:44:38.619249 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"934aaa07722d16e6410f5c55af40d311dffe3c18387bdc12351d416942218d0c"} err="failed to get container status \"934aaa07722d16e6410f5c55af40d311dffe3c18387bdc12351d416942218d0c\": rpc error: code = NotFound desc = could not find container \"934aaa07722d16e6410f5c55af40d311dffe3c18387bdc12351d416942218d0c\": container with ID starting with 934aaa07722d16e6410f5c55af40d311dffe3c18387bdc12351d416942218d0c not found: ID does not exist" Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.619274 4813 scope.go:117] "RemoveContainer" containerID="e815f3a8d66c686d3a8e447e2bdc808f1de138f6d4bb7fa906fec563d2f73912" Feb 19 19:44:38 crc kubenswrapper[4813]: E0219 19:44:38.619602 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e815f3a8d66c686d3a8e447e2bdc808f1de138f6d4bb7fa906fec563d2f73912\": container with ID starting with e815f3a8d66c686d3a8e447e2bdc808f1de138f6d4bb7fa906fec563d2f73912 not found: ID does not exist" containerID="e815f3a8d66c686d3a8e447e2bdc808f1de138f6d4bb7fa906fec563d2f73912" Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.619630 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e815f3a8d66c686d3a8e447e2bdc808f1de138f6d4bb7fa906fec563d2f73912"} err="failed to get container status \"e815f3a8d66c686d3a8e447e2bdc808f1de138f6d4bb7fa906fec563d2f73912\": rpc error: code = NotFound desc = could not find container \"e815f3a8d66c686d3a8e447e2bdc808f1de138f6d4bb7fa906fec563d2f73912\": container with ID starting with e815f3a8d66c686d3a8e447e2bdc808f1de138f6d4bb7fa906fec563d2f73912 not found: ID does not exist" Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.619648 4813 scope.go:117] "RemoveContainer" containerID="7314dd0682fb3abd2f8aa1007aeb7b8f2c30a976f3bd5b9abc5d6c3ebecbe9e7" Feb 19 
19:44:38 crc kubenswrapper[4813]: E0219 19:44:38.620060 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7314dd0682fb3abd2f8aa1007aeb7b8f2c30a976f3bd5b9abc5d6c3ebecbe9e7\": container with ID starting with 7314dd0682fb3abd2f8aa1007aeb7b8f2c30a976f3bd5b9abc5d6c3ebecbe9e7 not found: ID does not exist" containerID="7314dd0682fb3abd2f8aa1007aeb7b8f2c30a976f3bd5b9abc5d6c3ebecbe9e7" Feb 19 19:44:38 crc kubenswrapper[4813]: I0219 19:44:38.620124 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7314dd0682fb3abd2f8aa1007aeb7b8f2c30a976f3bd5b9abc5d6c3ebecbe9e7"} err="failed to get container status \"7314dd0682fb3abd2f8aa1007aeb7b8f2c30a976f3bd5b9abc5d6c3ebecbe9e7\": rpc error: code = NotFound desc = could not find container \"7314dd0682fb3abd2f8aa1007aeb7b8f2c30a976f3bd5b9abc5d6c3ebecbe9e7\": container with ID starting with 7314dd0682fb3abd2f8aa1007aeb7b8f2c30a976f3bd5b9abc5d6c3ebecbe9e7 not found: ID does not exist" Feb 19 19:44:39 crc kubenswrapper[4813]: I0219 19:44:39.482912 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1c26f08-b363-49b7-8946-e60ac0d5e15c" path="/var/lib/kubelet/pods/a1c26f08-b363-49b7-8946-e60ac0d5e15c/volumes" Feb 19 19:44:45 crc kubenswrapper[4813]: I0219 19:44:45.471566 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:44:45 crc kubenswrapper[4813]: E0219 19:44:45.472249 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:44:59 crc 
kubenswrapper[4813]: I0219 19:44:59.471385 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:44:59 crc kubenswrapper[4813]: E0219 19:44:59.472107 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.172435 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84"] Feb 19 19:45:00 crc kubenswrapper[4813]: E0219 19:45:00.172718 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c26f08-b363-49b7-8946-e60ac0d5e15c" containerName="extract-utilities" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.172735 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c26f08-b363-49b7-8946-e60ac0d5e15c" containerName="extract-utilities" Feb 19 19:45:00 crc kubenswrapper[4813]: E0219 19:45:00.172745 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c26f08-b363-49b7-8946-e60ac0d5e15c" containerName="extract-content" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.172752 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c26f08-b363-49b7-8946-e60ac0d5e15c" containerName="extract-content" Feb 19 19:45:00 crc kubenswrapper[4813]: E0219 19:45:00.172771 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c26f08-b363-49b7-8946-e60ac0d5e15c" containerName="registry-server" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.172781 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a1c26f08-b363-49b7-8946-e60ac0d5e15c" containerName="registry-server" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.172968 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c26f08-b363-49b7-8946-e60ac0d5e15c" containerName="registry-server" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.173438 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.175861 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.176512 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.187735 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84"] Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.373581 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/870c7782-2146-4cec-92bc-590e85dfc2b8-secret-volume\") pod \"collect-profiles-29525505-tml84\" (UID: \"870c7782-2146-4cec-92bc-590e85dfc2b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.374003 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/870c7782-2146-4cec-92bc-590e85dfc2b8-config-volume\") pod \"collect-profiles-29525505-tml84\" (UID: \"870c7782-2146-4cec-92bc-590e85dfc2b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" Feb 19 19:45:00 crc 
kubenswrapper[4813]: I0219 19:45:00.374040 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpspn\" (UniqueName: \"kubernetes.io/projected/870c7782-2146-4cec-92bc-590e85dfc2b8-kube-api-access-cpspn\") pod \"collect-profiles-29525505-tml84\" (UID: \"870c7782-2146-4cec-92bc-590e85dfc2b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.476273 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpspn\" (UniqueName: \"kubernetes.io/projected/870c7782-2146-4cec-92bc-590e85dfc2b8-kube-api-access-cpspn\") pod \"collect-profiles-29525505-tml84\" (UID: \"870c7782-2146-4cec-92bc-590e85dfc2b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.476538 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/870c7782-2146-4cec-92bc-590e85dfc2b8-secret-volume\") pod \"collect-profiles-29525505-tml84\" (UID: \"870c7782-2146-4cec-92bc-590e85dfc2b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.476575 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/870c7782-2146-4cec-92bc-590e85dfc2b8-config-volume\") pod \"collect-profiles-29525505-tml84\" (UID: \"870c7782-2146-4cec-92bc-590e85dfc2b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.479048 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/870c7782-2146-4cec-92bc-590e85dfc2b8-config-volume\") pod \"collect-profiles-29525505-tml84\" 
(UID: \"870c7782-2146-4cec-92bc-590e85dfc2b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.484505 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/870c7782-2146-4cec-92bc-590e85dfc2b8-secret-volume\") pod \"collect-profiles-29525505-tml84\" (UID: \"870c7782-2146-4cec-92bc-590e85dfc2b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.493835 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpspn\" (UniqueName: \"kubernetes.io/projected/870c7782-2146-4cec-92bc-590e85dfc2b8-kube-api-access-cpspn\") pod \"collect-profiles-29525505-tml84\" (UID: \"870c7782-2146-4cec-92bc-590e85dfc2b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.494170 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" Feb 19 19:45:00 crc kubenswrapper[4813]: I0219 19:45:00.966049 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84"] Feb 19 19:45:01 crc kubenswrapper[4813]: I0219 19:45:01.699713 4813 generic.go:334] "Generic (PLEG): container finished" podID="870c7782-2146-4cec-92bc-590e85dfc2b8" containerID="e40bc99d97e432132dd3d0bbba2baa7522e379f4543de86e9f400a37ada16696" exitCode=0 Feb 19 19:45:01 crc kubenswrapper[4813]: I0219 19:45:01.699795 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" event={"ID":"870c7782-2146-4cec-92bc-590e85dfc2b8","Type":"ContainerDied","Data":"e40bc99d97e432132dd3d0bbba2baa7522e379f4543de86e9f400a37ada16696"} Feb 19 19:45:01 crc kubenswrapper[4813]: I0219 19:45:01.700115 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" event={"ID":"870c7782-2146-4cec-92bc-590e85dfc2b8","Type":"ContainerStarted","Data":"313332b5e0a4d5a5232b2009173ecf9072d48629f631ac9ba3ba32a34194b436"} Feb 19 19:45:02 crc kubenswrapper[4813]: I0219 19:45:02.978178 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" Feb 19 19:45:03 crc kubenswrapper[4813]: I0219 19:45:03.116508 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/870c7782-2146-4cec-92bc-590e85dfc2b8-config-volume\") pod \"870c7782-2146-4cec-92bc-590e85dfc2b8\" (UID: \"870c7782-2146-4cec-92bc-590e85dfc2b8\") " Feb 19 19:45:03 crc kubenswrapper[4813]: I0219 19:45:03.116580 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/870c7782-2146-4cec-92bc-590e85dfc2b8-secret-volume\") pod \"870c7782-2146-4cec-92bc-590e85dfc2b8\" (UID: \"870c7782-2146-4cec-92bc-590e85dfc2b8\") " Feb 19 19:45:03 crc kubenswrapper[4813]: I0219 19:45:03.116633 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpspn\" (UniqueName: \"kubernetes.io/projected/870c7782-2146-4cec-92bc-590e85dfc2b8-kube-api-access-cpspn\") pod \"870c7782-2146-4cec-92bc-590e85dfc2b8\" (UID: \"870c7782-2146-4cec-92bc-590e85dfc2b8\") " Feb 19 19:45:03 crc kubenswrapper[4813]: I0219 19:45:03.117415 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/870c7782-2146-4cec-92bc-590e85dfc2b8-config-volume" (OuterVolumeSpecName: "config-volume") pod "870c7782-2146-4cec-92bc-590e85dfc2b8" (UID: "870c7782-2146-4cec-92bc-590e85dfc2b8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:45:03 crc kubenswrapper[4813]: I0219 19:45:03.123165 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870c7782-2146-4cec-92bc-590e85dfc2b8-kube-api-access-cpspn" (OuterVolumeSpecName: "kube-api-access-cpspn") pod "870c7782-2146-4cec-92bc-590e85dfc2b8" (UID: "870c7782-2146-4cec-92bc-590e85dfc2b8"). 
InnerVolumeSpecName "kube-api-access-cpspn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:45:03 crc kubenswrapper[4813]: I0219 19:45:03.124134 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/870c7782-2146-4cec-92bc-590e85dfc2b8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "870c7782-2146-4cec-92bc-590e85dfc2b8" (UID: "870c7782-2146-4cec-92bc-590e85dfc2b8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:45:03 crc kubenswrapper[4813]: I0219 19:45:03.217711 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/870c7782-2146-4cec-92bc-590e85dfc2b8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:03 crc kubenswrapper[4813]: I0219 19:45:03.217754 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/870c7782-2146-4cec-92bc-590e85dfc2b8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:03 crc kubenswrapper[4813]: I0219 19:45:03.217767 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpspn\" (UniqueName: \"kubernetes.io/projected/870c7782-2146-4cec-92bc-590e85dfc2b8-kube-api-access-cpspn\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:03 crc kubenswrapper[4813]: I0219 19:45:03.714459 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" event={"ID":"870c7782-2146-4cec-92bc-590e85dfc2b8","Type":"ContainerDied","Data":"313332b5e0a4d5a5232b2009173ecf9072d48629f631ac9ba3ba32a34194b436"} Feb 19 19:45:03 crc kubenswrapper[4813]: I0219 19:45:03.714495 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="313332b5e0a4d5a5232b2009173ecf9072d48629f631ac9ba3ba32a34194b436" Feb 19 19:45:03 crc kubenswrapper[4813]: I0219 19:45:03.714805 4813 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84" Feb 19 19:45:04 crc kubenswrapper[4813]: I0219 19:45:04.057072 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z"] Feb 19 19:45:04 crc kubenswrapper[4813]: I0219 19:45:04.065296 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525460-ljw6z"] Feb 19 19:45:05 crc kubenswrapper[4813]: I0219 19:45:05.495887 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ed4271d-81b9-4f32-a9b6-723332602a3d" path="/var/lib/kubelet/pods/4ed4271d-81b9-4f32-a9b6-723332602a3d/volumes" Feb 19 19:45:12 crc kubenswrapper[4813]: I0219 19:45:12.471446 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:45:12 crc kubenswrapper[4813]: E0219 19:45:12.472309 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:45:25 crc kubenswrapper[4813]: I0219 19:45:25.471567 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:45:25 crc kubenswrapper[4813]: E0219 19:45:25.473495 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:45:36 crc kubenswrapper[4813]: I0219 19:45:36.471457 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:45:36 crc kubenswrapper[4813]: E0219 19:45:36.472294 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:45:48 crc kubenswrapper[4813]: I0219 19:45:48.472128 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:45:48 crc kubenswrapper[4813]: E0219 19:45:48.473207 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:45:51 crc kubenswrapper[4813]: I0219 19:45:51.924695 4813 scope.go:117] "RemoveContainer" containerID="d34d1ff45ea0f54a8c9e8fa196da33996c83eeb35d66148751215d84d3c50c6f" Feb 19 19:46:02 crc kubenswrapper[4813]: I0219 19:46:02.471733 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:46:02 crc kubenswrapper[4813]: E0219 19:46:02.472793 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:46:17 crc kubenswrapper[4813]: I0219 19:46:17.472225 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:46:17 crc kubenswrapper[4813]: E0219 19:46:17.472963 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:46:28 crc kubenswrapper[4813]: I0219 19:46:28.471500 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:46:28 crc kubenswrapper[4813]: E0219 19:46:28.472241 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:46:32 crc kubenswrapper[4813]: I0219 19:46:32.942789 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c4c8f55b5-4gzmc"] Feb 19 19:46:32 crc kubenswrapper[4813]: E0219 19:46:32.943290 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870c7782-2146-4cec-92bc-590e85dfc2b8" 
containerName="collect-profiles" Feb 19 19:46:32 crc kubenswrapper[4813]: I0219 19:46:32.943301 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="870c7782-2146-4cec-92bc-590e85dfc2b8" containerName="collect-profiles" Feb 19 19:46:32 crc kubenswrapper[4813]: I0219 19:46:32.943422 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="870c7782-2146-4cec-92bc-590e85dfc2b8" containerName="collect-profiles" Feb 19 19:46:32 crc kubenswrapper[4813]: I0219 19:46:32.944079 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" Feb 19 19:46:32 crc kubenswrapper[4813]: I0219 19:46:32.946557 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-62gwm" Feb 19 19:46:32 crc kubenswrapper[4813]: I0219 19:46:32.946811 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 19 19:46:32 crc kubenswrapper[4813]: I0219 19:46:32.947072 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 19 19:46:32 crc kubenswrapper[4813]: I0219 19:46:32.947231 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 19 19:46:32 crc kubenswrapper[4813]: I0219 19:46:32.947519 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 19 19:46:32 crc kubenswrapper[4813]: I0219 19:46:32.958344 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c4c8f55b5-4gzmc"] Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.085228 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-config\") pod \"dnsmasq-dns-7c4c8f55b5-4gzmc\" (UID: \"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" Feb 19 19:46:33 
crc kubenswrapper[4813]: I0219 19:46:33.085594 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czkmv\" (UniqueName: \"kubernetes.io/projected/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-kube-api-access-czkmv\") pod \"dnsmasq-dns-7c4c8f55b5-4gzmc\" (UID: \"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.085630 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-dns-svc\") pod \"dnsmasq-dns-7c4c8f55b5-4gzmc\" (UID: \"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.187300 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-config\") pod \"dnsmasq-dns-7c4c8f55b5-4gzmc\" (UID: \"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.187369 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czkmv\" (UniqueName: \"kubernetes.io/projected/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-kube-api-access-czkmv\") pod \"dnsmasq-dns-7c4c8f55b5-4gzmc\" (UID: \"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.187398 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-dns-svc\") pod \"dnsmasq-dns-7c4c8f55b5-4gzmc\" (UID: \"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 
19:46:33.188123 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-config\") pod \"dnsmasq-dns-7c4c8f55b5-4gzmc\" (UID: \"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.188142 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-dns-svc\") pod \"dnsmasq-dns-7c4c8f55b5-4gzmc\" (UID: \"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.212874 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czkmv\" (UniqueName: \"kubernetes.io/projected/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-kube-api-access-czkmv\") pod \"dnsmasq-dns-7c4c8f55b5-4gzmc\" (UID: \"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0\") " pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.255531 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-dp55n"] Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.257099 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589cf688cc-dp55n" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.277172 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-dp55n"] Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.320834 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.390192 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa1478d6-53a6-489d-a7a2-e47fee0aec60-dns-svc\") pod \"dnsmasq-dns-589cf688cc-dp55n\" (UID: \"fa1478d6-53a6-489d-a7a2-e47fee0aec60\") " pod="openstack/dnsmasq-dns-589cf688cc-dp55n" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.390365 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpzgm\" (UniqueName: \"kubernetes.io/projected/fa1478d6-53a6-489d-a7a2-e47fee0aec60-kube-api-access-rpzgm\") pod \"dnsmasq-dns-589cf688cc-dp55n\" (UID: \"fa1478d6-53a6-489d-a7a2-e47fee0aec60\") " pod="openstack/dnsmasq-dns-589cf688cc-dp55n" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.390435 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa1478d6-53a6-489d-a7a2-e47fee0aec60-config\") pod \"dnsmasq-dns-589cf688cc-dp55n\" (UID: \"fa1478d6-53a6-489d-a7a2-e47fee0aec60\") " pod="openstack/dnsmasq-dns-589cf688cc-dp55n" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.491651 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa1478d6-53a6-489d-a7a2-e47fee0aec60-dns-svc\") pod \"dnsmasq-dns-589cf688cc-dp55n\" (UID: \"fa1478d6-53a6-489d-a7a2-e47fee0aec60\") " pod="openstack/dnsmasq-dns-589cf688cc-dp55n" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.492316 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpzgm\" (UniqueName: \"kubernetes.io/projected/fa1478d6-53a6-489d-a7a2-e47fee0aec60-kube-api-access-rpzgm\") pod \"dnsmasq-dns-589cf688cc-dp55n\" (UID: 
\"fa1478d6-53a6-489d-a7a2-e47fee0aec60\") " pod="openstack/dnsmasq-dns-589cf688cc-dp55n" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.492388 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa1478d6-53a6-489d-a7a2-e47fee0aec60-config\") pod \"dnsmasq-dns-589cf688cc-dp55n\" (UID: \"fa1478d6-53a6-489d-a7a2-e47fee0aec60\") " pod="openstack/dnsmasq-dns-589cf688cc-dp55n" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.492872 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa1478d6-53a6-489d-a7a2-e47fee0aec60-dns-svc\") pod \"dnsmasq-dns-589cf688cc-dp55n\" (UID: \"fa1478d6-53a6-489d-a7a2-e47fee0aec60\") " pod="openstack/dnsmasq-dns-589cf688cc-dp55n" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.493260 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa1478d6-53a6-489d-a7a2-e47fee0aec60-config\") pod \"dnsmasq-dns-589cf688cc-dp55n\" (UID: \"fa1478d6-53a6-489d-a7a2-e47fee0aec60\") " pod="openstack/dnsmasq-dns-589cf688cc-dp55n" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.509252 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpzgm\" (UniqueName: \"kubernetes.io/projected/fa1478d6-53a6-489d-a7a2-e47fee0aec60-kube-api-access-rpzgm\") pod \"dnsmasq-dns-589cf688cc-dp55n\" (UID: \"fa1478d6-53a6-489d-a7a2-e47fee0aec60\") " pod="openstack/dnsmasq-dns-589cf688cc-dp55n" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.573227 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-589cf688cc-dp55n" Feb 19 19:46:33 crc kubenswrapper[4813]: I0219 19:46:33.793752 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c4c8f55b5-4gzmc"] Feb 19 19:46:33 crc kubenswrapper[4813]: W0219 19:46:33.804500 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfb0cbc0_08ba_4cb4_a29d_17ddaa09f7e0.slice/crio-ca077aa27d035d3bf5c9d42b084d40a1cf8ee5e58c15ed48e4e0ebafd1ce03c3 WatchSource:0}: Error finding container ca077aa27d035d3bf5c9d42b084d40a1cf8ee5e58c15ed48e4e0ebafd1ce03c3: Status 404 returned error can't find the container with id ca077aa27d035d3bf5c9d42b084d40a1cf8ee5e58c15ed48e4e0ebafd1ce03c3 Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.069499 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-dp55n"] Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.110440 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.111723 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.114460 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.114622 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.115307 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.115514 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.115781 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bpgnz" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.124186 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.218329 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.219285 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.220521 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.220572 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/700f4c86-1aa6-4cbd-a3d5-680ca2763884-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.220591 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/700f4c86-1aa6-4cbd-a3d5-680ca2763884-server-conf\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.220614 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/700f4c86-1aa6-4cbd-a3d5-680ca2763884-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.220654 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.220714 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/700f4c86-1aa6-4cbd-a3d5-680ca2763884-pod-info\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.220747 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z6n5\" (UniqueName: \"kubernetes.io/projected/700f4c86-1aa6-4cbd-a3d5-680ca2763884-kube-api-access-9z6n5\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.220769 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.220792 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.228039 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.228238 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-jxg4n" Feb 19 19:46:34 crc 
kubenswrapper[4813]: I0219 19:46:34.254437 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.321660 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdbp6\" (UniqueName: \"kubernetes.io/projected/e6c529a9-395b-4e55-87d1-f93fb0c98cd6-kube-api-access-tdbp6\") pod \"memcached-0\" (UID: \"e6c529a9-395b-4e55-87d1-f93fb0c98cd6\") " pod="openstack/memcached-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.321712 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.321754 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6c529a9-395b-4e55-87d1-f93fb0c98cd6-config-data\") pod \"memcached-0\" (UID: \"e6c529a9-395b-4e55-87d1-f93fb0c98cd6\") " pod="openstack/memcached-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.321784 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/700f4c86-1aa6-4cbd-a3d5-680ca2763884-pod-info\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.321808 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z6n5\" (UniqueName: \"kubernetes.io/projected/700f4c86-1aa6-4cbd-a3d5-680ca2763884-kube-api-access-9z6n5\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " 
pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.321822 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.321836 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.321858 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6c529a9-395b-4e55-87d1-f93fb0c98cd6-kolla-config\") pod \"memcached-0\" (UID: \"e6c529a9-395b-4e55-87d1-f93fb0c98cd6\") " pod="openstack/memcached-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.321882 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.321913 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/700f4c86-1aa6-4cbd-a3d5-680ca2763884-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.321929 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/700f4c86-1aa6-4cbd-a3d5-680ca2763884-server-conf\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.321948 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/700f4c86-1aa6-4cbd-a3d5-680ca2763884-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.322354 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.323801 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.324898 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/700f4c86-1aa6-4cbd-a3d5-680ca2763884-server-conf\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.325598 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/700f4c86-1aa6-4cbd-a3d5-680ca2763884-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.326262 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.326381 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/700f4c86-1aa6-4cbd-a3d5-680ca2763884-pod-info\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.339809 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z6n5\" (UniqueName: \"kubernetes.io/projected/700f4c86-1aa6-4cbd-a3d5-680ca2763884-kube-api-access-9z6n5\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.340850 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.340888 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/befe2f2c7864de6777c17bbf4c0bd7318501f5434b28fa16494354dec037595d/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.351998 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-dp55n" event={"ID":"fa1478d6-53a6-489d-a7a2-e47fee0aec60","Type":"ContainerStarted","Data":"c0e57144c55b158b776db898c268da97b285326bd46910dc610318ae92111e98"} Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.352046 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-dp55n" event={"ID":"fa1478d6-53a6-489d-a7a2-e47fee0aec60","Type":"ContainerStarted","Data":"1486ca27ca9ff71e640f473e436b82c3b9c7e9952fba33311ab1f80fb8e97f8b"} Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.355653 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/700f4c86-1aa6-4cbd-a3d5-680ca2763884-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.356886 4813 generic.go:334] "Generic (PLEG): container finished" podID="bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0" containerID="a3468a6857267866bf193e1eb2b9877e026c3af8d8cf76b4d3a81480157bb7aa" exitCode=0 Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.356927 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" 
event={"ID":"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0","Type":"ContainerDied","Data":"a3468a6857267866bf193e1eb2b9877e026c3af8d8cf76b4d3a81480157bb7aa"} Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.356970 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" event={"ID":"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0","Type":"ContainerStarted","Data":"ca077aa27d035d3bf5c9d42b084d40a1cf8ee5e58c15ed48e4e0ebafd1ce03c3"} Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.408322 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\") pod \"rabbitmq-server-0\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.422930 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6c529a9-395b-4e55-87d1-f93fb0c98cd6-config-data\") pod \"memcached-0\" (UID: \"e6c529a9-395b-4e55-87d1-f93fb0c98cd6\") " pod="openstack/memcached-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.423064 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6c529a9-395b-4e55-87d1-f93fb0c98cd6-kolla-config\") pod \"memcached-0\" (UID: \"e6c529a9-395b-4e55-87d1-f93fb0c98cd6\") " pod="openstack/memcached-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.423144 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdbp6\" (UniqueName: \"kubernetes.io/projected/e6c529a9-395b-4e55-87d1-f93fb0c98cd6-kube-api-access-tdbp6\") pod \"memcached-0\" (UID: \"e6c529a9-395b-4e55-87d1-f93fb0c98cd6\") " pod="openstack/memcached-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.423978 
4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6c529a9-395b-4e55-87d1-f93fb0c98cd6-config-data\") pod \"memcached-0\" (UID: \"e6c529a9-395b-4e55-87d1-f93fb0c98cd6\") " pod="openstack/memcached-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.429385 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6c529a9-395b-4e55-87d1-f93fb0c98cd6-kolla-config\") pod \"memcached-0\" (UID: \"e6c529a9-395b-4e55-87d1-f93fb0c98cd6\") " pod="openstack/memcached-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.435406 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.436702 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.445546 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.445736 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-88bpc" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.445889 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.446018 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.446120 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.452121 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.452699 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.452732 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdbp6\" (UniqueName: \"kubernetes.io/projected/e6c529a9-395b-4e55-87d1-f93fb0c98cd6-kube-api-access-tdbp6\") pod \"memcached-0\" (UID: \"e6c529a9-395b-4e55-87d1-f93fb0c98cd6\") " pod="openstack/memcached-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.524843 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2c1ace90-8e41-4862-8be0-f0500e93b9f3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.524893 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.524927 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jdxd\" (UniqueName: \"kubernetes.io/projected/2c1ace90-8e41-4862-8be0-f0500e93b9f3-kube-api-access-2jdxd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.524947 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.525013 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2c1ace90-8e41-4862-8be0-f0500e93b9f3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.525058 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.525112 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.525129 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2c1ace90-8e41-4862-8be0-f0500e93b9f3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.525143 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2c1ace90-8e41-4862-8be0-f0500e93b9f3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.587571 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 19:46:34 crc kubenswrapper[4813]: E0219 19:46:34.617444 4813 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 19 19:46:34 crc kubenswrapper[4813]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 19:46:34 crc kubenswrapper[4813]: > podSandboxID="ca077aa27d035d3bf5c9d42b084d40a1cf8ee5e58c15ed48e4e0ebafd1ce03c3" Feb 19 19:46:34 crc kubenswrapper[4813]: E0219 19:46:34.617599 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 19 19:46:34 crc kubenswrapper[4813]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czkmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7c4c8f55b5-4gzmc_openstack(bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 19 19:46:34 crc kubenswrapper[4813]: > logger="UnhandledError" Feb 19 19:46:34 crc kubenswrapper[4813]: E0219 19:46:34.618826 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" podUID="bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.626825 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jdxd\" (UniqueName: \"kubernetes.io/projected/2c1ace90-8e41-4862-8be0-f0500e93b9f3-kube-api-access-2jdxd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.626877 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.626913 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2c1ace90-8e41-4862-8be0-f0500e93b9f3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.627044 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.627119 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.627144 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2c1ace90-8e41-4862-8be0-f0500e93b9f3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.627169 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2c1ace90-8e41-4862-8be0-f0500e93b9f3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.627213 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2c1ace90-8e41-4862-8be0-f0500e93b9f3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.627242 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.627758 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.628118 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 
19:46:34.628696 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2c1ace90-8e41-4862-8be0-f0500e93b9f3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.628847 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2c1ace90-8e41-4862-8be0-f0500e93b9f3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.632655 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.633042 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2c1ace90-8e41-4862-8be0-f0500e93b9f3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.633227 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.633272 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d0202a2aad552b9f63e1c75a8cf8bbd022754947a395994a4047c05a28b21699/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.634014 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2c1ace90-8e41-4862-8be0-f0500e93b9f3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.648296 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jdxd\" (UniqueName: \"kubernetes.io/projected/2c1ace90-8e41-4862-8be0-f0500e93b9f3-kube-api-access-2jdxd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.677372 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\") pod \"rabbitmq-cell1-server-0\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.700481 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.765866 4813 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.768446 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.772057 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-9xndb" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.772126 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.773047 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.773062 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.781003 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.783753 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.830176 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/77107511-f026-4e3d-9598-65484b98aea8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.830231 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/77107511-f026-4e3d-9598-65484b98aea8-kolla-config\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " 
pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.830288 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/77107511-f026-4e3d-9598-65484b98aea8-config-data-default\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.830461 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/77107511-f026-4e3d-9598-65484b98aea8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.830574 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzs94\" (UniqueName: \"kubernetes.io/projected/77107511-f026-4e3d-9598-65484b98aea8-kube-api-access-tzs94\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.830746 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77107511-f026-4e3d-9598-65484b98aea8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.830792 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4caff24a-7038-41cf-b964-c0ed35e9d3c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4caff24a-7038-41cf-b964-c0ed35e9d3c5\") pod \"openstack-galera-0\" (UID: 
\"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.830818 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77107511-f026-4e3d-9598-65484b98aea8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.840350 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.932355 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzs94\" (UniqueName: \"kubernetes.io/projected/77107511-f026-4e3d-9598-65484b98aea8-kube-api-access-tzs94\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.932443 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77107511-f026-4e3d-9598-65484b98aea8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.932472 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4caff24a-7038-41cf-b964-c0ed35e9d3c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4caff24a-7038-41cf-b964-c0ed35e9d3c5\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.932496 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/77107511-f026-4e3d-9598-65484b98aea8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.932528 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/77107511-f026-4e3d-9598-65484b98aea8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.932560 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/77107511-f026-4e3d-9598-65484b98aea8-kolla-config\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.932605 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/77107511-f026-4e3d-9598-65484b98aea8-config-data-default\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.932647 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/77107511-f026-4e3d-9598-65484b98aea8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.933164 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/77107511-f026-4e3d-9598-65484b98aea8-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.935596 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.935645 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4caff24a-7038-41cf-b964-c0ed35e9d3c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4caff24a-7038-41cf-b964-c0ed35e9d3c5\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/80ca45c04f4804da7056f6792c68c90970a2823869ceb3559d6b15c2cae05344/globalmount\"" pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.936441 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/77107511-f026-4e3d-9598-65484b98aea8-kolla-config\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.937256 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77107511-f026-4e3d-9598-65484b98aea8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.937455 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/77107511-f026-4e3d-9598-65484b98aea8-config-data-default\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 
19:46:34.938364 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77107511-f026-4e3d-9598-65484b98aea8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.938555 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/77107511-f026-4e3d-9598-65484b98aea8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.951233 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzs94\" (UniqueName: \"kubernetes.io/projected/77107511-f026-4e3d-9598-65484b98aea8-kube-api-access-tzs94\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:34 crc kubenswrapper[4813]: I0219 19:46:34.969530 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4caff24a-7038-41cf-b964-c0ed35e9d3c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4caff24a-7038-41cf-b964-c0ed35e9d3c5\") pod \"openstack-galera-0\" (UID: \"77107511-f026-4e3d-9598-65484b98aea8\") " pod="openstack/openstack-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.020959 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:46:35 crc kubenswrapper[4813]: W0219 19:46:35.026311 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c1ace90_8e41_4862_8be0_f0500e93b9f3.slice/crio-d2c0beb5aee6f9ff6aebee5823fa80d234026ad52414821b6edffb69ccb5869b WatchSource:0}: Error finding container 
d2c0beb5aee6f9ff6aebee5823fa80d234026ad52414821b6edffb69ccb5869b: Status 404 returned error can't find the container with id d2c0beb5aee6f9ff6aebee5823fa80d234026ad52414821b6edffb69ccb5869b Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.046620 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 19:46:35 crc kubenswrapper[4813]: W0219 19:46:35.054828 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6c529a9_395b_4e55_87d1_f93fb0c98cd6.slice/crio-efcccda281e89608fdb17ad390c07841adca3189d8760324e844d2101ee964bb WatchSource:0}: Error finding container efcccda281e89608fdb17ad390c07841adca3189d8760324e844d2101ee964bb: Status 404 returned error can't find the container with id efcccda281e89608fdb17ad390c07841adca3189d8760324e844d2101ee964bb Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.104318 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.292734 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.294207 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.299649 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.300050 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.300236 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.300487 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-jf62q" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.313763 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.338642 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.338828 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjdtx\" (UniqueName: \"kubernetes.io/projected/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-kube-api-access-cjdtx\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.338864 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.339117 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.339159 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.339222 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.339434 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-51355197-deb7-4522-8616-1ea4517b524a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51355197-deb7-4522-8616-1ea4517b524a\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.339468 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.369280 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2c1ace90-8e41-4862-8be0-f0500e93b9f3","Type":"ContainerStarted","Data":"d2c0beb5aee6f9ff6aebee5823fa80d234026ad52414821b6edffb69ccb5869b"} Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.372199 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e6c529a9-395b-4e55-87d1-f93fb0c98cd6","Type":"ContainerStarted","Data":"82ee4f3e43da1f66fb95bb5746e709dc6f189332688f6000ef3a4d9e6fbbad03"} Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.372219 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e6c529a9-395b-4e55-87d1-f93fb0c98cd6","Type":"ContainerStarted","Data":"efcccda281e89608fdb17ad390c07841adca3189d8760324e844d2101ee964bb"} Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.373179 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.374935 4813 generic.go:334] "Generic (PLEG): container finished" podID="fa1478d6-53a6-489d-a7a2-e47fee0aec60" containerID="c0e57144c55b158b776db898c268da97b285326bd46910dc610318ae92111e98" exitCode=0 Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.375003 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-dp55n" event={"ID":"fa1478d6-53a6-489d-a7a2-e47fee0aec60","Type":"ContainerDied","Data":"c0e57144c55b158b776db898c268da97b285326bd46910dc610318ae92111e98"} Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.377584 
4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"700f4c86-1aa6-4cbd-a3d5-680ca2763884","Type":"ContainerStarted","Data":"7b78395efe9f7c6b234d1599729215fb9d51b35bbd9c60939f791950b5b22815"} Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.408971 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.408941016 podStartE2EDuration="1.408941016s" podCreationTimestamp="2026-02-19 19:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:46:35.395056148 +0000 UTC m=+4614.620496709" watchObservedRunningTime="2026-02-19 19:46:35.408941016 +0000 UTC m=+4614.634381557" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.440856 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.440943 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-51355197-deb7-4522-8616-1ea4517b524a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51355197-deb7-4522-8616-1ea4517b524a\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.441034 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc 
kubenswrapper[4813]: I0219 19:46:35.441079 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.442506 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.442885 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjdtx\" (UniqueName: \"kubernetes.io/projected/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-kube-api-access-cjdtx\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.442918 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.442982 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.443012 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.444030 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.444678 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.445307 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.445522 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.447154 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.447184 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-51355197-deb7-4522-8616-1ea4517b524a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51355197-deb7-4522-8616-1ea4517b524a\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/808eff7ba8e4815a16d6a5aa4b4fd06c8b112d23cafb44f73e347d59461daffc/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.462246 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjdtx\" (UniqueName: \"kubernetes.io/projected/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-kube-api-access-cjdtx\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.549303 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.555753 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-51355197-deb7-4522-8616-1ea4517b524a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51355197-deb7-4522-8616-1ea4517b524a\") pod \"openstack-cell1-galera-0\" (UID: \"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.570667 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 19:46:35 crc kubenswrapper[4813]: W0219 
19:46:35.577868 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77107511_f026_4e3d_9598_65484b98aea8.slice/crio-6ad1cfcad8089733e8f5d74832a7c5d3b621dfe8e06b1d187cf6783402dcf576 WatchSource:0}: Error finding container 6ad1cfcad8089733e8f5d74832a7c5d3b621dfe8e06b1d187cf6783402dcf576: Status 404 returned error can't find the container with id 6ad1cfcad8089733e8f5d74832a7c5d3b621dfe8e06b1d187cf6783402dcf576 Feb 19 19:46:35 crc kubenswrapper[4813]: I0219 19:46:35.616821 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:36 crc kubenswrapper[4813]: I0219 19:46:36.111700 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 19:46:36 crc kubenswrapper[4813]: W0219 19:46:36.113792 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cdfd5a1_a1f0_4cd8_a24e_4c2e660a08c4.slice/crio-f3af379477dc84150ebf7e95d37cc44760c5e7ef95526e66f0430992937dae09 WatchSource:0}: Error finding container f3af379477dc84150ebf7e95d37cc44760c5e7ef95526e66f0430992937dae09: Status 404 returned error can't find the container with id f3af379477dc84150ebf7e95d37cc44760c5e7ef95526e66f0430992937dae09 Feb 19 19:46:36 crc kubenswrapper[4813]: I0219 19:46:36.387275 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-dp55n" event={"ID":"fa1478d6-53a6-489d-a7a2-e47fee0aec60","Type":"ContainerStarted","Data":"0aba1baea714c527b93bf1b842ef22a837ff08e6a1934ae1dc887f530d1a1711"} Feb 19 19:46:36 crc kubenswrapper[4813]: I0219 19:46:36.387584 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-589cf688cc-dp55n" Feb 19 19:46:36 crc kubenswrapper[4813]: I0219 19:46:36.389379 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"700f4c86-1aa6-4cbd-a3d5-680ca2763884","Type":"ContainerStarted","Data":"e2caa54dad2ab49a1d0d6047668227d2f980c33b38677d07297fda75932c603e"} Feb 19 19:46:36 crc kubenswrapper[4813]: I0219 19:46:36.391099 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"77107511-f026-4e3d-9598-65484b98aea8","Type":"ContainerStarted","Data":"ead3a7d05277deeca162fa7e7ece92697c1d359a4e1e4cca7a97b5333ac71ba9"} Feb 19 19:46:36 crc kubenswrapper[4813]: I0219 19:46:36.391130 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"77107511-f026-4e3d-9598-65484b98aea8","Type":"ContainerStarted","Data":"6ad1cfcad8089733e8f5d74832a7c5d3b621dfe8e06b1d187cf6783402dcf576"} Feb 19 19:46:36 crc kubenswrapper[4813]: I0219 19:46:36.392524 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4","Type":"ContainerStarted","Data":"a4e329e05a65bd36573c433db71135d8c7ee6ac90968500147a254cec17cc875"} Feb 19 19:46:36 crc kubenswrapper[4813]: I0219 19:46:36.392557 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4","Type":"ContainerStarted","Data":"f3af379477dc84150ebf7e95d37cc44760c5e7ef95526e66f0430992937dae09"} Feb 19 19:46:36 crc kubenswrapper[4813]: I0219 19:46:36.394350 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2c1ace90-8e41-4862-8be0-f0500e93b9f3","Type":"ContainerStarted","Data":"39857af26eb6def16d85706c5b1aca2cf737070b400a1b7b4f0e2b3fccf23154"} Feb 19 19:46:36 crc kubenswrapper[4813]: I0219 19:46:36.396741 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" 
event={"ID":"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0","Type":"ContainerStarted","Data":"08775473d06c25ee8524004dccb1e359544eaf7cdec67220e3712fce0181c93b"} Feb 19 19:46:36 crc kubenswrapper[4813]: I0219 19:46:36.396933 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" Feb 19 19:46:36 crc kubenswrapper[4813]: I0219 19:46:36.413207 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-589cf688cc-dp55n" podStartSLOduration=3.413190734 podStartE2EDuration="3.413190734s" podCreationTimestamp="2026-02-19 19:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:46:36.412314706 +0000 UTC m=+4615.637755247" watchObservedRunningTime="2026-02-19 19:46:36.413190734 +0000 UTC m=+4615.638631275" Feb 19 19:46:36 crc kubenswrapper[4813]: I0219 19:46:36.499720 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" podStartSLOduration=4.4996954989999995 podStartE2EDuration="4.499695499s" podCreationTimestamp="2026-02-19 19:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:46:36.49489012 +0000 UTC m=+4615.720330681" watchObservedRunningTime="2026-02-19 19:46:36.499695499 +0000 UTC m=+4615.725136040" Feb 19 19:46:39 crc kubenswrapper[4813]: I0219 19:46:39.472264 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:46:39 crc kubenswrapper[4813]: E0219 19:46:39.473009 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:46:40 crc kubenswrapper[4813]: I0219 19:46:40.432347 4813 generic.go:334] "Generic (PLEG): container finished" podID="2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4" containerID="a4e329e05a65bd36573c433db71135d8c7ee6ac90968500147a254cec17cc875" exitCode=0 Feb 19 19:46:40 crc kubenswrapper[4813]: I0219 19:46:40.432484 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4","Type":"ContainerDied","Data":"a4e329e05a65bd36573c433db71135d8c7ee6ac90968500147a254cec17cc875"} Feb 19 19:46:40 crc kubenswrapper[4813]: I0219 19:46:40.434889 4813 generic.go:334] "Generic (PLEG): container finished" podID="77107511-f026-4e3d-9598-65484b98aea8" containerID="ead3a7d05277deeca162fa7e7ece92697c1d359a4e1e4cca7a97b5333ac71ba9" exitCode=0 Feb 19 19:46:40 crc kubenswrapper[4813]: I0219 19:46:40.434998 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"77107511-f026-4e3d-9598-65484b98aea8","Type":"ContainerDied","Data":"ead3a7d05277deeca162fa7e7ece92697c1d359a4e1e4cca7a97b5333ac71ba9"} Feb 19 19:46:41 crc kubenswrapper[4813]: I0219 19:46:41.445276 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"77107511-f026-4e3d-9598-65484b98aea8","Type":"ContainerStarted","Data":"3118a0f714ceef2e5f13b407390142017eb420f58c0cc8a14c451ea7294eb7df"} Feb 19 19:46:41 crc kubenswrapper[4813]: I0219 19:46:41.448463 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4","Type":"ContainerStarted","Data":"a98f95b903ca713f32ca635d569566d5cf80d51c58fe42428c50bb285365d97c"} Feb 19 19:46:41 crc kubenswrapper[4813]: 
I0219 19:46:41.482480 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.482456559 podStartE2EDuration="8.482456559s" podCreationTimestamp="2026-02-19 19:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:46:41.469683215 +0000 UTC m=+4620.695123766" watchObservedRunningTime="2026-02-19 19:46:41.482456559 +0000 UTC m=+4620.707897120" Feb 19 19:46:41 crc kubenswrapper[4813]: I0219 19:46:41.501241 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.501220936 podStartE2EDuration="7.501220936s" podCreationTimestamp="2026-02-19 19:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:46:41.496002896 +0000 UTC m=+4620.721443487" watchObservedRunningTime="2026-02-19 19:46:41.501220936 +0000 UTC m=+4620.726661477" Feb 19 19:46:43 crc kubenswrapper[4813]: I0219 19:46:43.323151 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" Feb 19 19:46:43 crc kubenswrapper[4813]: I0219 19:46:43.578144 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-589cf688cc-dp55n" Feb 19 19:46:43 crc kubenswrapper[4813]: I0219 19:46:43.670600 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c4c8f55b5-4gzmc"] Feb 19 19:46:43 crc kubenswrapper[4813]: I0219 19:46:43.671031 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" podUID="bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0" containerName="dnsmasq-dns" containerID="cri-o://08775473d06c25ee8524004dccb1e359544eaf7cdec67220e3712fce0181c93b" gracePeriod=10 Feb 19 19:46:44 crc 
kubenswrapper[4813]: I0219 19:46:44.135849 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.180971 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-config\") pod \"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0\" (UID: \"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0\") " Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.181045 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czkmv\" (UniqueName: \"kubernetes.io/projected/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-kube-api-access-czkmv\") pod \"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0\" (UID: \"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0\") " Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.181091 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-dns-svc\") pod \"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0\" (UID: \"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0\") " Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.187412 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-kube-api-access-czkmv" (OuterVolumeSpecName: "kube-api-access-czkmv") pod "bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0" (UID: "bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0"). InnerVolumeSpecName "kube-api-access-czkmv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.226339 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0" (UID: "bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.229800 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-config" (OuterVolumeSpecName: "config") pod "bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0" (UID: "bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.282417 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.282449 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czkmv\" (UniqueName: \"kubernetes.io/projected/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-kube-api-access-czkmv\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.282461 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.473271 4813 generic.go:334] "Generic (PLEG): container finished" podID="bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0" containerID="08775473d06c25ee8524004dccb1e359544eaf7cdec67220e3712fce0181c93b" exitCode=0 Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 
19:46:44.473325 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.473330 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" event={"ID":"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0","Type":"ContainerDied","Data":"08775473d06c25ee8524004dccb1e359544eaf7cdec67220e3712fce0181c93b"} Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.473371 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4c8f55b5-4gzmc" event={"ID":"bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0","Type":"ContainerDied","Data":"ca077aa27d035d3bf5c9d42b084d40a1cf8ee5e58c15ed48e4e0ebafd1ce03c3"} Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.473397 4813 scope.go:117] "RemoveContainer" containerID="08775473d06c25ee8524004dccb1e359544eaf7cdec67220e3712fce0181c93b" Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.501056 4813 scope.go:117] "RemoveContainer" containerID="a3468a6857267866bf193e1eb2b9877e026c3af8d8cf76b4d3a81480157bb7aa" Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.507182 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c4c8f55b5-4gzmc"] Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.512530 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c4c8f55b5-4gzmc"] Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.589152 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.851796 4813 scope.go:117] "RemoveContainer" containerID="08775473d06c25ee8524004dccb1e359544eaf7cdec67220e3712fce0181c93b" Feb 19 19:46:44 crc kubenswrapper[4813]: E0219 19:46:44.852215 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"08775473d06c25ee8524004dccb1e359544eaf7cdec67220e3712fce0181c93b\": container with ID starting with 08775473d06c25ee8524004dccb1e359544eaf7cdec67220e3712fce0181c93b not found: ID does not exist" containerID="08775473d06c25ee8524004dccb1e359544eaf7cdec67220e3712fce0181c93b" Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.852243 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08775473d06c25ee8524004dccb1e359544eaf7cdec67220e3712fce0181c93b"} err="failed to get container status \"08775473d06c25ee8524004dccb1e359544eaf7cdec67220e3712fce0181c93b\": rpc error: code = NotFound desc = could not find container \"08775473d06c25ee8524004dccb1e359544eaf7cdec67220e3712fce0181c93b\": container with ID starting with 08775473d06c25ee8524004dccb1e359544eaf7cdec67220e3712fce0181c93b not found: ID does not exist" Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.852262 4813 scope.go:117] "RemoveContainer" containerID="a3468a6857267866bf193e1eb2b9877e026c3af8d8cf76b4d3a81480157bb7aa" Feb 19 19:46:44 crc kubenswrapper[4813]: E0219 19:46:44.852616 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3468a6857267866bf193e1eb2b9877e026c3af8d8cf76b4d3a81480157bb7aa\": container with ID starting with a3468a6857267866bf193e1eb2b9877e026c3af8d8cf76b4d3a81480157bb7aa not found: ID does not exist" containerID="a3468a6857267866bf193e1eb2b9877e026c3af8d8cf76b4d3a81480157bb7aa" Feb 19 19:46:44 crc kubenswrapper[4813]: I0219 19:46:44.852692 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3468a6857267866bf193e1eb2b9877e026c3af8d8cf76b4d3a81480157bb7aa"} err="failed to get container status \"a3468a6857267866bf193e1eb2b9877e026c3af8d8cf76b4d3a81480157bb7aa\": rpc error: code = NotFound desc = could not find container \"a3468a6857267866bf193e1eb2b9877e026c3af8d8cf76b4d3a81480157bb7aa\": container with ID 
starting with a3468a6857267866bf193e1eb2b9877e026c3af8d8cf76b4d3a81480157bb7aa not found: ID does not exist" Feb 19 19:46:45 crc kubenswrapper[4813]: I0219 19:46:45.104725 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 19 19:46:45 crc kubenswrapper[4813]: I0219 19:46:45.104818 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 19 19:46:45 crc kubenswrapper[4813]: I0219 19:46:45.480343 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0" path="/var/lib/kubelet/pods/bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0/volumes" Feb 19 19:46:45 crc kubenswrapper[4813]: I0219 19:46:45.617670 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:45 crc kubenswrapper[4813]: I0219 19:46:45.617703 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:47 crc kubenswrapper[4813]: I0219 19:46:47.424560 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 19 19:46:47 crc kubenswrapper[4813]: I0219 19:46:47.491832 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 19 19:46:48 crc kubenswrapper[4813]: I0219 19:46:48.263039 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:48 crc kubenswrapper[4813]: I0219 19:46:48.340003 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 19 19:46:53 crc kubenswrapper[4813]: I0219 19:46:53.698390 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kd527"] Feb 19 19:46:53 crc kubenswrapper[4813]: E0219 19:46:53.700252 4813 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0" containerName="dnsmasq-dns" Feb 19 19:46:53 crc kubenswrapper[4813]: I0219 19:46:53.700273 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0" containerName="dnsmasq-dns" Feb 19 19:46:53 crc kubenswrapper[4813]: E0219 19:46:53.700303 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0" containerName="init" Feb 19 19:46:53 crc kubenswrapper[4813]: I0219 19:46:53.700311 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0" containerName="init" Feb 19 19:46:53 crc kubenswrapper[4813]: I0219 19:46:53.700524 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb0cbc0-08ba-4cb4-a29d-17ddaa09f7e0" containerName="dnsmasq-dns" Feb 19 19:46:53 crc kubenswrapper[4813]: I0219 19:46:53.701157 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kd527" Feb 19 19:46:53 crc kubenswrapper[4813]: I0219 19:46:53.705297 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kd527"] Feb 19 19:46:53 crc kubenswrapper[4813]: I0219 19:46:53.740571 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 19:46:53 crc kubenswrapper[4813]: I0219 19:46:53.842354 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r4kd\" (UniqueName: \"kubernetes.io/projected/ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992-kube-api-access-8r4kd\") pod \"root-account-create-update-kd527\" (UID: \"ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992\") " pod="openstack/root-account-create-update-kd527" Feb 19 19:46:53 crc kubenswrapper[4813]: I0219 19:46:53.842687 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992-operator-scripts\") pod \"root-account-create-update-kd527\" (UID: \"ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992\") " pod="openstack/root-account-create-update-kd527" Feb 19 19:46:53 crc kubenswrapper[4813]: I0219 19:46:53.943902 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r4kd\" (UniqueName: \"kubernetes.io/projected/ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992-kube-api-access-8r4kd\") pod \"root-account-create-update-kd527\" (UID: \"ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992\") " pod="openstack/root-account-create-update-kd527" Feb 19 19:46:53 crc kubenswrapper[4813]: I0219 19:46:53.944120 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992-operator-scripts\") pod \"root-account-create-update-kd527\" (UID: 
\"ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992\") " pod="openstack/root-account-create-update-kd527" Feb 19 19:46:53 crc kubenswrapper[4813]: I0219 19:46:53.945695 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992-operator-scripts\") pod \"root-account-create-update-kd527\" (UID: \"ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992\") " pod="openstack/root-account-create-update-kd527" Feb 19 19:46:54 crc kubenswrapper[4813]: I0219 19:46:54.046207 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r4kd\" (UniqueName: \"kubernetes.io/projected/ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992-kube-api-access-8r4kd\") pod \"root-account-create-update-kd527\" (UID: \"ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992\") " pod="openstack/root-account-create-update-kd527" Feb 19 19:46:54 crc kubenswrapper[4813]: I0219 19:46:54.063643 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kd527" Feb 19 19:46:54 crc kubenswrapper[4813]: I0219 19:46:54.472762 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:46:54 crc kubenswrapper[4813]: E0219 19:46:54.473780 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:46:54 crc kubenswrapper[4813]: I0219 19:46:54.496113 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kd527"] Feb 19 19:46:54 crc kubenswrapper[4813]: I0219 19:46:54.594002 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kd527" event={"ID":"ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992","Type":"ContainerStarted","Data":"6116a3bea76688b4fab25df525ccc76e96584d7ddecaafdac4e8dca21f885232"} Feb 19 19:46:55 crc kubenswrapper[4813]: I0219 19:46:55.615040 4813 generic.go:334] "Generic (PLEG): container finished" podID="ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992" containerID="7c9b70bfa6befc195a134bca63f405870b1806ab104f8a0a7c49c09b7106fecf" exitCode=0 Feb 19 19:46:55 crc kubenswrapper[4813]: I0219 19:46:55.615093 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kd527" event={"ID":"ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992","Type":"ContainerDied","Data":"7c9b70bfa6befc195a134bca63f405870b1806ab104f8a0a7c49c09b7106fecf"} Feb 19 19:46:56 crc kubenswrapper[4813]: I0219 19:46:56.978990 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kd527" Feb 19 19:46:57 crc kubenswrapper[4813]: I0219 19:46:57.090123 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r4kd\" (UniqueName: \"kubernetes.io/projected/ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992-kube-api-access-8r4kd\") pod \"ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992\" (UID: \"ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992\") " Feb 19 19:46:57 crc kubenswrapper[4813]: I0219 19:46:57.090249 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992-operator-scripts\") pod \"ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992\" (UID: \"ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992\") " Feb 19 19:46:57 crc kubenswrapper[4813]: I0219 19:46:57.091334 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992" (UID: "ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:46:57 crc kubenswrapper[4813]: I0219 19:46:57.091888 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:57 crc kubenswrapper[4813]: I0219 19:46:57.097572 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992-kube-api-access-8r4kd" (OuterVolumeSpecName: "kube-api-access-8r4kd") pod "ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992" (UID: "ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992"). InnerVolumeSpecName "kube-api-access-8r4kd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:46:57 crc kubenswrapper[4813]: I0219 19:46:57.193289 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r4kd\" (UniqueName: \"kubernetes.io/projected/ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992-kube-api-access-8r4kd\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:57 crc kubenswrapper[4813]: I0219 19:46:57.634577 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kd527" event={"ID":"ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992","Type":"ContainerDied","Data":"6116a3bea76688b4fab25df525ccc76e96584d7ddecaafdac4e8dca21f885232"} Feb 19 19:46:57 crc kubenswrapper[4813]: I0219 19:46:57.634908 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6116a3bea76688b4fab25df525ccc76e96584d7ddecaafdac4e8dca21f885232" Feb 19 19:46:57 crc kubenswrapper[4813]: I0219 19:46:57.634669 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kd527" Feb 19 19:46:59 crc kubenswrapper[4813]: I0219 19:46:59.286657 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kd527"] Feb 19 19:46:59 crc kubenswrapper[4813]: I0219 19:46:59.295530 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kd527"] Feb 19 19:46:59 crc kubenswrapper[4813]: I0219 19:46:59.487253 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992" path="/var/lib/kubelet/pods/ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992/volumes" Feb 19 19:47:04 crc kubenswrapper[4813]: I0219 19:47:04.280844 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-t9zb4"] Feb 19 19:47:04 crc kubenswrapper[4813]: E0219 19:47:04.281789 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992" 
containerName="mariadb-account-create-update" Feb 19 19:47:04 crc kubenswrapper[4813]: I0219 19:47:04.281809 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992" containerName="mariadb-account-create-update" Feb 19 19:47:04 crc kubenswrapper[4813]: I0219 19:47:04.284179 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad1ac157-fdbc-4d12-9c75-d7ec3ea0e992" containerName="mariadb-account-create-update" Feb 19 19:47:04 crc kubenswrapper[4813]: I0219 19:47:04.284930 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-t9zb4" Feb 19 19:47:04 crc kubenswrapper[4813]: I0219 19:47:04.287337 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 19:47:04 crc kubenswrapper[4813]: I0219 19:47:04.290823 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t9zb4"] Feb 19 19:47:04 crc kubenswrapper[4813]: I0219 19:47:04.404489 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30a87e56-d537-4563-b122-f6ca0132bf0d-operator-scripts\") pod \"root-account-create-update-t9zb4\" (UID: \"30a87e56-d537-4563-b122-f6ca0132bf0d\") " pod="openstack/root-account-create-update-t9zb4" Feb 19 19:47:04 crc kubenswrapper[4813]: I0219 19:47:04.404708 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwzgn\" (UniqueName: \"kubernetes.io/projected/30a87e56-d537-4563-b122-f6ca0132bf0d-kube-api-access-dwzgn\") pod \"root-account-create-update-t9zb4\" (UID: \"30a87e56-d537-4563-b122-f6ca0132bf0d\") " pod="openstack/root-account-create-update-t9zb4" Feb 19 19:47:04 crc kubenswrapper[4813]: I0219 19:47:04.506229 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30a87e56-d537-4563-b122-f6ca0132bf0d-operator-scripts\") pod \"root-account-create-update-t9zb4\" (UID: \"30a87e56-d537-4563-b122-f6ca0132bf0d\") " pod="openstack/root-account-create-update-t9zb4" Feb 19 19:47:04 crc kubenswrapper[4813]: I0219 19:47:04.506351 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwzgn\" (UniqueName: \"kubernetes.io/projected/30a87e56-d537-4563-b122-f6ca0132bf0d-kube-api-access-dwzgn\") pod \"root-account-create-update-t9zb4\" (UID: \"30a87e56-d537-4563-b122-f6ca0132bf0d\") " pod="openstack/root-account-create-update-t9zb4" Feb 19 19:47:04 crc kubenswrapper[4813]: I0219 19:47:04.508433 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30a87e56-d537-4563-b122-f6ca0132bf0d-operator-scripts\") pod \"root-account-create-update-t9zb4\" (UID: \"30a87e56-d537-4563-b122-f6ca0132bf0d\") " pod="openstack/root-account-create-update-t9zb4" Feb 19 19:47:04 crc kubenswrapper[4813]: I0219 19:47:04.529906 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwzgn\" (UniqueName: \"kubernetes.io/projected/30a87e56-d537-4563-b122-f6ca0132bf0d-kube-api-access-dwzgn\") pod \"root-account-create-update-t9zb4\" (UID: \"30a87e56-d537-4563-b122-f6ca0132bf0d\") " pod="openstack/root-account-create-update-t9zb4" Feb 19 19:47:04 crc kubenswrapper[4813]: I0219 19:47:04.608920 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t9zb4" Feb 19 19:47:05 crc kubenswrapper[4813]: I0219 19:47:05.060387 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t9zb4"] Feb 19 19:47:05 crc kubenswrapper[4813]: W0219 19:47:05.456908 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30a87e56_d537_4563_b122_f6ca0132bf0d.slice/crio-ee2bffa036ee4982e2c08b1ab37015b3a1ceda475e29b20c675dde24bac64293 WatchSource:0}: Error finding container ee2bffa036ee4982e2c08b1ab37015b3a1ceda475e29b20c675dde24bac64293: Status 404 returned error can't find the container with id ee2bffa036ee4982e2c08b1ab37015b3a1ceda475e29b20c675dde24bac64293 Feb 19 19:47:05 crc kubenswrapper[4813]: I0219 19:47:05.698817 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t9zb4" event={"ID":"30a87e56-d537-4563-b122-f6ca0132bf0d","Type":"ContainerStarted","Data":"3502f5519457799330cc0c94e894fc1fa203db192142e76e14264c01c1168c07"} Feb 19 19:47:05 crc kubenswrapper[4813]: I0219 19:47:05.699228 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t9zb4" event={"ID":"30a87e56-d537-4563-b122-f6ca0132bf0d","Type":"ContainerStarted","Data":"ee2bffa036ee4982e2c08b1ab37015b3a1ceda475e29b20c675dde24bac64293"} Feb 19 19:47:05 crc kubenswrapper[4813]: I0219 19:47:05.713045 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-t9zb4" podStartSLOduration=1.7130230659999999 podStartE2EDuration="1.713023066s" podCreationTimestamp="2026-02-19 19:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:47:05.711664284 +0000 UTC m=+4644.937104835" watchObservedRunningTime="2026-02-19 19:47:05.713023066 +0000 UTC 
m=+4644.938463607" Feb 19 19:47:06 crc kubenswrapper[4813]: I0219 19:47:06.708700 4813 generic.go:334] "Generic (PLEG): container finished" podID="30a87e56-d537-4563-b122-f6ca0132bf0d" containerID="3502f5519457799330cc0c94e894fc1fa203db192142e76e14264c01c1168c07" exitCode=0 Feb 19 19:47:06 crc kubenswrapper[4813]: I0219 19:47:06.709387 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t9zb4" event={"ID":"30a87e56-d537-4563-b122-f6ca0132bf0d","Type":"ContainerDied","Data":"3502f5519457799330cc0c94e894fc1fa203db192142e76e14264c01c1168c07"} Feb 19 19:47:08 crc kubenswrapper[4813]: I0219 19:47:08.124163 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-t9zb4" Feb 19 19:47:08 crc kubenswrapper[4813]: I0219 19:47:08.265169 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwzgn\" (UniqueName: \"kubernetes.io/projected/30a87e56-d537-4563-b122-f6ca0132bf0d-kube-api-access-dwzgn\") pod \"30a87e56-d537-4563-b122-f6ca0132bf0d\" (UID: \"30a87e56-d537-4563-b122-f6ca0132bf0d\") " Feb 19 19:47:08 crc kubenswrapper[4813]: I0219 19:47:08.265291 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30a87e56-d537-4563-b122-f6ca0132bf0d-operator-scripts\") pod \"30a87e56-d537-4563-b122-f6ca0132bf0d\" (UID: \"30a87e56-d537-4563-b122-f6ca0132bf0d\") " Feb 19 19:47:08 crc kubenswrapper[4813]: I0219 19:47:08.265936 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30a87e56-d537-4563-b122-f6ca0132bf0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30a87e56-d537-4563-b122-f6ca0132bf0d" (UID: "30a87e56-d537-4563-b122-f6ca0132bf0d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:47:08 crc kubenswrapper[4813]: I0219 19:47:08.283612 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30a87e56-d537-4563-b122-f6ca0132bf0d-kube-api-access-dwzgn" (OuterVolumeSpecName: "kube-api-access-dwzgn") pod "30a87e56-d537-4563-b122-f6ca0132bf0d" (UID: "30a87e56-d537-4563-b122-f6ca0132bf0d"). InnerVolumeSpecName "kube-api-access-dwzgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:47:08 crc kubenswrapper[4813]: E0219 19:47:08.288462 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c1ace90_8e41_4862_8be0_f0500e93b9f3.slice/crio-conmon-39857af26eb6def16d85706c5b1aca2cf737070b400a1b7b4f0e2b3fccf23154.scope\": RecentStats: unable to find data in memory cache]" Feb 19 19:47:08 crc kubenswrapper[4813]: I0219 19:47:08.367022 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30a87e56-d537-4563-b122-f6ca0132bf0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:08 crc kubenswrapper[4813]: I0219 19:47:08.367272 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwzgn\" (UniqueName: \"kubernetes.io/projected/30a87e56-d537-4563-b122-f6ca0132bf0d-kube-api-access-dwzgn\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:08 crc kubenswrapper[4813]: I0219 19:47:08.472291 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:47:08 crc kubenswrapper[4813]: E0219 19:47:08.473618 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:47:08 crc kubenswrapper[4813]: I0219 19:47:08.747732 4813 generic.go:334] "Generic (PLEG): container finished" podID="2c1ace90-8e41-4862-8be0-f0500e93b9f3" containerID="39857af26eb6def16d85706c5b1aca2cf737070b400a1b7b4f0e2b3fccf23154" exitCode=0 Feb 19 19:47:08 crc kubenswrapper[4813]: I0219 19:47:08.747818 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2c1ace90-8e41-4862-8be0-f0500e93b9f3","Type":"ContainerDied","Data":"39857af26eb6def16d85706c5b1aca2cf737070b400a1b7b4f0e2b3fccf23154"} Feb 19 19:47:08 crc kubenswrapper[4813]: I0219 19:47:08.756456 4813 generic.go:334] "Generic (PLEG): container finished" podID="700f4c86-1aa6-4cbd-a3d5-680ca2763884" containerID="e2caa54dad2ab49a1d0d6047668227d2f980c33b38677d07297fda75932c603e" exitCode=0 Feb 19 19:47:08 crc kubenswrapper[4813]: I0219 19:47:08.756548 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"700f4c86-1aa6-4cbd-a3d5-680ca2763884","Type":"ContainerDied","Data":"e2caa54dad2ab49a1d0d6047668227d2f980c33b38677d07297fda75932c603e"} Feb 19 19:47:08 crc kubenswrapper[4813]: I0219 19:47:08.758668 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t9zb4" Feb 19 19:47:08 crc kubenswrapper[4813]: I0219 19:47:08.758657 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t9zb4" event={"ID":"30a87e56-d537-4563-b122-f6ca0132bf0d","Type":"ContainerDied","Data":"ee2bffa036ee4982e2c08b1ab37015b3a1ceda475e29b20c675dde24bac64293"} Feb 19 19:47:08 crc kubenswrapper[4813]: I0219 19:47:08.758841 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee2bffa036ee4982e2c08b1ab37015b3a1ceda475e29b20c675dde24bac64293" Feb 19 19:47:09 crc kubenswrapper[4813]: I0219 19:47:09.767531 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2c1ace90-8e41-4862-8be0-f0500e93b9f3","Type":"ContainerStarted","Data":"2930b19f1eea20f82d2c9ee05fd4216d4d9ac59c6f2c8df298e0a8ed66f729c7"} Feb 19 19:47:09 crc kubenswrapper[4813]: I0219 19:47:09.768192 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:09 crc kubenswrapper[4813]: I0219 19:47:09.769466 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"700f4c86-1aa6-4cbd-a3d5-680ca2763884","Type":"ContainerStarted","Data":"ca0f1edc6dd9a204dcb7fd2b49a0007c6bacec9a5212dbea2d4bead7c6b17d51"} Feb 19 19:47:09 crc kubenswrapper[4813]: I0219 19:47:09.769652 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 19:47:09 crc kubenswrapper[4813]: I0219 19:47:09.799801 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.799765954 podStartE2EDuration="36.799765954s" podCreationTimestamp="2026-02-19 19:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
19:47:09.789436496 +0000 UTC m=+4649.014877057" watchObservedRunningTime="2026-02-19 19:47:09.799765954 +0000 UTC m=+4649.025206535" Feb 19 19:47:09 crc kubenswrapper[4813]: I0219 19:47:09.824635 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.824616759 podStartE2EDuration="36.824616759s" podCreationTimestamp="2026-02-19 19:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:47:09.822099382 +0000 UTC m=+4649.047539973" watchObservedRunningTime="2026-02-19 19:47:09.824616759 +0000 UTC m=+4649.050057300" Feb 19 19:47:23 crc kubenswrapper[4813]: I0219 19:47:23.472501 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:47:23 crc kubenswrapper[4813]: E0219 19:47:23.473647 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:47:24 crc kubenswrapper[4813]: I0219 19:47:24.455199 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 19:47:24 crc kubenswrapper[4813]: I0219 19:47:24.843156 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:27 crc kubenswrapper[4813]: I0219 19:47:27.805082 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-f99dr"] Feb 19 19:47:27 crc kubenswrapper[4813]: E0219 19:47:27.805688 4813 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="30a87e56-d537-4563-b122-f6ca0132bf0d" containerName="mariadb-account-create-update" Feb 19 19:47:27 crc kubenswrapper[4813]: I0219 19:47:27.805702 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a87e56-d537-4563-b122-f6ca0132bf0d" containerName="mariadb-account-create-update" Feb 19 19:47:27 crc kubenswrapper[4813]: I0219 19:47:27.805840 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="30a87e56-d537-4563-b122-f6ca0132bf0d" containerName="mariadb-account-create-update" Feb 19 19:47:27 crc kubenswrapper[4813]: I0219 19:47:27.806824 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" Feb 19 19:47:27 crc kubenswrapper[4813]: I0219 19:47:27.833805 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-f99dr"] Feb 19 19:47:27 crc kubenswrapper[4813]: I0219 19:47:27.992601 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5qlh\" (UniqueName: \"kubernetes.io/projected/cf1972b0-8fc7-4e43-a622-5458944ae7b0-kube-api-access-q5qlh\") pod \"dnsmasq-dns-54dc9c94cc-f99dr\" (UID: \"cf1972b0-8fc7-4e43-a622-5458944ae7b0\") " pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" Feb 19 19:47:27 crc kubenswrapper[4813]: I0219 19:47:27.992813 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf1972b0-8fc7-4e43-a622-5458944ae7b0-dns-svc\") pod \"dnsmasq-dns-54dc9c94cc-f99dr\" (UID: \"cf1972b0-8fc7-4e43-a622-5458944ae7b0\") " pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" Feb 19 19:47:27 crc kubenswrapper[4813]: I0219 19:47:27.992921 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf1972b0-8fc7-4e43-a622-5458944ae7b0-config\") pod \"dnsmasq-dns-54dc9c94cc-f99dr\" (UID: 
\"cf1972b0-8fc7-4e43-a622-5458944ae7b0\") " pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" Feb 19 19:47:28 crc kubenswrapper[4813]: I0219 19:47:28.094686 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5qlh\" (UniqueName: \"kubernetes.io/projected/cf1972b0-8fc7-4e43-a622-5458944ae7b0-kube-api-access-q5qlh\") pod \"dnsmasq-dns-54dc9c94cc-f99dr\" (UID: \"cf1972b0-8fc7-4e43-a622-5458944ae7b0\") " pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" Feb 19 19:47:28 crc kubenswrapper[4813]: I0219 19:47:28.094788 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf1972b0-8fc7-4e43-a622-5458944ae7b0-dns-svc\") pod \"dnsmasq-dns-54dc9c94cc-f99dr\" (UID: \"cf1972b0-8fc7-4e43-a622-5458944ae7b0\") " pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" Feb 19 19:47:28 crc kubenswrapper[4813]: I0219 19:47:28.094813 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf1972b0-8fc7-4e43-a622-5458944ae7b0-config\") pod \"dnsmasq-dns-54dc9c94cc-f99dr\" (UID: \"cf1972b0-8fc7-4e43-a622-5458944ae7b0\") " pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" Feb 19 19:47:28 crc kubenswrapper[4813]: I0219 19:47:28.095686 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf1972b0-8fc7-4e43-a622-5458944ae7b0-config\") pod \"dnsmasq-dns-54dc9c94cc-f99dr\" (UID: \"cf1972b0-8fc7-4e43-a622-5458944ae7b0\") " pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" Feb 19 19:47:28 crc kubenswrapper[4813]: I0219 19:47:28.095855 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf1972b0-8fc7-4e43-a622-5458944ae7b0-dns-svc\") pod \"dnsmasq-dns-54dc9c94cc-f99dr\" (UID: \"cf1972b0-8fc7-4e43-a622-5458944ae7b0\") " pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" Feb 19 19:47:28 crc 
kubenswrapper[4813]: I0219 19:47:28.113805 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5qlh\" (UniqueName: \"kubernetes.io/projected/cf1972b0-8fc7-4e43-a622-5458944ae7b0-kube-api-access-q5qlh\") pod \"dnsmasq-dns-54dc9c94cc-f99dr\" (UID: \"cf1972b0-8fc7-4e43-a622-5458944ae7b0\") " pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" Feb 19 19:47:28 crc kubenswrapper[4813]: I0219 19:47:28.124032 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" Feb 19 19:47:28 crc kubenswrapper[4813]: I0219 19:47:28.577433 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 19:47:28 crc kubenswrapper[4813]: I0219 19:47:28.646475 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-f99dr"] Feb 19 19:47:29 crc kubenswrapper[4813]: I0219 19:47:29.043249 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" event={"ID":"cf1972b0-8fc7-4e43-a622-5458944ae7b0","Type":"ContainerStarted","Data":"217605aad397b13616f854447bbe3f419441dee703d769000e87e6d926cb1479"} Feb 19 19:47:30 crc kubenswrapper[4813]: I0219 19:47:30.051626 4813 generic.go:334] "Generic (PLEG): container finished" podID="cf1972b0-8fc7-4e43-a622-5458944ae7b0" containerID="b7b346904238edd9593370bf0c108e4eedf9bd5f427850f541ac6ce218df9b46" exitCode=0 Feb 19 19:47:30 crc kubenswrapper[4813]: I0219 19:47:30.051730 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" event={"ID":"cf1972b0-8fc7-4e43-a622-5458944ae7b0","Type":"ContainerDied","Data":"b7b346904238edd9593370bf0c108e4eedf9bd5f427850f541ac6ce218df9b46"} Feb 19 19:47:30 crc kubenswrapper[4813]: I0219 19:47:30.873737 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="700f4c86-1aa6-4cbd-a3d5-680ca2763884" 
containerName="rabbitmq" containerID="cri-o://ca0f1edc6dd9a204dcb7fd2b49a0007c6bacec9a5212dbea2d4bead7c6b17d51" gracePeriod=604798 Feb 19 19:47:31 crc kubenswrapper[4813]: I0219 19:47:31.063318 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" event={"ID":"cf1972b0-8fc7-4e43-a622-5458944ae7b0","Type":"ContainerStarted","Data":"207aff66b25c1b036dc15542a495daa84a391f90838130d362fd41dda6a18a8d"} Feb 19 19:47:31 crc kubenswrapper[4813]: I0219 19:47:31.063789 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" Feb 19 19:47:31 crc kubenswrapper[4813]: I0219 19:47:31.089366 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" podStartSLOduration=4.089336178 podStartE2EDuration="4.089336178s" podCreationTimestamp="2026-02-19 19:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:47:31.082184228 +0000 UTC m=+4670.307624769" watchObservedRunningTime="2026-02-19 19:47:31.089336178 +0000 UTC m=+4670.314776749" Feb 19 19:47:31 crc kubenswrapper[4813]: I0219 19:47:31.509980 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:47:33 crc kubenswrapper[4813]: I0219 19:47:33.300225 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2c1ace90-8e41-4862-8be0-f0500e93b9f3" containerName="rabbitmq" containerID="cri-o://2930b19f1eea20f82d2c9ee05fd4216d4d9ac59c6f2c8df298e0a8ed66f729c7" gracePeriod=604799 Feb 19 19:47:34 crc kubenswrapper[4813]: I0219 19:47:34.454324 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="700f4c86-1aa6-4cbd-a3d5-680ca2763884" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.240:5672: connect: 
connection refused" Feb 19 19:47:34 crc kubenswrapper[4813]: I0219 19:47:34.841701 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2c1ace90-8e41-4862-8be0-f0500e93b9f3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.242:5672: connect: connection refused" Feb 19 19:47:36 crc kubenswrapper[4813]: I0219 19:47:36.471927 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:47:36 crc kubenswrapper[4813]: E0219 19:47:36.472779 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.105101 4813 generic.go:334] "Generic (PLEG): container finished" podID="700f4c86-1aa6-4cbd-a3d5-680ca2763884" containerID="ca0f1edc6dd9a204dcb7fd2b49a0007c6bacec9a5212dbea2d4bead7c6b17d51" exitCode=0 Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.105144 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"700f4c86-1aa6-4cbd-a3d5-680ca2763884","Type":"ContainerDied","Data":"ca0f1edc6dd9a204dcb7fd2b49a0007c6bacec9a5212dbea2d4bead7c6b17d51"} Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.723198 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.899229 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-plugins\") pod \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.899467 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\") pod \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.899512 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/700f4c86-1aa6-4cbd-a3d5-680ca2763884-pod-info\") pod \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.899545 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/700f4c86-1aa6-4cbd-a3d5-680ca2763884-erlang-cookie-secret\") pod \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.899564 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/700f4c86-1aa6-4cbd-a3d5-680ca2763884-server-conf\") pod \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") " Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.899587 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-erlang-cookie\") pod \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") "
Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.899577 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "700f4c86-1aa6-4cbd-a3d5-680ca2763884" (UID: "700f4c86-1aa6-4cbd-a3d5-680ca2763884"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.899621 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/700f4c86-1aa6-4cbd-a3d5-680ca2763884-plugins-conf\") pod \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") "
Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.899650 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z6n5\" (UniqueName: \"kubernetes.io/projected/700f4c86-1aa6-4cbd-a3d5-680ca2763884-kube-api-access-9z6n5\") pod \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") "
Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.899666 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-confd\") pod \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\" (UID: \"700f4c86-1aa6-4cbd-a3d5-680ca2763884\") "
Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.899849 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.900478 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/700f4c86-1aa6-4cbd-a3d5-680ca2763884-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "700f4c86-1aa6-4cbd-a3d5-680ca2763884" (UID: "700f4c86-1aa6-4cbd-a3d5-680ca2763884"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.900916 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "700f4c86-1aa6-4cbd-a3d5-680ca2763884" (UID: "700f4c86-1aa6-4cbd-a3d5-680ca2763884"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.908214 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/700f4c86-1aa6-4cbd-a3d5-680ca2763884-kube-api-access-9z6n5" (OuterVolumeSpecName: "kube-api-access-9z6n5") pod "700f4c86-1aa6-4cbd-a3d5-680ca2763884" (UID: "700f4c86-1aa6-4cbd-a3d5-680ca2763884"). InnerVolumeSpecName "kube-api-access-9z6n5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.908473 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/700f4c86-1aa6-4cbd-a3d5-680ca2763884-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "700f4c86-1aa6-4cbd-a3d5-680ca2763884" (UID: "700f4c86-1aa6-4cbd-a3d5-680ca2763884"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.908596 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/700f4c86-1aa6-4cbd-a3d5-680ca2763884-pod-info" (OuterVolumeSpecName: "pod-info") pod "700f4c86-1aa6-4cbd-a3d5-680ca2763884" (UID: "700f4c86-1aa6-4cbd-a3d5-680ca2763884"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.911240 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f" (OuterVolumeSpecName: "persistence") pod "700f4c86-1aa6-4cbd-a3d5-680ca2763884" (UID: "700f4c86-1aa6-4cbd-a3d5-680ca2763884"). InnerVolumeSpecName "pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 19 19:47:37 crc kubenswrapper[4813]: I0219 19:47:37.925306 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/700f4c86-1aa6-4cbd-a3d5-680ca2763884-server-conf" (OuterVolumeSpecName: "server-conf") pod "700f4c86-1aa6-4cbd-a3d5-680ca2763884" (UID: "700f4c86-1aa6-4cbd-a3d5-680ca2763884"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.001281 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\") on node \"crc\" "
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.001518 4813 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/700f4c86-1aa6-4cbd-a3d5-680ca2763884-pod-info\") on node \"crc\" DevicePath \"\""
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.001584 4813 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/700f4c86-1aa6-4cbd-a3d5-680ca2763884-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.001647 4813 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/700f4c86-1aa6-4cbd-a3d5-680ca2763884-server-conf\") on node \"crc\" DevicePath \"\""
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.001710 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.001774 4813 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/700f4c86-1aa6-4cbd-a3d5-680ca2763884-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.001829 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z6n5\" (UniqueName: \"kubernetes.io/projected/700f4c86-1aa6-4cbd-a3d5-680ca2763884-kube-api-access-9z6n5\") on node \"crc\" DevicePath \"\""
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.028381 4813 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.028553 4813 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f") on node "crc"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.062769 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "700f4c86-1aa6-4cbd-a3d5-680ca2763884" (UID: "700f4c86-1aa6-4cbd-a3d5-680ca2763884"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.102926 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/700f4c86-1aa6-4cbd-a3d5-680ca2763884-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.103022 4813 reconciler_common.go:293] "Volume detached for volume \"pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\") on node \"crc\" DevicePath \"\""
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.112353 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"700f4c86-1aa6-4cbd-a3d5-680ca2763884","Type":"ContainerDied","Data":"7b78395efe9f7c6b234d1599729215fb9d51b35bbd9c60939f791950b5b22815"}
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.112412 4813 scope.go:117] "RemoveContainer" containerID="ca0f1edc6dd9a204dcb7fd2b49a0007c6bacec9a5212dbea2d4bead7c6b17d51"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.112420 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.127452 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.149454 4813 scope.go:117] "RemoveContainer" containerID="e2caa54dad2ab49a1d0d6047668227d2f980c33b38677d07297fda75932c603e"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.173535 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.182343 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.218620 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 19:47:38 crc kubenswrapper[4813]: E0219 19:47:38.219042 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700f4c86-1aa6-4cbd-a3d5-680ca2763884" containerName="rabbitmq"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.219067 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="700f4c86-1aa6-4cbd-a3d5-680ca2763884" containerName="rabbitmq"
Feb 19 19:47:38 crc kubenswrapper[4813]: E0219 19:47:38.219083 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700f4c86-1aa6-4cbd-a3d5-680ca2763884" containerName="setup-container"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.219090 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="700f4c86-1aa6-4cbd-a3d5-680ca2763884" containerName="setup-container"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.219269 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="700f4c86-1aa6-4cbd-a3d5-680ca2763884" containerName="rabbitmq"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.220222 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.223167 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bpgnz"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.223414 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.223577 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.223736 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.223916 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.231759 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-dp55n"]
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.232307 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-589cf688cc-dp55n" podUID="fa1478d6-53a6-489d-a7a2-e47fee0aec60" containerName="dnsmasq-dns" containerID="cri-o://0aba1baea714c527b93bf1b842ef22a837ff08e6a1934ae1dc887f530d1a1711" gracePeriod=10
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.237202 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.411886 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.411977 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.412150 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.412215 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx9gr\" (UniqueName: \"kubernetes.io/projected/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-kube-api-access-zx9gr\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.412252 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.412273 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.412400 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.412450 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.412510 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.513918 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.513991 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.514019 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx9gr\" (UniqueName: \"kubernetes.io/projected/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-kube-api-access-zx9gr\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.514045 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.514061 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.514101 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.514134 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.514150 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.514189 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.514395 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.515143 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.515321 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.515835 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.517365 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.517412 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/befe2f2c7864de6777c17bbf4c0bd7318501f5434b28fa16494354dec037595d/globalmount\"" pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.524669 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.524784 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.524809 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.533208 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx9gr\" (UniqueName: \"kubernetes.io/projected/bb0aa614-4f8e-403d-bb8b-2c472cce87e3-kube-api-access-zx9gr\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.552917 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b58a90f-858f-45af-a26b-7dc89dea9f8f\") pod \"rabbitmq-server-0\" (UID: \"bb0aa614-4f8e-403d-bb8b-2c472cce87e3\") " pod="openstack/rabbitmq-server-0"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.592304 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589cf688cc-dp55n"
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.716695 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa1478d6-53a6-489d-a7a2-e47fee0aec60-config\") pod \"fa1478d6-53a6-489d-a7a2-e47fee0aec60\" (UID: \"fa1478d6-53a6-489d-a7a2-e47fee0aec60\") "
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.716739 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa1478d6-53a6-489d-a7a2-e47fee0aec60-dns-svc\") pod \"fa1478d6-53a6-489d-a7a2-e47fee0aec60\" (UID: \"fa1478d6-53a6-489d-a7a2-e47fee0aec60\") "
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.716844 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpzgm\" (UniqueName: \"kubernetes.io/projected/fa1478d6-53a6-489d-a7a2-e47fee0aec60-kube-api-access-rpzgm\") pod \"fa1478d6-53a6-489d-a7a2-e47fee0aec60\" (UID: \"fa1478d6-53a6-489d-a7a2-e47fee0aec60\") "
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.719615 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa1478d6-53a6-489d-a7a2-e47fee0aec60-kube-api-access-rpzgm" (OuterVolumeSpecName: "kube-api-access-rpzgm") pod "fa1478d6-53a6-489d-a7a2-e47fee0aec60" (UID: "fa1478d6-53a6-489d-a7a2-e47fee0aec60"). InnerVolumeSpecName "kube-api-access-rpzgm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.744486 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa1478d6-53a6-489d-a7a2-e47fee0aec60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa1478d6-53a6-489d-a7a2-e47fee0aec60" (UID: "fa1478d6-53a6-489d-a7a2-e47fee0aec60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.818513 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa1478d6-53a6-489d-a7a2-e47fee0aec60-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.818551 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpzgm\" (UniqueName: \"kubernetes.io/projected/fa1478d6-53a6-489d-a7a2-e47fee0aec60-kube-api-access-rpzgm\") on node \"crc\" DevicePath \"\""
Feb 19 19:47:38 crc kubenswrapper[4813]: I0219 19:47:38.838130 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.127378 4813 generic.go:334] "Generic (PLEG): container finished" podID="fa1478d6-53a6-489d-a7a2-e47fee0aec60" containerID="0aba1baea714c527b93bf1b842ef22a837ff08e6a1934ae1dc887f530d1a1711" exitCode=0
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.127433 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-dp55n" event={"ID":"fa1478d6-53a6-489d-a7a2-e47fee0aec60","Type":"ContainerDied","Data":"0aba1baea714c527b93bf1b842ef22a837ff08e6a1934ae1dc887f530d1a1711"}
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.127467 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589cf688cc-dp55n" event={"ID":"fa1478d6-53a6-489d-a7a2-e47fee0aec60","Type":"ContainerDied","Data":"1486ca27ca9ff71e640f473e436b82c3b9c7e9952fba33311ab1f80fb8e97f8b"}
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.127491 4813 scope.go:117] "RemoveContainer" containerID="0aba1baea714c527b93bf1b842ef22a837ff08e6a1934ae1dc887f530d1a1711"
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.127609 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589cf688cc-dp55n"
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.148714 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa1478d6-53a6-489d-a7a2-e47fee0aec60-config" (OuterVolumeSpecName: "config") pod "fa1478d6-53a6-489d-a7a2-e47fee0aec60" (UID: "fa1478d6-53a6-489d-a7a2-e47fee0aec60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.224638 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa1478d6-53a6-489d-a7a2-e47fee0aec60-config\") on node \"crc\" DevicePath \"\""
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.336170 4813 scope.go:117] "RemoveContainer" containerID="c0e57144c55b158b776db898c268da97b285326bd46910dc610318ae92111e98"
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.351236 4813 scope.go:117] "RemoveContainer" containerID="0aba1baea714c527b93bf1b842ef22a837ff08e6a1934ae1dc887f530d1a1711"
Feb 19 19:47:39 crc kubenswrapper[4813]: E0219 19:47:39.351595 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aba1baea714c527b93bf1b842ef22a837ff08e6a1934ae1dc887f530d1a1711\": container with ID starting with 0aba1baea714c527b93bf1b842ef22a837ff08e6a1934ae1dc887f530d1a1711 not found: ID does not exist" containerID="0aba1baea714c527b93bf1b842ef22a837ff08e6a1934ae1dc887f530d1a1711"
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.351630 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aba1baea714c527b93bf1b842ef22a837ff08e6a1934ae1dc887f530d1a1711"} err="failed to get container status \"0aba1baea714c527b93bf1b842ef22a837ff08e6a1934ae1dc887f530d1a1711\": rpc error: code = NotFound desc = could not find container \"0aba1baea714c527b93bf1b842ef22a837ff08e6a1934ae1dc887f530d1a1711\": container with ID starting with 0aba1baea714c527b93bf1b842ef22a837ff08e6a1934ae1dc887f530d1a1711 not found: ID does not exist"
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.351656 4813 scope.go:117] "RemoveContainer" containerID="c0e57144c55b158b776db898c268da97b285326bd46910dc610318ae92111e98"
Feb 19 19:47:39 crc kubenswrapper[4813]: E0219 19:47:39.351941 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0e57144c55b158b776db898c268da97b285326bd46910dc610318ae92111e98\": container with ID starting with c0e57144c55b158b776db898c268da97b285326bd46910dc610318ae92111e98 not found: ID does not exist" containerID="c0e57144c55b158b776db898c268da97b285326bd46910dc610318ae92111e98"
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.351986 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e57144c55b158b776db898c268da97b285326bd46910dc610318ae92111e98"} err="failed to get container status \"c0e57144c55b158b776db898c268da97b285326bd46910dc610318ae92111e98\": rpc error: code = NotFound desc = could not find container \"c0e57144c55b158b776db898c268da97b285326bd46910dc610318ae92111e98\": container with ID starting with c0e57144c55b158b776db898c268da97b285326bd46910dc610318ae92111e98 not found: ID does not exist"
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.481564 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="700f4c86-1aa6-4cbd-a3d5-680ca2763884" path="/var/lib/kubelet/pods/700f4c86-1aa6-4cbd-a3d5-680ca2763884/volumes"
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.499188 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-dp55n"]
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.504814 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-589cf688cc-dp55n"]
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.561773 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 19:47:39 crc kubenswrapper[4813]: I0219 19:47:39.858781 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.036091 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2c1ace90-8e41-4862-8be0-f0500e93b9f3-server-conf\") pod \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") "
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.036130 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2c1ace90-8e41-4862-8be0-f0500e93b9f3-pod-info\") pod \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") "
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.036153 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-confd\") pod \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") "
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.036192 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-erlang-cookie\") pod \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") "
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.036225 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jdxd\" (UniqueName: \"kubernetes.io/projected/2c1ace90-8e41-4862-8be0-f0500e93b9f3-kube-api-access-2jdxd\") pod \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") "
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.036245 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2c1ace90-8e41-4862-8be0-f0500e93b9f3-plugins-conf\") pod \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") "
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.036297 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2c1ace90-8e41-4862-8be0-f0500e93b9f3-erlang-cookie-secret\") pod \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") "
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.036325 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-plugins\") pod \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") "
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.036458 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\") pod \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\" (UID: \"2c1ace90-8e41-4862-8be0-f0500e93b9f3\") "
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.037393 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c1ace90-8e41-4862-8be0-f0500e93b9f3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2c1ace90-8e41-4862-8be0-f0500e93b9f3" (UID: "2c1ace90-8e41-4862-8be0-f0500e93b9f3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.037990 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2c1ace90-8e41-4862-8be0-f0500e93b9f3" (UID: "2c1ace90-8e41-4862-8be0-f0500e93b9f3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.039634 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c1ace90-8e41-4862-8be0-f0500e93b9f3-kube-api-access-2jdxd" (OuterVolumeSpecName: "kube-api-access-2jdxd") pod "2c1ace90-8e41-4862-8be0-f0500e93b9f3" (UID: "2c1ace90-8e41-4862-8be0-f0500e93b9f3"). InnerVolumeSpecName "kube-api-access-2jdxd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.039841 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c1ace90-8e41-4862-8be0-f0500e93b9f3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2c1ace90-8e41-4862-8be0-f0500e93b9f3" (UID: "2c1ace90-8e41-4862-8be0-f0500e93b9f3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.040175 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2c1ace90-8e41-4862-8be0-f0500e93b9f3" (UID: "2c1ace90-8e41-4862-8be0-f0500e93b9f3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.042159 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2c1ace90-8e41-4862-8be0-f0500e93b9f3-pod-info" (OuterVolumeSpecName: "pod-info") pod "2c1ace90-8e41-4862-8be0-f0500e93b9f3" (UID: "2c1ace90-8e41-4862-8be0-f0500e93b9f3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.054037 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5" (OuterVolumeSpecName: "persistence") pod "2c1ace90-8e41-4862-8be0-f0500e93b9f3" (UID: "2c1ace90-8e41-4862-8be0-f0500e93b9f3"). InnerVolumeSpecName "pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.055472 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c1ace90-8e41-4862-8be0-f0500e93b9f3-server-conf" (OuterVolumeSpecName: "server-conf") pod "2c1ace90-8e41-4862-8be0-f0500e93b9f3" (UID: "2c1ace90-8e41-4862-8be0-f0500e93b9f3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.109115 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2c1ace90-8e41-4862-8be0-f0500e93b9f3" (UID: "2c1ace90-8e41-4862-8be0-f0500e93b9f3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.133636 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb0aa614-4f8e-403d-bb8b-2c472cce87e3","Type":"ContainerStarted","Data":"5c798817d9aaee009d752859b06d758c7d7e764d9c962ba13ce3944f2a3c7fb0"}
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.134758 4813 generic.go:334] "Generic (PLEG): container finished" podID="2c1ace90-8e41-4862-8be0-f0500e93b9f3" containerID="2930b19f1eea20f82d2c9ee05fd4216d4d9ac59c6f2c8df298e0a8ed66f729c7" exitCode=0
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.134791 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2c1ace90-8e41-4862-8be0-f0500e93b9f3","Type":"ContainerDied","Data":"2930b19f1eea20f82d2c9ee05fd4216d4d9ac59c6f2c8df298e0a8ed66f729c7"}
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.134811 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2c1ace90-8e41-4862-8be0-f0500e93b9f3","Type":"ContainerDied","Data":"d2c0beb5aee6f9ff6aebee5823fa80d234026ad52414821b6edffb69ccb5869b"}
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.134825 4813 scope.go:117] "RemoveContainer" containerID="2930b19f1eea20f82d2c9ee05fd4216d4d9ac59c6f2c8df298e0a8ed66f729c7"
Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.134907 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.137771 4813 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2c1ace90-8e41-4862-8be0-f0500e93b9f3-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.137808 4813 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2c1ace90-8e41-4862-8be0-f0500e93b9f3-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.137819 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.137831 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.137842 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jdxd\" (UniqueName: \"kubernetes.io/projected/2c1ace90-8e41-4862-8be0-f0500e93b9f3-kube-api-access-2jdxd\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.137852 4813 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2c1ace90-8e41-4862-8be0-f0500e93b9f3-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.137862 4813 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2c1ace90-8e41-4862-8be0-f0500e93b9f3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:40 crc 
kubenswrapper[4813]: I0219 19:47:40.137874 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2c1ace90-8e41-4862-8be0-f0500e93b9f3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.137906 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\") on node \"crc\" " Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.154501 4813 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.154669 4813 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5") on node "crc" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.239090 4813 reconciler_common.go:293] "Volume detached for volume \"pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.276626 4813 scope.go:117] "RemoveContainer" containerID="39857af26eb6def16d85706c5b1aca2cf737070b400a1b7b4f0e2b3fccf23154" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.280215 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.286174 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.302253 4813 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:47:40 crc kubenswrapper[4813]: E0219 19:47:40.302630 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c1ace90-8e41-4862-8be0-f0500e93b9f3" containerName="rabbitmq" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.302652 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1ace90-8e41-4862-8be0-f0500e93b9f3" containerName="rabbitmq" Feb 19 19:47:40 crc kubenswrapper[4813]: E0219 19:47:40.302680 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa1478d6-53a6-489d-a7a2-e47fee0aec60" containerName="dnsmasq-dns" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.302689 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa1478d6-53a6-489d-a7a2-e47fee0aec60" containerName="dnsmasq-dns" Feb 19 19:47:40 crc kubenswrapper[4813]: E0219 19:47:40.302704 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa1478d6-53a6-489d-a7a2-e47fee0aec60" containerName="init" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.302713 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa1478d6-53a6-489d-a7a2-e47fee0aec60" containerName="init" Feb 19 19:47:40 crc kubenswrapper[4813]: E0219 19:47:40.302726 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c1ace90-8e41-4862-8be0-f0500e93b9f3" containerName="setup-container" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.302734 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1ace90-8e41-4862-8be0-f0500e93b9f3" containerName="setup-container" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.302939 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa1478d6-53a6-489d-a7a2-e47fee0aec60" containerName="dnsmasq-dns" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.303075 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c1ace90-8e41-4862-8be0-f0500e93b9f3" containerName="rabbitmq" Feb 19 19:47:40 
crc kubenswrapper[4813]: I0219 19:47:40.304361 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.307841 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.308057 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.309214 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.309724 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.310109 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-88bpc" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.344333 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.368151 4813 scope.go:117] "RemoveContainer" containerID="2930b19f1eea20f82d2c9ee05fd4216d4d9ac59c6f2c8df298e0a8ed66f729c7" Feb 19 19:47:40 crc kubenswrapper[4813]: E0219 19:47:40.368631 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2930b19f1eea20f82d2c9ee05fd4216d4d9ac59c6f2c8df298e0a8ed66f729c7\": container with ID starting with 2930b19f1eea20f82d2c9ee05fd4216d4d9ac59c6f2c8df298e0a8ed66f729c7 not found: ID does not exist" containerID="2930b19f1eea20f82d2c9ee05fd4216d4d9ac59c6f2c8df298e0a8ed66f729c7" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.368786 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2930b19f1eea20f82d2c9ee05fd4216d4d9ac59c6f2c8df298e0a8ed66f729c7"} err="failed to get container status \"2930b19f1eea20f82d2c9ee05fd4216d4d9ac59c6f2c8df298e0a8ed66f729c7\": rpc error: code = NotFound desc = could not find container \"2930b19f1eea20f82d2c9ee05fd4216d4d9ac59c6f2c8df298e0a8ed66f729c7\": container with ID starting with 2930b19f1eea20f82d2c9ee05fd4216d4d9ac59c6f2c8df298e0a8ed66f729c7 not found: ID does not exist" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.368828 4813 scope.go:117] "RemoveContainer" containerID="39857af26eb6def16d85706c5b1aca2cf737070b400a1b7b4f0e2b3fccf23154" Feb 19 19:47:40 crc kubenswrapper[4813]: E0219 19:47:40.369301 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39857af26eb6def16d85706c5b1aca2cf737070b400a1b7b4f0e2b3fccf23154\": container with ID starting with 39857af26eb6def16d85706c5b1aca2cf737070b400a1b7b4f0e2b3fccf23154 not found: ID does not exist" containerID="39857af26eb6def16d85706c5b1aca2cf737070b400a1b7b4f0e2b3fccf23154" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.369327 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39857af26eb6def16d85706c5b1aca2cf737070b400a1b7b4f0e2b3fccf23154"} err="failed to get container status \"39857af26eb6def16d85706c5b1aca2cf737070b400a1b7b4f0e2b3fccf23154\": rpc error: code = NotFound desc = could not find container \"39857af26eb6def16d85706c5b1aca2cf737070b400a1b7b4f0e2b3fccf23154\": container with ID starting with 39857af26eb6def16d85706c5b1aca2cf737070b400a1b7b4f0e2b3fccf23154 not found: ID does not exist" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.444593 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cwx4\" (UniqueName: \"kubernetes.io/projected/1011827d-556c-4cd2-8189-c6151a791a71-kube-api-access-8cwx4\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.444641 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1011827d-556c-4cd2-8189-c6151a791a71-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.444661 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1011827d-556c-4cd2-8189-c6151a791a71-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.444715 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1011827d-556c-4cd2-8189-c6151a791a71-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.444737 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.444757 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/1011827d-556c-4cd2-8189-c6151a791a71-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.444773 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1011827d-556c-4cd2-8189-c6151a791a71-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.444787 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1011827d-556c-4cd2-8189-c6151a791a71-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.444803 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1011827d-556c-4cd2-8189-c6151a791a71-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.546839 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1011827d-556c-4cd2-8189-c6151a791a71-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.546905 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.546941 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1011827d-556c-4cd2-8189-c6151a791a71-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.546979 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1011827d-556c-4cd2-8189-c6151a791a71-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.547002 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1011827d-556c-4cd2-8189-c6151a791a71-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.547023 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1011827d-556c-4cd2-8189-c6151a791a71-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.547083 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cwx4\" (UniqueName: 
\"kubernetes.io/projected/1011827d-556c-4cd2-8189-c6151a791a71-kube-api-access-8cwx4\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.547114 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1011827d-556c-4cd2-8189-c6151a791a71-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.547141 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1011827d-556c-4cd2-8189-c6151a791a71-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.548828 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1011827d-556c-4cd2-8189-c6151a791a71-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.548851 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1011827d-556c-4cd2-8189-c6151a791a71-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.549861 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1011827d-556c-4cd2-8189-c6151a791a71-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.553730 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1011827d-556c-4cd2-8189-c6151a791a71-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.554903 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.554943 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d0202a2aad552b9f63e1c75a8cf8bbd022754947a395994a4047c05a28b21699/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.555340 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1011827d-556c-4cd2-8189-c6151a791a71-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.557009 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1011827d-556c-4cd2-8189-c6151a791a71-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.560158 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1011827d-556c-4cd2-8189-c6151a791a71-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.566622 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cwx4\" (UniqueName: \"kubernetes.io/projected/1011827d-556c-4cd2-8189-c6151a791a71-kube-api-access-8cwx4\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.588135 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c842c9ea-15e2-450e-a45c-aec0ba91cae5\") pod \"rabbitmq-cell1-server-0\" (UID: \"1011827d-556c-4cd2-8189-c6151a791a71\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:40 crc kubenswrapper[4813]: I0219 19:47:40.664052 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:47:41 crc kubenswrapper[4813]: I0219 19:47:41.100056 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:47:41 crc kubenswrapper[4813]: W0219 19:47:41.108532 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1011827d_556c_4cd2_8189_c6151a791a71.slice/crio-18377beaea432819943e78948129035827636b0c8afe9f09f135cdc4c8de910b WatchSource:0}: Error finding container 18377beaea432819943e78948129035827636b0c8afe9f09f135cdc4c8de910b: Status 404 returned error can't find the container with id 18377beaea432819943e78948129035827636b0c8afe9f09f135cdc4c8de910b Feb 19 19:47:41 crc kubenswrapper[4813]: I0219 19:47:41.143801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1011827d-556c-4cd2-8189-c6151a791a71","Type":"ContainerStarted","Data":"18377beaea432819943e78948129035827636b0c8afe9f09f135cdc4c8de910b"} Feb 19 19:47:41 crc kubenswrapper[4813]: I0219 19:47:41.145759 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb0aa614-4f8e-403d-bb8b-2c472cce87e3","Type":"ContainerStarted","Data":"463e1b9a715b786760443d297e8b37f7288f7d85dc9a3ed443d1c89adcf5102b"} Feb 19 19:47:41 crc kubenswrapper[4813]: I0219 19:47:41.493233 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c1ace90-8e41-4862-8be0-f0500e93b9f3" path="/var/lib/kubelet/pods/2c1ace90-8e41-4862-8be0-f0500e93b9f3/volumes" Feb 19 19:47:41 crc kubenswrapper[4813]: I0219 19:47:41.495494 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa1478d6-53a6-489d-a7a2-e47fee0aec60" path="/var/lib/kubelet/pods/fa1478d6-53a6-489d-a7a2-e47fee0aec60/volumes" Feb 19 19:47:43 crc kubenswrapper[4813]: I0219 19:47:43.163273 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1011827d-556c-4cd2-8189-c6151a791a71","Type":"ContainerStarted","Data":"d2da866328e61a1de9cf869fed8aed8d45b58d0d4c219c667de951006a84f94e"} Feb 19 19:47:43 crc kubenswrapper[4813]: I0219 19:47:43.573517 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-589cf688cc-dp55n" podUID="fa1478d6-53a6-489d-a7a2-e47fee0aec60" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.239:5353: i/o timeout" Feb 19 19:47:47 crc kubenswrapper[4813]: I0219 19:47:47.471287 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:47:47 crc kubenswrapper[4813]: E0219 19:47:47.471796 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:48:00 crc kubenswrapper[4813]: I0219 19:48:00.471437 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:48:01 crc kubenswrapper[4813]: I0219 19:48:01.557891 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"be38fd9999c563e4734db89d3c18752f33e9f41c50b92368d55ff07e1436c0c2"} Feb 19 19:48:13 crc kubenswrapper[4813]: I0219 19:48:13.650128 4813 generic.go:334] "Generic (PLEG): container finished" podID="bb0aa614-4f8e-403d-bb8b-2c472cce87e3" containerID="463e1b9a715b786760443d297e8b37f7288f7d85dc9a3ed443d1c89adcf5102b" exitCode=0 Feb 19 19:48:13 crc kubenswrapper[4813]: I0219 
19:48:13.650209 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb0aa614-4f8e-403d-bb8b-2c472cce87e3","Type":"ContainerDied","Data":"463e1b9a715b786760443d297e8b37f7288f7d85dc9a3ed443d1c89adcf5102b"} Feb 19 19:48:14 crc kubenswrapper[4813]: I0219 19:48:14.658899 4813 generic.go:334] "Generic (PLEG): container finished" podID="1011827d-556c-4cd2-8189-c6151a791a71" containerID="d2da866328e61a1de9cf869fed8aed8d45b58d0d4c219c667de951006a84f94e" exitCode=0 Feb 19 19:48:14 crc kubenswrapper[4813]: I0219 19:48:14.658985 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1011827d-556c-4cd2-8189-c6151a791a71","Type":"ContainerDied","Data":"d2da866328e61a1de9cf869fed8aed8d45b58d0d4c219c667de951006a84f94e"} Feb 19 19:48:14 crc kubenswrapper[4813]: I0219 19:48:14.661461 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb0aa614-4f8e-403d-bb8b-2c472cce87e3","Type":"ContainerStarted","Data":"bb71bd94d0a15fa9cd6ad3d0910073fcde75ecc2f96eb86b2ee3ebca89013a62"} Feb 19 19:48:14 crc kubenswrapper[4813]: I0219 19:48:14.661690 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4813]: I0219 19:48:14.709846 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.70982553 podStartE2EDuration="36.70982553s" podCreationTimestamp="2026-02-19 19:47:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:48:14.706640312 +0000 UTC m=+4713.932080853" watchObservedRunningTime="2026-02-19 19:48:14.70982553 +0000 UTC m=+4713.935266071" Feb 19 19:48:15 crc kubenswrapper[4813]: I0219 19:48:15.670390 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1011827d-556c-4cd2-8189-c6151a791a71","Type":"ContainerStarted","Data":"7832f39d75ddd43c8539b4f84a9da524fae3337e6954940118304737f6796f31"} Feb 19 19:48:15 crc kubenswrapper[4813]: I0219 19:48:15.670989 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:48:15 crc kubenswrapper[4813]: I0219 19:48:15.698367 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.698345883 podStartE2EDuration="35.698345883s" podCreationTimestamp="2026-02-19 19:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:48:15.69337309 +0000 UTC m=+4714.918813651" watchObservedRunningTime="2026-02-19 19:48:15.698345883 +0000 UTC m=+4714.923786424" Feb 19 19:48:28 crc kubenswrapper[4813]: I0219 19:48:28.842045 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 19:48:30 crc kubenswrapper[4813]: I0219 19:48:30.668154 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:48:38 crc kubenswrapper[4813]: I0219 19:48:38.743372 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 19 19:48:38 crc kubenswrapper[4813]: I0219 19:48:38.745082 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 19:48:38 crc kubenswrapper[4813]: I0219 19:48:38.747501 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-87mg8" Feb 19 19:48:38 crc kubenswrapper[4813]: I0219 19:48:38.751133 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvlj6\" (UniqueName: \"kubernetes.io/projected/03352d3c-6295-43f9-846b-d76ffc34f69b-kube-api-access-bvlj6\") pod \"mariadb-client\" (UID: \"03352d3c-6295-43f9-846b-d76ffc34f69b\") " pod="openstack/mariadb-client" Feb 19 19:48:38 crc kubenswrapper[4813]: I0219 19:48:38.753749 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 19:48:38 crc kubenswrapper[4813]: I0219 19:48:38.852818 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvlj6\" (UniqueName: \"kubernetes.io/projected/03352d3c-6295-43f9-846b-d76ffc34f69b-kube-api-access-bvlj6\") pod \"mariadb-client\" (UID: \"03352d3c-6295-43f9-846b-d76ffc34f69b\") " pod="openstack/mariadb-client" Feb 19 19:48:38 crc kubenswrapper[4813]: I0219 19:48:38.872904 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvlj6\" (UniqueName: \"kubernetes.io/projected/03352d3c-6295-43f9-846b-d76ffc34f69b-kube-api-access-bvlj6\") pod \"mariadb-client\" (UID: \"03352d3c-6295-43f9-846b-d76ffc34f69b\") " pod="openstack/mariadb-client" Feb 19 19:48:39 crc kubenswrapper[4813]: I0219 19:48:39.070402 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 19:48:39 crc kubenswrapper[4813]: I0219 19:48:39.578620 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 19:48:39 crc kubenswrapper[4813]: I0219 19:48:39.849634 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"03352d3c-6295-43f9-846b-d76ffc34f69b","Type":"ContainerStarted","Data":"02b75ceda423dbe03f99b4d2fbbf99a3276228b496aba731d8285c56085cf9f2"} Feb 19 19:48:40 crc kubenswrapper[4813]: I0219 19:48:40.856143 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"03352d3c-6295-43f9-846b-d76ffc34f69b","Type":"ContainerStarted","Data":"0899275fcbbfd746f6e378318e518d8d44a2f0d0e1f30f5eb9d1d14a0c60af22"} Feb 19 19:48:44 crc kubenswrapper[4813]: I0219 19:48:44.784312 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=6.341552158 podStartE2EDuration="6.784288916s" podCreationTimestamp="2026-02-19 19:48:38 +0000 UTC" firstStartedPulling="2026-02-19 19:48:39.583090706 +0000 UTC m=+4738.808531247" lastFinishedPulling="2026-02-19 19:48:40.025827474 +0000 UTC m=+4739.251268005" observedRunningTime="2026-02-19 19:48:40.870029402 +0000 UTC m=+4740.095469963" watchObservedRunningTime="2026-02-19 19:48:44.784288916 +0000 UTC m=+4744.009729457" Feb 19 19:48:44 crc kubenswrapper[4813]: I0219 19:48:44.792448 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wffd8"] Feb 19 19:48:44 crc kubenswrapper[4813]: I0219 19:48:44.794540 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:48:44 crc kubenswrapper[4813]: I0219 19:48:44.806495 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wffd8"] Feb 19 19:48:44 crc kubenswrapper[4813]: I0219 19:48:44.952962 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914be9d7-de4c-4da1-b91b-04682687c033-utilities\") pod \"community-operators-wffd8\" (UID: \"914be9d7-de4c-4da1-b91b-04682687c033\") " pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:48:44 crc kubenswrapper[4813]: I0219 19:48:44.953088 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914be9d7-de4c-4da1-b91b-04682687c033-catalog-content\") pod \"community-operators-wffd8\" (UID: \"914be9d7-de4c-4da1-b91b-04682687c033\") " pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:48:44 crc kubenswrapper[4813]: I0219 19:48:44.953130 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxmgc\" (UniqueName: \"kubernetes.io/projected/914be9d7-de4c-4da1-b91b-04682687c033-kube-api-access-bxmgc\") pod \"community-operators-wffd8\" (UID: \"914be9d7-de4c-4da1-b91b-04682687c033\") " pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:48:45 crc kubenswrapper[4813]: I0219 19:48:45.055142 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914be9d7-de4c-4da1-b91b-04682687c033-utilities\") pod \"community-operators-wffd8\" (UID: \"914be9d7-de4c-4da1-b91b-04682687c033\") " pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:48:45 crc kubenswrapper[4813]: I0219 19:48:45.055280 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914be9d7-de4c-4da1-b91b-04682687c033-catalog-content\") pod \"community-operators-wffd8\" (UID: \"914be9d7-de4c-4da1-b91b-04682687c033\") " pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:48:45 crc kubenswrapper[4813]: I0219 19:48:45.055319 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxmgc\" (UniqueName: \"kubernetes.io/projected/914be9d7-de4c-4da1-b91b-04682687c033-kube-api-access-bxmgc\") pod \"community-operators-wffd8\" (UID: \"914be9d7-de4c-4da1-b91b-04682687c033\") " pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:48:45 crc kubenswrapper[4813]: I0219 19:48:45.055651 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914be9d7-de4c-4da1-b91b-04682687c033-utilities\") pod \"community-operators-wffd8\" (UID: \"914be9d7-de4c-4da1-b91b-04682687c033\") " pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:48:45 crc kubenswrapper[4813]: I0219 19:48:45.055868 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914be9d7-de4c-4da1-b91b-04682687c033-catalog-content\") pod \"community-operators-wffd8\" (UID: \"914be9d7-de4c-4da1-b91b-04682687c033\") " pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:48:45 crc kubenswrapper[4813]: I0219 19:48:45.086812 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxmgc\" (UniqueName: \"kubernetes.io/projected/914be9d7-de4c-4da1-b91b-04682687c033-kube-api-access-bxmgc\") pod \"community-operators-wffd8\" (UID: \"914be9d7-de4c-4da1-b91b-04682687c033\") " pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:48:45 crc kubenswrapper[4813]: I0219 19:48:45.143309 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:48:45 crc kubenswrapper[4813]: I0219 19:48:45.640672 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wffd8"] Feb 19 19:48:45 crc kubenswrapper[4813]: I0219 19:48:45.904083 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wffd8" event={"ID":"914be9d7-de4c-4da1-b91b-04682687c033","Type":"ContainerStarted","Data":"1c87b9df1fd82e8a34d7ac4800cf9caf30b0c05c8391179ed75c94a5a1a22632"} Feb 19 19:48:46 crc kubenswrapper[4813]: I0219 19:48:46.913060 4813 generic.go:334] "Generic (PLEG): container finished" podID="914be9d7-de4c-4da1-b91b-04682687c033" containerID="23458778d64dafc3008533ccedc471265809d86c2c7fcdd569468675fef4e8eb" exitCode=0 Feb 19 19:48:46 crc kubenswrapper[4813]: I0219 19:48:46.913126 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wffd8" event={"ID":"914be9d7-de4c-4da1-b91b-04682687c033","Type":"ContainerDied","Data":"23458778d64dafc3008533ccedc471265809d86c2c7fcdd569468675fef4e8eb"} Feb 19 19:48:49 crc kubenswrapper[4813]: I0219 19:48:49.943016 4813 generic.go:334] "Generic (PLEG): container finished" podID="914be9d7-de4c-4da1-b91b-04682687c033" containerID="ce06e8e84100428f8171885e4fbe027c141576bd4546bd5635d5246810480684" exitCode=0 Feb 19 19:48:49 crc kubenswrapper[4813]: I0219 19:48:49.943150 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wffd8" event={"ID":"914be9d7-de4c-4da1-b91b-04682687c033","Type":"ContainerDied","Data":"ce06e8e84100428f8171885e4fbe027c141576bd4546bd5635d5246810480684"} Feb 19 19:48:52 crc kubenswrapper[4813]: I0219 19:48:52.966802 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wffd8" 
event={"ID":"914be9d7-de4c-4da1-b91b-04682687c033","Type":"ContainerStarted","Data":"a2e314a2c1dc7423614023cb52e4a501a3bfb2dc15684d3fdb6214731085c04d"} Feb 19 19:48:52 crc kubenswrapper[4813]: I0219 19:48:52.988535 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wffd8" podStartSLOduration=3.788132923 podStartE2EDuration="8.988519447s" podCreationTimestamp="2026-02-19 19:48:44 +0000 UTC" firstStartedPulling="2026-02-19 19:48:46.915045227 +0000 UTC m=+4746.140485808" lastFinishedPulling="2026-02-19 19:48:52.115431791 +0000 UTC m=+4751.340872332" observedRunningTime="2026-02-19 19:48:52.984931647 +0000 UTC m=+4752.210372188" watchObservedRunningTime="2026-02-19 19:48:52.988519447 +0000 UTC m=+4752.213959988" Feb 19 19:48:54 crc kubenswrapper[4813]: I0219 19:48:54.423219 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 19 19:48:54 crc kubenswrapper[4813]: I0219 19:48:54.424844 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="03352d3c-6295-43f9-846b-d76ffc34f69b" containerName="mariadb-client" containerID="cri-o://0899275fcbbfd746f6e378318e518d8d44a2f0d0e1f30f5eb9d1d14a0c60af22" gracePeriod=30 Feb 19 19:48:54 crc kubenswrapper[4813]: I0219 19:48:54.980178 4813 generic.go:334] "Generic (PLEG): container finished" podID="03352d3c-6295-43f9-846b-d76ffc34f69b" containerID="0899275fcbbfd746f6e378318e518d8d44a2f0d0e1f30f5eb9d1d14a0c60af22" exitCode=143 Feb 19 19:48:54 crc kubenswrapper[4813]: I0219 19:48:54.980224 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"03352d3c-6295-43f9-846b-d76ffc34f69b","Type":"ContainerDied","Data":"0899275fcbbfd746f6e378318e518d8d44a2f0d0e1f30f5eb9d1d14a0c60af22"} Feb 19 19:48:55 crc kubenswrapper[4813]: I0219 19:48:55.143944 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:48:55 crc kubenswrapper[4813]: I0219 19:48:55.144223 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:48:55 crc kubenswrapper[4813]: I0219 19:48:55.186687 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:48:55 crc kubenswrapper[4813]: I0219 19:48:55.489003 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 19 19:48:55 crc kubenswrapper[4813]: I0219 19:48:55.604676 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvlj6\" (UniqueName: \"kubernetes.io/projected/03352d3c-6295-43f9-846b-d76ffc34f69b-kube-api-access-bvlj6\") pod \"03352d3c-6295-43f9-846b-d76ffc34f69b\" (UID: \"03352d3c-6295-43f9-846b-d76ffc34f69b\") " Feb 19 19:48:55 crc kubenswrapper[4813]: I0219 19:48:55.611404 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03352d3c-6295-43f9-846b-d76ffc34f69b-kube-api-access-bvlj6" (OuterVolumeSpecName: "kube-api-access-bvlj6") pod "03352d3c-6295-43f9-846b-d76ffc34f69b" (UID: "03352d3c-6295-43f9-846b-d76ffc34f69b"). InnerVolumeSpecName "kube-api-access-bvlj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:48:55 crc kubenswrapper[4813]: I0219 19:48:55.706410 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvlj6\" (UniqueName: \"kubernetes.io/projected/03352d3c-6295-43f9-846b-d76ffc34f69b-kube-api-access-bvlj6\") on node \"crc\" DevicePath \"\"" Feb 19 19:48:55 crc kubenswrapper[4813]: I0219 19:48:55.989992 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 19:48:55 crc kubenswrapper[4813]: I0219 19:48:55.990841 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"03352d3c-6295-43f9-846b-d76ffc34f69b","Type":"ContainerDied","Data":"02b75ceda423dbe03f99b4d2fbbf99a3276228b496aba731d8285c56085cf9f2"} Feb 19 19:48:55 crc kubenswrapper[4813]: I0219 19:48:55.990924 4813 scope.go:117] "RemoveContainer" containerID="0899275fcbbfd746f6e378318e518d8d44a2f0d0e1f30f5eb9d1d14a0c60af22" Feb 19 19:48:56 crc kubenswrapper[4813]: I0219 19:48:56.024996 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 19 19:48:56 crc kubenswrapper[4813]: I0219 19:48:56.036682 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 19 19:48:57 crc kubenswrapper[4813]: I0219 19:48:57.490895 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03352d3c-6295-43f9-846b-d76ffc34f69b" path="/var/lib/kubelet/pods/03352d3c-6295-43f9-846b-d76ffc34f69b/volumes" Feb 19 19:49:02 crc kubenswrapper[4813]: E0219 19:49:02.583578 4813 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.112s" Feb 19 19:49:05 crc kubenswrapper[4813]: I0219 19:49:05.187828 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:49:05 crc kubenswrapper[4813]: I0219 19:49:05.232665 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wffd8"] Feb 19 19:49:05 crc kubenswrapper[4813]: I0219 19:49:05.623645 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wffd8" podUID="914be9d7-de4c-4da1-b91b-04682687c033" containerName="registry-server" containerID="cri-o://a2e314a2c1dc7423614023cb52e4a501a3bfb2dc15684d3fdb6214731085c04d" 
gracePeriod=2 Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.060103 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.222444 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxmgc\" (UniqueName: \"kubernetes.io/projected/914be9d7-de4c-4da1-b91b-04682687c033-kube-api-access-bxmgc\") pod \"914be9d7-de4c-4da1-b91b-04682687c033\" (UID: \"914be9d7-de4c-4da1-b91b-04682687c033\") " Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.222623 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914be9d7-de4c-4da1-b91b-04682687c033-utilities\") pod \"914be9d7-de4c-4da1-b91b-04682687c033\" (UID: \"914be9d7-de4c-4da1-b91b-04682687c033\") " Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.222823 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914be9d7-de4c-4da1-b91b-04682687c033-catalog-content\") pod \"914be9d7-de4c-4da1-b91b-04682687c033\" (UID: \"914be9d7-de4c-4da1-b91b-04682687c033\") " Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.223866 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914be9d7-de4c-4da1-b91b-04682687c033-utilities" (OuterVolumeSpecName: "utilities") pod "914be9d7-de4c-4da1-b91b-04682687c033" (UID: "914be9d7-de4c-4da1-b91b-04682687c033"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.229422 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914be9d7-de4c-4da1-b91b-04682687c033-kube-api-access-bxmgc" (OuterVolumeSpecName: "kube-api-access-bxmgc") pod "914be9d7-de4c-4da1-b91b-04682687c033" (UID: "914be9d7-de4c-4da1-b91b-04682687c033"). InnerVolumeSpecName "kube-api-access-bxmgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.279244 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914be9d7-de4c-4da1-b91b-04682687c033-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "914be9d7-de4c-4da1-b91b-04682687c033" (UID: "914be9d7-de4c-4da1-b91b-04682687c033"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.325192 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/914be9d7-de4c-4da1-b91b-04682687c033-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.325240 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxmgc\" (UniqueName: \"kubernetes.io/projected/914be9d7-de4c-4da1-b91b-04682687c033-kube-api-access-bxmgc\") on node \"crc\" DevicePath \"\"" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.325255 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/914be9d7-de4c-4da1-b91b-04682687c033-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.637152 4813 generic.go:334] "Generic (PLEG): container finished" podID="914be9d7-de4c-4da1-b91b-04682687c033" 
containerID="a2e314a2c1dc7423614023cb52e4a501a3bfb2dc15684d3fdb6214731085c04d" exitCode=0 Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.637205 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wffd8" event={"ID":"914be9d7-de4c-4da1-b91b-04682687c033","Type":"ContainerDied","Data":"a2e314a2c1dc7423614023cb52e4a501a3bfb2dc15684d3fdb6214731085c04d"} Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.637246 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wffd8" event={"ID":"914be9d7-de4c-4da1-b91b-04682687c033","Type":"ContainerDied","Data":"1c87b9df1fd82e8a34d7ac4800cf9caf30b0c05c8391179ed75c94a5a1a22632"} Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.637280 4813 scope.go:117] "RemoveContainer" containerID="a2e314a2c1dc7423614023cb52e4a501a3bfb2dc15684d3fdb6214731085c04d" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.637290 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wffd8" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.670773 4813 scope.go:117] "RemoveContainer" containerID="ce06e8e84100428f8171885e4fbe027c141576bd4546bd5635d5246810480684" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.679905 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wffd8"] Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.686680 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wffd8"] Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.694160 4813 scope.go:117] "RemoveContainer" containerID="23458778d64dafc3008533ccedc471265809d86c2c7fcdd569468675fef4e8eb" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.731935 4813 scope.go:117] "RemoveContainer" containerID="a2e314a2c1dc7423614023cb52e4a501a3bfb2dc15684d3fdb6214731085c04d" Feb 19 19:49:06 crc kubenswrapper[4813]: E0219 19:49:06.732554 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2e314a2c1dc7423614023cb52e4a501a3bfb2dc15684d3fdb6214731085c04d\": container with ID starting with a2e314a2c1dc7423614023cb52e4a501a3bfb2dc15684d3fdb6214731085c04d not found: ID does not exist" containerID="a2e314a2c1dc7423614023cb52e4a501a3bfb2dc15684d3fdb6214731085c04d" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.732603 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e314a2c1dc7423614023cb52e4a501a3bfb2dc15684d3fdb6214731085c04d"} err="failed to get container status \"a2e314a2c1dc7423614023cb52e4a501a3bfb2dc15684d3fdb6214731085c04d\": rpc error: code = NotFound desc = could not find container \"a2e314a2c1dc7423614023cb52e4a501a3bfb2dc15684d3fdb6214731085c04d\": container with ID starting with a2e314a2c1dc7423614023cb52e4a501a3bfb2dc15684d3fdb6214731085c04d not 
found: ID does not exist" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.732628 4813 scope.go:117] "RemoveContainer" containerID="ce06e8e84100428f8171885e4fbe027c141576bd4546bd5635d5246810480684" Feb 19 19:49:06 crc kubenswrapper[4813]: E0219 19:49:06.732997 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce06e8e84100428f8171885e4fbe027c141576bd4546bd5635d5246810480684\": container with ID starting with ce06e8e84100428f8171885e4fbe027c141576bd4546bd5635d5246810480684 not found: ID does not exist" containerID="ce06e8e84100428f8171885e4fbe027c141576bd4546bd5635d5246810480684" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.733018 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce06e8e84100428f8171885e4fbe027c141576bd4546bd5635d5246810480684"} err="failed to get container status \"ce06e8e84100428f8171885e4fbe027c141576bd4546bd5635d5246810480684\": rpc error: code = NotFound desc = could not find container \"ce06e8e84100428f8171885e4fbe027c141576bd4546bd5635d5246810480684\": container with ID starting with ce06e8e84100428f8171885e4fbe027c141576bd4546bd5635d5246810480684 not found: ID does not exist" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.733033 4813 scope.go:117] "RemoveContainer" containerID="23458778d64dafc3008533ccedc471265809d86c2c7fcdd569468675fef4e8eb" Feb 19 19:49:06 crc kubenswrapper[4813]: E0219 19:49:06.733266 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23458778d64dafc3008533ccedc471265809d86c2c7fcdd569468675fef4e8eb\": container with ID starting with 23458778d64dafc3008533ccedc471265809d86c2c7fcdd569468675fef4e8eb not found: ID does not exist" containerID="23458778d64dafc3008533ccedc471265809d86c2c7fcdd569468675fef4e8eb" Feb 19 19:49:06 crc kubenswrapper[4813]: I0219 19:49:06.733298 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23458778d64dafc3008533ccedc471265809d86c2c7fcdd569468675fef4e8eb"} err="failed to get container status \"23458778d64dafc3008533ccedc471265809d86c2c7fcdd569468675fef4e8eb\": rpc error: code = NotFound desc = could not find container \"23458778d64dafc3008533ccedc471265809d86c2c7fcdd569468675fef4e8eb\": container with ID starting with 23458778d64dafc3008533ccedc471265809d86c2c7fcdd569468675fef4e8eb not found: ID does not exist" Feb 19 19:49:07 crc kubenswrapper[4813]: I0219 19:49:07.488498 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="914be9d7-de4c-4da1-b91b-04682687c033" path="/var/lib/kubelet/pods/914be9d7-de4c-4da1-b91b-04682687c033/volumes" Feb 19 19:49:52 crc kubenswrapper[4813]: I0219 19:49:52.078920 4813 scope.go:117] "RemoveContainer" containerID="2ea0be7576895785ae1c0e130476667da1300603cbd9d18f56456c3f8e3b4e83" Feb 19 19:50:00 crc kubenswrapper[4813]: I0219 19:50:00.330499 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:50:00 crc kubenswrapper[4813]: I0219 19:50:00.331063 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:50:30 crc kubenswrapper[4813]: I0219 19:50:30.330043 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:50:30 crc kubenswrapper[4813]: I0219 19:50:30.330581 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:51:00 crc kubenswrapper[4813]: I0219 19:51:00.329694 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:51:00 crc kubenswrapper[4813]: I0219 19:51:00.330277 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:51:00 crc kubenswrapper[4813]: I0219 19:51:00.330320 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 19:51:00 crc kubenswrapper[4813]: I0219 19:51:00.330725 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be38fd9999c563e4734db89d3c18752f33e9f41c50b92368d55ff07e1436c0c2"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:51:00 crc kubenswrapper[4813]: I0219 19:51:00.330767 4813 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://be38fd9999c563e4734db89d3c18752f33e9f41c50b92368d55ff07e1436c0c2" gracePeriod=600 Feb 19 19:51:01 crc kubenswrapper[4813]: I0219 19:51:01.090105 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="be38fd9999c563e4734db89d3c18752f33e9f41c50b92368d55ff07e1436c0c2" exitCode=0 Feb 19 19:51:01 crc kubenswrapper[4813]: I0219 19:51:01.090186 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"be38fd9999c563e4734db89d3c18752f33e9f41c50b92368d55ff07e1436c0c2"} Feb 19 19:51:01 crc kubenswrapper[4813]: I0219 19:51:01.090503 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f"} Feb 19 19:51:01 crc kubenswrapper[4813]: I0219 19:51:01.090530 4813 scope.go:117] "RemoveContainer" containerID="eeed46d1b34159de66f42948a66c3d63b5a21a3caa6efa2e5b0c9cefdb6e91ba" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.319895 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lq4rz"] Feb 19 19:51:07 crc kubenswrapper[4813]: E0219 19:51:07.320863 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914be9d7-de4c-4da1-b91b-04682687c033" containerName="extract-utilities" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.320882 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="914be9d7-de4c-4da1-b91b-04682687c033" containerName="extract-utilities" Feb 19 19:51:07 crc kubenswrapper[4813]: E0219 19:51:07.320902 
4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914be9d7-de4c-4da1-b91b-04682687c033" containerName="extract-content" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.320910 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="914be9d7-de4c-4da1-b91b-04682687c033" containerName="extract-content" Feb 19 19:51:07 crc kubenswrapper[4813]: E0219 19:51:07.320935 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03352d3c-6295-43f9-846b-d76ffc34f69b" containerName="mariadb-client" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.320944 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="03352d3c-6295-43f9-846b-d76ffc34f69b" containerName="mariadb-client" Feb 19 19:51:07 crc kubenswrapper[4813]: E0219 19:51:07.321280 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914be9d7-de4c-4da1-b91b-04682687c033" containerName="registry-server" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.321307 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="914be9d7-de4c-4da1-b91b-04682687c033" containerName="registry-server" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.321493 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="914be9d7-de4c-4da1-b91b-04682687c033" containerName="registry-server" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.321526 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="03352d3c-6295-43f9-846b-d76ffc34f69b" containerName="mariadb-client" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.322946 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.330623 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lq4rz"] Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.431666 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce813641-c480-484a-9310-134842a3e017-utilities\") pod \"redhat-marketplace-lq4rz\" (UID: \"ce813641-c480-484a-9310-134842a3e017\") " pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.431759 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvsbh\" (UniqueName: \"kubernetes.io/projected/ce813641-c480-484a-9310-134842a3e017-kube-api-access-hvsbh\") pod \"redhat-marketplace-lq4rz\" (UID: \"ce813641-c480-484a-9310-134842a3e017\") " pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.432116 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce813641-c480-484a-9310-134842a3e017-catalog-content\") pod \"redhat-marketplace-lq4rz\" (UID: \"ce813641-c480-484a-9310-134842a3e017\") " pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.518015 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hg2kl"] Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.519764 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.528047 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hg2kl"] Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.533809 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce813641-c480-484a-9310-134842a3e017-catalog-content\") pod \"redhat-marketplace-lq4rz\" (UID: \"ce813641-c480-484a-9310-134842a3e017\") " pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.533870 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce813641-c480-484a-9310-134842a3e017-utilities\") pod \"redhat-marketplace-lq4rz\" (UID: \"ce813641-c480-484a-9310-134842a3e017\") " pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.533931 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvsbh\" (UniqueName: \"kubernetes.io/projected/ce813641-c480-484a-9310-134842a3e017-kube-api-access-hvsbh\") pod \"redhat-marketplace-lq4rz\" (UID: \"ce813641-c480-484a-9310-134842a3e017\") " pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.535008 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce813641-c480-484a-9310-134842a3e017-utilities\") pod \"redhat-marketplace-lq4rz\" (UID: \"ce813641-c480-484a-9310-134842a3e017\") " pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.535033 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ce813641-c480-484a-9310-134842a3e017-catalog-content\") pod \"redhat-marketplace-lq4rz\" (UID: \"ce813641-c480-484a-9310-134842a3e017\") " pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.558915 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvsbh\" (UniqueName: \"kubernetes.io/projected/ce813641-c480-484a-9310-134842a3e017-kube-api-access-hvsbh\") pod \"redhat-marketplace-lq4rz\" (UID: \"ce813641-c480-484a-9310-134842a3e017\") " pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.635867 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-catalog-content\") pod \"redhat-operators-hg2kl\" (UID: \"0a8e0ccd-085a-4b00-af05-9418bdaa7e63\") " pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.636511 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlvbh\" (UniqueName: \"kubernetes.io/projected/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-kube-api-access-vlvbh\") pod \"redhat-operators-hg2kl\" (UID: \"0a8e0ccd-085a-4b00-af05-9418bdaa7e63\") " pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.636718 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-utilities\") pod \"redhat-operators-hg2kl\" (UID: \"0a8e0ccd-085a-4b00-af05-9418bdaa7e63\") " pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.652129 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.738070 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlvbh\" (UniqueName: \"kubernetes.io/projected/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-kube-api-access-vlvbh\") pod \"redhat-operators-hg2kl\" (UID: \"0a8e0ccd-085a-4b00-af05-9418bdaa7e63\") " pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.738167 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-utilities\") pod \"redhat-operators-hg2kl\" (UID: \"0a8e0ccd-085a-4b00-af05-9418bdaa7e63\") " pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.738250 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-catalog-content\") pod \"redhat-operators-hg2kl\" (UID: \"0a8e0ccd-085a-4b00-af05-9418bdaa7e63\") " pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.738753 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-utilities\") pod \"redhat-operators-hg2kl\" (UID: \"0a8e0ccd-085a-4b00-af05-9418bdaa7e63\") " pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.738808 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-catalog-content\") pod \"redhat-operators-hg2kl\" (UID: \"0a8e0ccd-085a-4b00-af05-9418bdaa7e63\") " 
pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.758668 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlvbh\" (UniqueName: \"kubernetes.io/projected/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-kube-api-access-vlvbh\") pod \"redhat-operators-hg2kl\" (UID: \"0a8e0ccd-085a-4b00-af05-9418bdaa7e63\") " pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:07 crc kubenswrapper[4813]: I0219 19:51:07.862366 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:08 crc kubenswrapper[4813]: I0219 19:51:08.224946 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lq4rz"] Feb 19 19:51:08 crc kubenswrapper[4813]: I0219 19:51:08.324773 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hg2kl"] Feb 19 19:51:08 crc kubenswrapper[4813]: W0219 19:51:08.331778 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a8e0ccd_085a_4b00_af05_9418bdaa7e63.slice/crio-e58419c28096b00e8c190f8a465e784bef196924dc1d383b536b0b4c50daa6d7 WatchSource:0}: Error finding container e58419c28096b00e8c190f8a465e784bef196924dc1d383b536b0b4c50daa6d7: Status 404 returned error can't find the container with id e58419c28096b00e8c190f8a465e784bef196924dc1d383b536b0b4c50daa6d7 Feb 19 19:51:09 crc kubenswrapper[4813]: I0219 19:51:09.158855 4813 generic.go:334] "Generic (PLEG): container finished" podID="ce813641-c480-484a-9310-134842a3e017" containerID="d9845eb4d820d294e57ab6e63961c22781a096ea50a3f18d68a2f831f999774f" exitCode=0 Feb 19 19:51:09 crc kubenswrapper[4813]: I0219 19:51:09.158927 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq4rz" 
event={"ID":"ce813641-c480-484a-9310-134842a3e017","Type":"ContainerDied","Data":"d9845eb4d820d294e57ab6e63961c22781a096ea50a3f18d68a2f831f999774f"} Feb 19 19:51:09 crc kubenswrapper[4813]: I0219 19:51:09.158968 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq4rz" event={"ID":"ce813641-c480-484a-9310-134842a3e017","Type":"ContainerStarted","Data":"9f1d5a093f9abe686ab253ce3a9ac39891ec54a16d5554e305065a8315edba5f"} Feb 19 19:51:09 crc kubenswrapper[4813]: I0219 19:51:09.160586 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 19:51:09 crc kubenswrapper[4813]: I0219 19:51:09.160671 4813 generic.go:334] "Generic (PLEG): container finished" podID="0a8e0ccd-085a-4b00-af05-9418bdaa7e63" containerID="aad437dc1c1ed991881ebeff2cdd2c81c43c11fb08ae256f8934d6cc124db4c9" exitCode=0 Feb 19 19:51:09 crc kubenswrapper[4813]: I0219 19:51:09.160698 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hg2kl" event={"ID":"0a8e0ccd-085a-4b00-af05-9418bdaa7e63","Type":"ContainerDied","Data":"aad437dc1c1ed991881ebeff2cdd2c81c43c11fb08ae256f8934d6cc124db4c9"} Feb 19 19:51:09 crc kubenswrapper[4813]: I0219 19:51:09.160742 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hg2kl" event={"ID":"0a8e0ccd-085a-4b00-af05-9418bdaa7e63","Type":"ContainerStarted","Data":"e58419c28096b00e8c190f8a465e784bef196924dc1d383b536b0b4c50daa6d7"} Feb 19 19:51:10 crc kubenswrapper[4813]: I0219 19:51:10.170893 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq4rz" event={"ID":"ce813641-c480-484a-9310-134842a3e017","Type":"ContainerStarted","Data":"c1a64e8f70aec88f194caacd6ce70de4fca8666f0ad4a939bc7c9443ce5d71ef"} Feb 19 19:51:11 crc kubenswrapper[4813]: I0219 19:51:11.178812 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="ce813641-c480-484a-9310-134842a3e017" containerID="c1a64e8f70aec88f194caacd6ce70de4fca8666f0ad4a939bc7c9443ce5d71ef" exitCode=0 Feb 19 19:51:11 crc kubenswrapper[4813]: I0219 19:51:11.178861 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq4rz" event={"ID":"ce813641-c480-484a-9310-134842a3e017","Type":"ContainerDied","Data":"c1a64e8f70aec88f194caacd6ce70de4fca8666f0ad4a939bc7c9443ce5d71ef"} Feb 19 19:51:11 crc kubenswrapper[4813]: I0219 19:51:11.181038 4813 generic.go:334] "Generic (PLEG): container finished" podID="0a8e0ccd-085a-4b00-af05-9418bdaa7e63" containerID="8ad1ecf5b853a84ace4e3310e96c720566467161022c24a9bf51de327b376a60" exitCode=0 Feb 19 19:51:11 crc kubenswrapper[4813]: I0219 19:51:11.181091 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hg2kl" event={"ID":"0a8e0ccd-085a-4b00-af05-9418bdaa7e63","Type":"ContainerDied","Data":"8ad1ecf5b853a84ace4e3310e96c720566467161022c24a9bf51de327b376a60"} Feb 19 19:51:12 crc kubenswrapper[4813]: I0219 19:51:12.191697 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hg2kl" event={"ID":"0a8e0ccd-085a-4b00-af05-9418bdaa7e63","Type":"ContainerStarted","Data":"ea97735b15610a2d3e1bc1c5b2f11ea2697bd9c9fcabec6d0032bf3b76b03954"} Feb 19 19:51:12 crc kubenswrapper[4813]: I0219 19:51:12.214375 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hg2kl" podStartSLOduration=2.524869287 podStartE2EDuration="5.21435333s" podCreationTimestamp="2026-02-19 19:51:07 +0000 UTC" firstStartedPulling="2026-02-19 19:51:09.162138943 +0000 UTC m=+4888.387579484" lastFinishedPulling="2026-02-19 19:51:11.851622986 +0000 UTC m=+4891.077063527" observedRunningTime="2026-02-19 19:51:12.21009789 +0000 UTC m=+4891.435538441" watchObservedRunningTime="2026-02-19 19:51:12.21435333 +0000 UTC m=+4891.439793871" Feb 19 19:51:13 crc 
kubenswrapper[4813]: I0219 19:51:13.199842 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq4rz" event={"ID":"ce813641-c480-484a-9310-134842a3e017","Type":"ContainerStarted","Data":"2c324f1dc319b21d7a57f582e17a1ba5693d4fd0fc6584a55f8bc9800c85b155"} Feb 19 19:51:13 crc kubenswrapper[4813]: I0219 19:51:13.220805 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lq4rz" podStartSLOduration=3.355011692 podStartE2EDuration="6.220787636s" podCreationTimestamp="2026-02-19 19:51:07 +0000 UTC" firstStartedPulling="2026-02-19 19:51:09.160363458 +0000 UTC m=+4888.385803999" lastFinishedPulling="2026-02-19 19:51:12.026139402 +0000 UTC m=+4891.251579943" observedRunningTime="2026-02-19 19:51:13.218975049 +0000 UTC m=+4892.444415600" watchObservedRunningTime="2026-02-19 19:51:13.220787636 +0000 UTC m=+4892.446228167" Feb 19 19:51:17 crc kubenswrapper[4813]: I0219 19:51:17.652897 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:17 crc kubenswrapper[4813]: I0219 19:51:17.653433 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:17 crc kubenswrapper[4813]: I0219 19:51:17.694653 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:17 crc kubenswrapper[4813]: I0219 19:51:17.863194 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:17 crc kubenswrapper[4813]: I0219 19:51:17.863251 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:17 crc kubenswrapper[4813]: I0219 19:51:17.914986 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:18 crc kubenswrapper[4813]: I0219 19:51:18.305446 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:18 crc kubenswrapper[4813]: I0219 19:51:18.320511 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:19 crc kubenswrapper[4813]: I0219 19:51:19.325585 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lq4rz"] Feb 19 19:51:20 crc kubenswrapper[4813]: I0219 19:51:20.724685 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hg2kl"] Feb 19 19:51:20 crc kubenswrapper[4813]: I0219 19:51:20.725726 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hg2kl" podUID="0a8e0ccd-085a-4b00-af05-9418bdaa7e63" containerName="registry-server" containerID="cri-o://ea97735b15610a2d3e1bc1c5b2f11ea2697bd9c9fcabec6d0032bf3b76b03954" gracePeriod=2 Feb 19 19:51:21 crc kubenswrapper[4813]: I0219 19:51:21.272195 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lq4rz" podUID="ce813641-c480-484a-9310-134842a3e017" containerName="registry-server" containerID="cri-o://2c324f1dc319b21d7a57f582e17a1ba5693d4fd0fc6584a55f8bc9800c85b155" gracePeriod=2 Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.285366 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.285768 4813 generic.go:334] "Generic (PLEG): container finished" podID="ce813641-c480-484a-9310-134842a3e017" containerID="2c324f1dc319b21d7a57f582e17a1ba5693d4fd0fc6584a55f8bc9800c85b155" exitCode=0 Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.285800 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq4rz" event={"ID":"ce813641-c480-484a-9310-134842a3e017","Type":"ContainerDied","Data":"2c324f1dc319b21d7a57f582e17a1ba5693d4fd0fc6584a55f8bc9800c85b155"} Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.288478 4813 generic.go:334] "Generic (PLEG): container finished" podID="0a8e0ccd-085a-4b00-af05-9418bdaa7e63" containerID="ea97735b15610a2d3e1bc1c5b2f11ea2697bd9c9fcabec6d0032bf3b76b03954" exitCode=0 Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.288505 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hg2kl" event={"ID":"0a8e0ccd-085a-4b00-af05-9418bdaa7e63","Type":"ContainerDied","Data":"ea97735b15610a2d3e1bc1c5b2f11ea2697bd9c9fcabec6d0032bf3b76b03954"} Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.288523 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hg2kl" event={"ID":"0a8e0ccd-085a-4b00-af05-9418bdaa7e63","Type":"ContainerDied","Data":"e58419c28096b00e8c190f8a465e784bef196924dc1d383b536b0b4c50daa6d7"} Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.288537 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hg2kl" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.288538 4813 scope.go:117] "RemoveContainer" containerID="ea97735b15610a2d3e1bc1c5b2f11ea2697bd9c9fcabec6d0032bf3b76b03954" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.316401 4813 scope.go:117] "RemoveContainer" containerID="8ad1ecf5b853a84ace4e3310e96c720566467161022c24a9bf51de327b376a60" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.345008 4813 scope.go:117] "RemoveContainer" containerID="aad437dc1c1ed991881ebeff2cdd2c81c43c11fb08ae256f8934d6cc124db4c9" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.379814 4813 scope.go:117] "RemoveContainer" containerID="ea97735b15610a2d3e1bc1c5b2f11ea2697bd9c9fcabec6d0032bf3b76b03954" Feb 19 19:51:22 crc kubenswrapper[4813]: E0219 19:51:22.380393 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea97735b15610a2d3e1bc1c5b2f11ea2697bd9c9fcabec6d0032bf3b76b03954\": container with ID starting with ea97735b15610a2d3e1bc1c5b2f11ea2697bd9c9fcabec6d0032bf3b76b03954 not found: ID does not exist" containerID="ea97735b15610a2d3e1bc1c5b2f11ea2697bd9c9fcabec6d0032bf3b76b03954" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.380425 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea97735b15610a2d3e1bc1c5b2f11ea2697bd9c9fcabec6d0032bf3b76b03954"} err="failed to get container status \"ea97735b15610a2d3e1bc1c5b2f11ea2697bd9c9fcabec6d0032bf3b76b03954\": rpc error: code = NotFound desc = could not find container \"ea97735b15610a2d3e1bc1c5b2f11ea2697bd9c9fcabec6d0032bf3b76b03954\": container with ID starting with ea97735b15610a2d3e1bc1c5b2f11ea2697bd9c9fcabec6d0032bf3b76b03954 not found: ID does not exist" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.380445 4813 scope.go:117] "RemoveContainer" 
containerID="8ad1ecf5b853a84ace4e3310e96c720566467161022c24a9bf51de327b376a60" Feb 19 19:51:22 crc kubenswrapper[4813]: E0219 19:51:22.380763 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ad1ecf5b853a84ace4e3310e96c720566467161022c24a9bf51de327b376a60\": container with ID starting with 8ad1ecf5b853a84ace4e3310e96c720566467161022c24a9bf51de327b376a60 not found: ID does not exist" containerID="8ad1ecf5b853a84ace4e3310e96c720566467161022c24a9bf51de327b376a60" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.380783 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ad1ecf5b853a84ace4e3310e96c720566467161022c24a9bf51de327b376a60"} err="failed to get container status \"8ad1ecf5b853a84ace4e3310e96c720566467161022c24a9bf51de327b376a60\": rpc error: code = NotFound desc = could not find container \"8ad1ecf5b853a84ace4e3310e96c720566467161022c24a9bf51de327b376a60\": container with ID starting with 8ad1ecf5b853a84ace4e3310e96c720566467161022c24a9bf51de327b376a60 not found: ID does not exist" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.380798 4813 scope.go:117] "RemoveContainer" containerID="aad437dc1c1ed991881ebeff2cdd2c81c43c11fb08ae256f8934d6cc124db4c9" Feb 19 19:51:22 crc kubenswrapper[4813]: E0219 19:51:22.381136 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aad437dc1c1ed991881ebeff2cdd2c81c43c11fb08ae256f8934d6cc124db4c9\": container with ID starting with aad437dc1c1ed991881ebeff2cdd2c81c43c11fb08ae256f8934d6cc124db4c9 not found: ID does not exist" containerID="aad437dc1c1ed991881ebeff2cdd2c81c43c11fb08ae256f8934d6cc124db4c9" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.381163 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aad437dc1c1ed991881ebeff2cdd2c81c43c11fb08ae256f8934d6cc124db4c9"} err="failed to get container status \"aad437dc1c1ed991881ebeff2cdd2c81c43c11fb08ae256f8934d6cc124db4c9\": rpc error: code = NotFound desc = could not find container \"aad437dc1c1ed991881ebeff2cdd2c81c43c11fb08ae256f8934d6cc124db4c9\": container with ID starting with aad437dc1c1ed991881ebeff2cdd2c81c43c11fb08ae256f8934d6cc124db4c9 not found: ID does not exist" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.462406 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.476739 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlvbh\" (UniqueName: \"kubernetes.io/projected/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-kube-api-access-vlvbh\") pod \"0a8e0ccd-085a-4b00-af05-9418bdaa7e63\" (UID: \"0a8e0ccd-085a-4b00-af05-9418bdaa7e63\") " Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.476785 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-catalog-content\") pod \"0a8e0ccd-085a-4b00-af05-9418bdaa7e63\" (UID: \"0a8e0ccd-085a-4b00-af05-9418bdaa7e63\") " Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.476895 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-utilities\") pod \"0a8e0ccd-085a-4b00-af05-9418bdaa7e63\" (UID: \"0a8e0ccd-085a-4b00-af05-9418bdaa7e63\") " Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.480100 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-utilities" (OuterVolumeSpecName: "utilities") pod 
"0a8e0ccd-085a-4b00-af05-9418bdaa7e63" (UID: "0a8e0ccd-085a-4b00-af05-9418bdaa7e63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.481890 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.482105 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-kube-api-access-vlvbh" (OuterVolumeSpecName: "kube-api-access-vlvbh") pod "0a8e0ccd-085a-4b00-af05-9418bdaa7e63" (UID: "0a8e0ccd-085a-4b00-af05-9418bdaa7e63"). InnerVolumeSpecName "kube-api-access-vlvbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.582430 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce813641-c480-484a-9310-134842a3e017-utilities\") pod \"ce813641-c480-484a-9310-134842a3e017\" (UID: \"ce813641-c480-484a-9310-134842a3e017\") " Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.582935 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvsbh\" (UniqueName: \"kubernetes.io/projected/ce813641-c480-484a-9310-134842a3e017-kube-api-access-hvsbh\") pod \"ce813641-c480-484a-9310-134842a3e017\" (UID: \"ce813641-c480-484a-9310-134842a3e017\") " Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.584224 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce813641-c480-484a-9310-134842a3e017-catalog-content\") pod \"ce813641-c480-484a-9310-134842a3e017\" (UID: \"ce813641-c480-484a-9310-134842a3e017\") " Feb 19 19:51:22 crc 
kubenswrapper[4813]: I0219 19:51:22.584735 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlvbh\" (UniqueName: \"kubernetes.io/projected/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-kube-api-access-vlvbh\") on node \"crc\" DevicePath \"\"" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.586041 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce813641-c480-484a-9310-134842a3e017-kube-api-access-hvsbh" (OuterVolumeSpecName: "kube-api-access-hvsbh") pod "ce813641-c480-484a-9310-134842a3e017" (UID: "ce813641-c480-484a-9310-134842a3e017"). InnerVolumeSpecName "kube-api-access-hvsbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.590988 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce813641-c480-484a-9310-134842a3e017-utilities" (OuterVolumeSpecName: "utilities") pod "ce813641-c480-484a-9310-134842a3e017" (UID: "ce813641-c480-484a-9310-134842a3e017"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.610165 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a8e0ccd-085a-4b00-af05-9418bdaa7e63" (UID: "0a8e0ccd-085a-4b00-af05-9418bdaa7e63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.610599 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce813641-c480-484a-9310-134842a3e017-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce813641-c480-484a-9310-134842a3e017" (UID: "ce813641-c480-484a-9310-134842a3e017"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.685817 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce813641-c480-484a-9310-134842a3e017-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.685867 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvsbh\" (UniqueName: \"kubernetes.io/projected/ce813641-c480-484a-9310-134842a3e017-kube-api-access-hvsbh\") on node \"crc\" DevicePath \"\"" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.685879 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8e0ccd-085a-4b00-af05-9418bdaa7e63-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.685889 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce813641-c480-484a-9310-134842a3e017-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.918263 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hg2kl"] Feb 19 19:51:22 crc kubenswrapper[4813]: I0219 19:51:22.923864 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hg2kl"] Feb 19 19:51:23 crc kubenswrapper[4813]: I0219 19:51:23.297056 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lq4rz" event={"ID":"ce813641-c480-484a-9310-134842a3e017","Type":"ContainerDied","Data":"9f1d5a093f9abe686ab253ce3a9ac39891ec54a16d5554e305065a8315edba5f"} Feb 19 19:51:23 crc kubenswrapper[4813]: I0219 19:51:23.297133 4813 scope.go:117] "RemoveContainer" containerID="2c324f1dc319b21d7a57f582e17a1ba5693d4fd0fc6584a55f8bc9800c85b155" Feb 19 19:51:23 crc 
kubenswrapper[4813]: I0219 19:51:23.297389 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lq4rz" Feb 19 19:51:23 crc kubenswrapper[4813]: I0219 19:51:23.327369 4813 scope.go:117] "RemoveContainer" containerID="c1a64e8f70aec88f194caacd6ce70de4fca8666f0ad4a939bc7c9443ce5d71ef" Feb 19 19:51:23 crc kubenswrapper[4813]: I0219 19:51:23.331075 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lq4rz"] Feb 19 19:51:23 crc kubenswrapper[4813]: I0219 19:51:23.340010 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lq4rz"] Feb 19 19:51:23 crc kubenswrapper[4813]: I0219 19:51:23.350406 4813 scope.go:117] "RemoveContainer" containerID="d9845eb4d820d294e57ab6e63961c22781a096ea50a3f18d68a2f831f999774f" Feb 19 19:51:23 crc kubenswrapper[4813]: I0219 19:51:23.480301 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a8e0ccd-085a-4b00-af05-9418bdaa7e63" path="/var/lib/kubelet/pods/0a8e0ccd-085a-4b00-af05-9418bdaa7e63/volumes" Feb 19 19:51:23 crc kubenswrapper[4813]: I0219 19:51:23.481131 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce813641-c480-484a-9310-134842a3e017" path="/var/lib/kubelet/pods/ce813641-c480-484a-9310-134842a3e017/volumes" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.195659 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 19:52:47 crc kubenswrapper[4813]: E0219 19:52:47.196471 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8e0ccd-085a-4b00-af05-9418bdaa7e63" containerName="registry-server" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.196483 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8e0ccd-085a-4b00-af05-9418bdaa7e63" containerName="registry-server" Feb 19 19:52:47 crc kubenswrapper[4813]: E0219 19:52:47.196500 4813 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8e0ccd-085a-4b00-af05-9418bdaa7e63" containerName="extract-content" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.196506 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8e0ccd-085a-4b00-af05-9418bdaa7e63" containerName="extract-content" Feb 19 19:52:47 crc kubenswrapper[4813]: E0219 19:52:47.196519 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce813641-c480-484a-9310-134842a3e017" containerName="registry-server" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.196525 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce813641-c480-484a-9310-134842a3e017" containerName="registry-server" Feb 19 19:52:47 crc kubenswrapper[4813]: E0219 19:52:47.196541 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce813641-c480-484a-9310-134842a3e017" containerName="extract-utilities" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.196547 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce813641-c480-484a-9310-134842a3e017" containerName="extract-utilities" Feb 19 19:52:47 crc kubenswrapper[4813]: E0219 19:52:47.196560 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce813641-c480-484a-9310-134842a3e017" containerName="extract-content" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.196566 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce813641-c480-484a-9310-134842a3e017" containerName="extract-content" Feb 19 19:52:47 crc kubenswrapper[4813]: E0219 19:52:47.196574 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8e0ccd-085a-4b00-af05-9418bdaa7e63" containerName="extract-utilities" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.196580 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8e0ccd-085a-4b00-af05-9418bdaa7e63" containerName="extract-utilities" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.196723 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ce813641-c480-484a-9310-134842a3e017" containerName="registry-server" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.196738 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8e0ccd-085a-4b00-af05-9418bdaa7e63" containerName="registry-server" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.197305 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.203626 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.204705 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-87mg8" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.264314 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvxcc\" (UniqueName: \"kubernetes.io/projected/fa05940d-c476-4e31-adce-a834c33da6de-kube-api-access-qvxcc\") pod \"mariadb-copy-data\" (UID: \"fa05940d-c476-4e31-adce-a834c33da6de\") " pod="openstack/mariadb-copy-data" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.264517 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc3120ee-a320-435e-a3db-082532da4fe5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc3120ee-a320-435e-a3db-082532da4fe5\") pod \"mariadb-copy-data\" (UID: \"fa05940d-c476-4e31-adce-a834c33da6de\") " pod="openstack/mariadb-copy-data" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.384754 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc3120ee-a320-435e-a3db-082532da4fe5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc3120ee-a320-435e-a3db-082532da4fe5\") pod \"mariadb-copy-data\" (UID: 
\"fa05940d-c476-4e31-adce-a834c33da6de\") " pod="openstack/mariadb-copy-data" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.384851 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvxcc\" (UniqueName: \"kubernetes.io/projected/fa05940d-c476-4e31-adce-a834c33da6de-kube-api-access-qvxcc\") pod \"mariadb-copy-data\" (UID: \"fa05940d-c476-4e31-adce-a834c33da6de\") " pod="openstack/mariadb-copy-data" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.387869 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.387929 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc3120ee-a320-435e-a3db-082532da4fe5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc3120ee-a320-435e-a3db-082532da4fe5\") pod \"mariadb-copy-data\" (UID: \"fa05940d-c476-4e31-adce-a834c33da6de\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/db20e2716f7f7b5276269ce9862b8d7bded585882d33aa371fba5454d57a2c15/globalmount\"" pod="openstack/mariadb-copy-data" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.417460 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvxcc\" (UniqueName: \"kubernetes.io/projected/fa05940d-c476-4e31-adce-a834c33da6de-kube-api-access-qvxcc\") pod \"mariadb-copy-data\" (UID: \"fa05940d-c476-4e31-adce-a834c33da6de\") " pod="openstack/mariadb-copy-data" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.486254 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc3120ee-a320-435e-a3db-082532da4fe5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc3120ee-a320-435e-a3db-082532da4fe5\") pod \"mariadb-copy-data\" (UID: \"fa05940d-c476-4e31-adce-a834c33da6de\") " 
pod="openstack/mariadb-copy-data" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.514746 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Feb 19 19:52:47 crc kubenswrapper[4813]: I0219 19:52:47.999019 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 19:52:48 crc kubenswrapper[4813]: I0219 19:52:48.939738 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"fa05940d-c476-4e31-adce-a834c33da6de","Type":"ContainerStarted","Data":"77f43f65aacf0bda469e9fd3b3b4433ac810fd81b47e9dbe6274e2a56f94842e"} Feb 19 19:52:48 crc kubenswrapper[4813]: I0219 19:52:48.940918 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"fa05940d-c476-4e31-adce-a834c33da6de","Type":"ContainerStarted","Data":"7406040b5b44dbef32df4c5027a41fa12f00617c5a55428d5f6a903c915855c7"} Feb 19 19:52:48 crc kubenswrapper[4813]: I0219 19:52:48.959013 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.958990262 podStartE2EDuration="2.958990262s" podCreationTimestamp="2026-02-19 19:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:52:48.952849591 +0000 UTC m=+4988.178290132" watchObservedRunningTime="2026-02-19 19:52:48.958990262 +0000 UTC m=+4988.184430803" Feb 19 19:52:51 crc kubenswrapper[4813]: I0219 19:52:51.630795 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 19 19:52:51 crc kubenswrapper[4813]: I0219 19:52:51.632764 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 19:52:51 crc kubenswrapper[4813]: I0219 19:52:51.641509 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 19:52:51 crc kubenswrapper[4813]: I0219 19:52:51.753055 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt4vk\" (UniqueName: \"kubernetes.io/projected/db557844-83ff-415f-8484-b348d32eda3c-kube-api-access-vt4vk\") pod \"mariadb-client\" (UID: \"db557844-83ff-415f-8484-b348d32eda3c\") " pod="openstack/mariadb-client" Feb 19 19:52:51 crc kubenswrapper[4813]: I0219 19:52:51.854760 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt4vk\" (UniqueName: \"kubernetes.io/projected/db557844-83ff-415f-8484-b348d32eda3c-kube-api-access-vt4vk\") pod \"mariadb-client\" (UID: \"db557844-83ff-415f-8484-b348d32eda3c\") " pod="openstack/mariadb-client" Feb 19 19:52:51 crc kubenswrapper[4813]: I0219 19:52:51.872258 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt4vk\" (UniqueName: \"kubernetes.io/projected/db557844-83ff-415f-8484-b348d32eda3c-kube-api-access-vt4vk\") pod \"mariadb-client\" (UID: \"db557844-83ff-415f-8484-b348d32eda3c\") " pod="openstack/mariadb-client" Feb 19 19:52:51 crc kubenswrapper[4813]: I0219 19:52:51.992135 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 19:52:52 crc kubenswrapper[4813]: W0219 19:52:52.472241 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb557844_83ff_415f_8484_b348d32eda3c.slice/crio-b5f2536b36053a9e60a5fa4116dc17993764cc75724e7ea1eb4740d44e2182d4 WatchSource:0}: Error finding container b5f2536b36053a9e60a5fa4116dc17993764cc75724e7ea1eb4740d44e2182d4: Status 404 returned error can't find the container with id b5f2536b36053a9e60a5fa4116dc17993764cc75724e7ea1eb4740d44e2182d4 Feb 19 19:52:52 crc kubenswrapper[4813]: I0219 19:52:52.472983 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 19:52:52 crc kubenswrapper[4813]: I0219 19:52:52.967008 4813 generic.go:334] "Generic (PLEG): container finished" podID="db557844-83ff-415f-8484-b348d32eda3c" containerID="06e2c4b9404be97461c29ff744d68e485c3eee746844dbea733787dc2672202c" exitCode=0 Feb 19 19:52:52 crc kubenswrapper[4813]: I0219 19:52:52.967318 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"db557844-83ff-415f-8484-b348d32eda3c","Type":"ContainerDied","Data":"06e2c4b9404be97461c29ff744d68e485c3eee746844dbea733787dc2672202c"} Feb 19 19:52:52 crc kubenswrapper[4813]: I0219 19:52:52.967352 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"db557844-83ff-415f-8484-b348d32eda3c","Type":"ContainerStarted","Data":"b5f2536b36053a9e60a5fa4116dc17993764cc75724e7ea1eb4740d44e2182d4"} Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.249279 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.282998 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_db557844-83ff-415f-8484-b348d32eda3c/mariadb-client/0.log" Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.291658 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt4vk\" (UniqueName: \"kubernetes.io/projected/db557844-83ff-415f-8484-b348d32eda3c-kube-api-access-vt4vk\") pod \"db557844-83ff-415f-8484-b348d32eda3c\" (UID: \"db557844-83ff-415f-8484-b348d32eda3c\") " Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.303327 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db557844-83ff-415f-8484-b348d32eda3c-kube-api-access-vt4vk" (OuterVolumeSpecName: "kube-api-access-vt4vk") pod "db557844-83ff-415f-8484-b348d32eda3c" (UID: "db557844-83ff-415f-8484-b348d32eda3c"). InnerVolumeSpecName "kube-api-access-vt4vk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.313460 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.318689 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.393892 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt4vk\" (UniqueName: \"kubernetes.io/projected/db557844-83ff-415f-8484-b348d32eda3c-kube-api-access-vt4vk\") on node \"crc\" DevicePath \"\"" Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.460069 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 19 19:52:54 crc kubenswrapper[4813]: E0219 19:52:54.460389 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db557844-83ff-415f-8484-b348d32eda3c" containerName="mariadb-client" Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.460401 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="db557844-83ff-415f-8484-b348d32eda3c" containerName="mariadb-client" Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.460710 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="db557844-83ff-415f-8484-b348d32eda3c" containerName="mariadb-client" Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.461316 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.472769 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.597265 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9bg6\" (UniqueName: \"kubernetes.io/projected/a1337faf-94ec-457a-80bd-b1d268610b4b-kube-api-access-b9bg6\") pod \"mariadb-client\" (UID: \"a1337faf-94ec-457a-80bd-b1d268610b4b\") " pod="openstack/mariadb-client" Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.699909 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9bg6\" (UniqueName: \"kubernetes.io/projected/a1337faf-94ec-457a-80bd-b1d268610b4b-kube-api-access-b9bg6\") pod \"mariadb-client\" (UID: \"a1337faf-94ec-457a-80bd-b1d268610b4b\") " pod="openstack/mariadb-client" Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.718300 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9bg6\" (UniqueName: \"kubernetes.io/projected/a1337faf-94ec-457a-80bd-b1d268610b4b-kube-api-access-b9bg6\") pod \"mariadb-client\" (UID: \"a1337faf-94ec-457a-80bd-b1d268610b4b\") " pod="openstack/mariadb-client" Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.777609 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.980716 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5f2536b36053a9e60a5fa4116dc17993764cc75724e7ea1eb4740d44e2182d4" Feb 19 19:52:54 crc kubenswrapper[4813]: I0219 19:52:54.980737 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 19 19:52:55 crc kubenswrapper[4813]: I0219 19:52:54.999542 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="db557844-83ff-415f-8484-b348d32eda3c" podUID="a1337faf-94ec-457a-80bd-b1d268610b4b" Feb 19 19:52:55 crc kubenswrapper[4813]: I0219 19:52:55.183230 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 19 19:52:55 crc kubenswrapper[4813]: W0219 19:52:55.184209 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1337faf_94ec_457a_80bd_b1d268610b4b.slice/crio-ed882adb673e395b8fc6c6babf7d334c002aa120ac1c1fc807ba5a32c66c373d WatchSource:0}: Error finding container ed882adb673e395b8fc6c6babf7d334c002aa120ac1c1fc807ba5a32c66c373d: Status 404 returned error can't find the container with id ed882adb673e395b8fc6c6babf7d334c002aa120ac1c1fc807ba5a32c66c373d Feb 19 19:52:55 crc kubenswrapper[4813]: I0219 19:52:55.484592 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db557844-83ff-415f-8484-b348d32eda3c" path="/var/lib/kubelet/pods/db557844-83ff-415f-8484-b348d32eda3c/volumes" Feb 19 19:52:55 crc kubenswrapper[4813]: I0219 19:52:55.988813 4813 generic.go:334] "Generic (PLEG): container finished" podID="a1337faf-94ec-457a-80bd-b1d268610b4b" containerID="c81dc93003c8d76bf94c290f39f28870e6ae9af1912a47540b03399aa7ca818a" exitCode=0 Feb 19 19:52:55 crc kubenswrapper[4813]: I0219 19:52:55.988858 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"a1337faf-94ec-457a-80bd-b1d268610b4b","Type":"ContainerDied","Data":"c81dc93003c8d76bf94c290f39f28870e6ae9af1912a47540b03399aa7ca818a"} Feb 19 19:52:55 crc kubenswrapper[4813]: I0219 19:52:55.988987 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" 
event={"ID":"a1337faf-94ec-457a-80bd-b1d268610b4b","Type":"ContainerStarted","Data":"ed882adb673e395b8fc6c6babf7d334c002aa120ac1c1fc807ba5a32c66c373d"} Feb 19 19:52:57 crc kubenswrapper[4813]: I0219 19:52:57.281202 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 19 19:52:57 crc kubenswrapper[4813]: I0219 19:52:57.299807 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_a1337faf-94ec-457a-80bd-b1d268610b4b/mariadb-client/0.log" Feb 19 19:52:57 crc kubenswrapper[4813]: I0219 19:52:57.328876 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 19 19:52:57 crc kubenswrapper[4813]: I0219 19:52:57.334460 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 19 19:52:57 crc kubenswrapper[4813]: I0219 19:52:57.370769 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9bg6\" (UniqueName: \"kubernetes.io/projected/a1337faf-94ec-457a-80bd-b1d268610b4b-kube-api-access-b9bg6\") pod \"a1337faf-94ec-457a-80bd-b1d268610b4b\" (UID: \"a1337faf-94ec-457a-80bd-b1d268610b4b\") " Feb 19 19:52:57 crc kubenswrapper[4813]: I0219 19:52:57.376098 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1337faf-94ec-457a-80bd-b1d268610b4b-kube-api-access-b9bg6" (OuterVolumeSpecName: "kube-api-access-b9bg6") pod "a1337faf-94ec-457a-80bd-b1d268610b4b" (UID: "a1337faf-94ec-457a-80bd-b1d268610b4b"). InnerVolumeSpecName "kube-api-access-b9bg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:52:57 crc kubenswrapper[4813]: I0219 19:52:57.472277 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9bg6\" (UniqueName: \"kubernetes.io/projected/a1337faf-94ec-457a-80bd-b1d268610b4b-kube-api-access-b9bg6\") on node \"crc\" DevicePath \"\"" Feb 19 19:52:57 crc kubenswrapper[4813]: I0219 19:52:57.479789 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1337faf-94ec-457a-80bd-b1d268610b4b" path="/var/lib/kubelet/pods/a1337faf-94ec-457a-80bd-b1d268610b4b/volumes" Feb 19 19:52:58 crc kubenswrapper[4813]: I0219 19:52:58.004576 4813 scope.go:117] "RemoveContainer" containerID="c81dc93003c8d76bf94c290f39f28870e6ae9af1912a47540b03399aa7ca818a" Feb 19 19:52:58 crc kubenswrapper[4813]: I0219 19:52:58.004584 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 19 19:53:00 crc kubenswrapper[4813]: I0219 19:53:00.329670 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:53:00 crc kubenswrapper[4813]: I0219 19:53:00.330013 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:53:30 crc kubenswrapper[4813]: I0219 19:53:30.329724 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 19 19:53:30 crc kubenswrapper[4813]: I0219 19:53:30.330306 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.424829 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 19:53:32 crc kubenswrapper[4813]: E0219 19:53:32.425467 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1337faf-94ec-457a-80bd-b1d268610b4b" containerName="mariadb-client" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.425486 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1337faf-94ec-457a-80bd-b1d268610b4b" containerName="mariadb-client" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.425641 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1337faf-94ec-457a-80bd-b1d268610b4b" containerName="mariadb-client" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.426421 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.430041 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-lsczt" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.430206 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.436331 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.439890 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.441698 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.449167 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.452289 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.458844 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.490679 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.507913 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.517774 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-819a378c-c178-46d5-b32d-bbd86b10547f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-819a378c-c178-46d5-b32d-bbd86b10547f\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.517830 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eef6fa0f-4b41-488c-afd1-1179ea364d95-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.517868 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbfw9\" (UniqueName: \"kubernetes.io/projected/9ea9898d-86e7-4fa2-965f-43b2d6e44046-kube-api-access-bbfw9\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.517911 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef6fa0f-4b41-488c-afd1-1179ea364d95-combined-ca-bundle\") 
pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.518019 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfh75\" (UniqueName: \"kubernetes.io/projected/eef6fa0f-4b41-488c-afd1-1179ea364d95-kube-api-access-wfh75\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.518063 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ea9898d-86e7-4fa2-965f-43b2d6e44046-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.518082 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eef6fa0f-4b41-488c-afd1-1179ea364d95-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.518112 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ea9898d-86e7-4fa2-965f-43b2d6e44046-config\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.518232 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-560256b2-f952-49fd-a598-bfb4133e3753\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-560256b2-f952-49fd-a598-bfb4133e3753\") pod \"ovsdbserver-nb-0\" (UID: 
\"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.518275 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef6fa0f-4b41-488c-afd1-1179ea364d95-config\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.518312 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea9898d-86e7-4fa2-965f-43b2d6e44046-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.518351 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ea9898d-86e7-4fa2-965f-43b2d6e44046-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625010 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ea9898d-86e7-4fa2-965f-43b2d6e44046-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625050 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eef6fa0f-4b41-488c-afd1-1179ea364d95-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625076 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ea9898d-86e7-4fa2-965f-43b2d6e44046-config\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625116 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-560256b2-f952-49fd-a598-bfb4133e3753\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-560256b2-f952-49fd-a598-bfb4133e3753\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625149 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef6fa0f-4b41-488c-afd1-1179ea364d95-config\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625182 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea9898d-86e7-4fa2-965f-43b2d6e44046-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625200 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ea9898d-86e7-4fa2-965f-43b2d6e44046-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625221 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/67be32e9-4e1a-424b-8381-259b527565c6-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625236 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67be32e9-4e1a-424b-8381-259b527565c6-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625276 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqlhf\" (UniqueName: \"kubernetes.io/projected/67be32e9-4e1a-424b-8381-259b527565c6-kube-api-access-gqlhf\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625304 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-819a378c-c178-46d5-b32d-bbd86b10547f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-819a378c-c178-46d5-b32d-bbd86b10547f\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625325 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eef6fa0f-4b41-488c-afd1-1179ea364d95-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625342 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67be32e9-4e1a-424b-8381-259b527565c6-config\") pod 
\"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625359 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbfw9\" (UniqueName: \"kubernetes.io/projected/9ea9898d-86e7-4fa2-965f-43b2d6e44046-kube-api-access-bbfw9\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625397 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef6fa0f-4b41-488c-afd1-1179ea364d95-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625425 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-592b7f5f-5f08-47d4-8262-f6817a758799\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7f5f-5f08-47d4-8262-f6817a758799\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625448 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67be32e9-4e1a-424b-8381-259b527565c6-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.625484 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfh75\" (UniqueName: \"kubernetes.io/projected/eef6fa0f-4b41-488c-afd1-1179ea364d95-kube-api-access-wfh75\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " 
pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.626524 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ea9898d-86e7-4fa2-965f-43b2d6e44046-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.627111 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eef6fa0f-4b41-488c-afd1-1179ea364d95-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.628175 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eef6fa0f-4b41-488c-afd1-1179ea364d95-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.629232 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ea9898d-86e7-4fa2-965f-43b2d6e44046-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.629675 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ea9898d-86e7-4fa2-965f-43b2d6e44046-config\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.630605 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef6fa0f-4b41-488c-afd1-1179ea364d95-config\") 
pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.631192 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.631219 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-819a378c-c178-46d5-b32d-bbd86b10547f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-819a378c-c178-46d5-b32d-bbd86b10547f\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/718c2219c175375cbebd9ceed8109a2e2e946c1ceac2bd9aa41c5d417f3b3446/globalmount\"" pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.631461 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.631669 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-560256b2-f952-49fd-a598-bfb4133e3753\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-560256b2-f952-49fd-a598-bfb4133e3753\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f68d61d7304d18dcc2c453254d6fee7ae375d76b25aa84e2b5b1bb5ea4576bc9/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.633817 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.635332 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.636016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea9898d-86e7-4fa2-965f-43b2d6e44046-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.636253 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef6fa0f-4b41-488c-afd1-1179ea364d95-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.639084 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.639339 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.639729 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-kqpqc" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.659721 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbfw9\" (UniqueName: \"kubernetes.io/projected/9ea9898d-86e7-4fa2-965f-43b2d6e44046-kube-api-access-bbfw9\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.666559 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfh75\" (UniqueName: \"kubernetes.io/projected/eef6fa0f-4b41-488c-afd1-1179ea364d95-kube-api-access-wfh75\") pod \"ovsdbserver-nb-2\" (UID: 
\"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.673900 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.686064 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.687711 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.696485 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.699107 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.705839 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-560256b2-f952-49fd-a598-bfb4133e3753\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-560256b2-f952-49fd-a598-bfb4133e3753\") pod \"ovsdbserver-nb-0\" (UID: \"9ea9898d-86e7-4fa2-965f-43b2d6e44046\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.706028 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.713535 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-819a378c-c178-46d5-b32d-bbd86b10547f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-819a378c-c178-46d5-b32d-bbd86b10547f\") pod \"ovsdbserver-nb-2\" (UID: \"eef6fa0f-4b41-488c-afd1-1179ea364d95\") " pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.724032 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 19 19:53:32 crc 
kubenswrapper[4813]: I0219 19:53:32.726592 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02fe8e04-6814-4371-b705-24f31ab7cdb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.726637 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02fe8e04-6814-4371-b705-24f31ab7cdb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.726707 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x94jd\" (UniqueName: \"kubernetes.io/projected/02fe8e04-6814-4371-b705-24f31ab7cdb5-kube-api-access-x94jd\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.726745 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67be32e9-4e1a-424b-8381-259b527565c6-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.726932 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67be32e9-4e1a-424b-8381-259b527565c6-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.727205 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-gqlhf\" (UniqueName: \"kubernetes.io/projected/67be32e9-4e1a-424b-8381-259b527565c6-kube-api-access-gqlhf\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.727303 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67be32e9-4e1a-424b-8381-259b527565c6-config\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.727416 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0320635a-d7e1-49c0-9a3b-8657bcaa5ef8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0320635a-d7e1-49c0-9a3b-8657bcaa5ef8\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.727475 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/67be32e9-4e1a-424b-8381-259b527565c6-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.727525 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/02fe8e04-6814-4371-b705-24f31ab7cdb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.727612 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-592b7f5f-5f08-47d4-8262-f6817a758799\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7f5f-5f08-47d4-8262-f6817a758799\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.727657 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67be32e9-4e1a-424b-8381-259b527565c6-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.727686 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fe8e04-6814-4371-b705-24f31ab7cdb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.728394 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67be32e9-4e1a-424b-8381-259b527565c6-config\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.728674 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67be32e9-4e1a-424b-8381-259b527565c6-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.729865 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.729914 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-592b7f5f-5f08-47d4-8262-f6817a758799\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7f5f-5f08-47d4-8262-f6817a758799\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/40529a794f09fcb646b2aec77de093fb21b783da91a240d4fd2f0a16272c50b0/globalmount\"" pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.731747 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67be32e9-4e1a-424b-8381-259b527565c6-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.745002 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqlhf\" (UniqueName: \"kubernetes.io/projected/67be32e9-4e1a-424b-8381-259b527565c6-kube-api-access-gqlhf\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.745397 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.763844 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-592b7f5f-5f08-47d4-8262-f6817a758799\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-592b7f5f-5f08-47d4-8262-f6817a758799\") pod \"ovsdbserver-nb-1\" (UID: \"67be32e9-4e1a-424b-8381-259b527565c6\") " pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.771111 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.779545 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.829839 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h67wp\" (UniqueName: \"kubernetes.io/projected/9e34a64c-2d41-4679-bece-0a45b44d6f81-kube-api-access-h67wp\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.829901 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0320635a-d7e1-49c0-9a3b-8657bcaa5ef8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0320635a-d7e1-49c0-9a3b-8657bcaa5ef8\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.830057 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0dfcae18-3ae4-47a8-925b-cd9c210a879b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0dfcae18-3ae4-47a8-925b-cd9c210a879b\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.830095 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9e34a64c-2d41-4679-bece-0a45b44d6f81-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.830129 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/02fe8e04-6814-4371-b705-24f31ab7cdb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.830160 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fe8e04-6814-4371-b705-24f31ab7cdb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.830230 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0051fa4c-43a4-449a-bde5-68883d14e44c-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.830264 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e34a64c-2d41-4679-bece-0a45b44d6f81-config\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.830285 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02fe8e04-6814-4371-b705-24f31ab7cdb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.830304 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02fe8e04-6814-4371-b705-24f31ab7cdb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " 
pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.830364 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e34a64c-2d41-4679-bece-0a45b44d6f81-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.830379 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0051fa4c-43a4-449a-bde5-68883d14e44c-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.830411 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e34a64c-2d41-4679-bece-0a45b44d6f81-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.830466 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a746d467-2280-4ebe-927f-cf70daa043a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a746d467-2280-4ebe-927f-cf70daa043a5\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.830493 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x94jd\" (UniqueName: \"kubernetes.io/projected/02fe8e04-6814-4371-b705-24f31ab7cdb5-kube-api-access-x94jd\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc 
kubenswrapper[4813]: I0219 19:53:32.830520 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0051fa4c-43a4-449a-bde5-68883d14e44c-config\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.830546 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0051fa4c-43a4-449a-bde5-68883d14e44c-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.830560 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn8hh\" (UniqueName: \"kubernetes.io/projected/0051fa4c-43a4-449a-bde5-68883d14e44c-kube-api-access-wn8hh\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.832048 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/02fe8e04-6814-4371-b705-24f31ab7cdb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.832726 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02fe8e04-6814-4371-b705-24f31ab7cdb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.833973 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.834005 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0320635a-d7e1-49c0-9a3b-8657bcaa5ef8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0320635a-d7e1-49c0-9a3b-8657bcaa5ef8\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e8f6717746cb4067fc5f8006ed4dbb1cbf8eb89453fae2e38adea96505870e18/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.834309 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02fe8e04-6814-4371-b705-24f31ab7cdb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.835093 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02fe8e04-6814-4371-b705-24f31ab7cdb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.847711 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x94jd\" (UniqueName: \"kubernetes.io/projected/02fe8e04-6814-4371-b705-24f31ab7cdb5-kube-api-access-x94jd\") pod \"ovsdbserver-sb-0\" (UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.862876 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0320635a-d7e1-49c0-9a3b-8657bcaa5ef8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0320635a-d7e1-49c0-9a3b-8657bcaa5ef8\") pod \"ovsdbserver-sb-0\" 
(UID: \"02fe8e04-6814-4371-b705-24f31ab7cdb5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.932271 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h67wp\" (UniqueName: \"kubernetes.io/projected/9e34a64c-2d41-4679-bece-0a45b44d6f81-kube-api-access-h67wp\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.933069 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0dfcae18-3ae4-47a8-925b-cd9c210a879b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0dfcae18-3ae4-47a8-925b-cd9c210a879b\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.933128 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9e34a64c-2d41-4679-bece-0a45b44d6f81-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.933230 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0051fa4c-43a4-449a-bde5-68883d14e44c-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.933316 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e34a64c-2d41-4679-bece-0a45b44d6f81-config\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.933712 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9e34a64c-2d41-4679-bece-0a45b44d6f81-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.934808 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e34a64c-2d41-4679-bece-0a45b44d6f81-config\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.935360 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0051fa4c-43a4-449a-bde5-68883d14e44c-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.935536 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0051fa4c-43a4-449a-bde5-68883d14e44c-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.935587 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e34a64c-2d41-4679-bece-0a45b44d6f81-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.935622 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.935722 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0dfcae18-3ae4-47a8-925b-cd9c210a879b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0dfcae18-3ae4-47a8-925b-cd9c210a879b\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d8b0c5e844f1adff081de7bb2b3c97bf7e7d6fca003839f72978c38b70f0400c/globalmount\"" pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.936067 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0051fa4c-43a4-449a-bde5-68883d14e44c-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.936475 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e34a64c-2d41-4679-bece-0a45b44d6f81-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.936553 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a746d467-2280-4ebe-927f-cf70daa043a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a746d467-2280-4ebe-927f-cf70daa043a5\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.936599 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0051fa4c-43a4-449a-bde5-68883d14e44c-config\") pod \"ovsdbserver-sb-2\" (UID: 
\"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.936628 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0051fa4c-43a4-449a-bde5-68883d14e44c-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.936646 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn8hh\" (UniqueName: \"kubernetes.io/projected/0051fa4c-43a4-449a-bde5-68883d14e44c-kube-api-access-wn8hh\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.937936 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0051fa4c-43a4-449a-bde5-68883d14e44c-config\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.939439 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e34a64c-2d41-4679-bece-0a45b44d6f81-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.940484 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.940524 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a746d467-2280-4ebe-927f-cf70daa043a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a746d467-2280-4ebe-927f-cf70daa043a5\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f84e18d2fa2fb4e9723a60b55dc5f10009d59fc669270e2406d798c48b2eda4/globalmount\"" pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.943658 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0051fa4c-43a4-449a-bde5-68883d14e44c-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.944036 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e34a64c-2d41-4679-bece-0a45b44d6f81-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.948637 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h67wp\" (UniqueName: \"kubernetes.io/projected/9e34a64c-2d41-4679-bece-0a45b44d6f81-kube-api-access-h67wp\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.958609 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn8hh\" (UniqueName: \"kubernetes.io/projected/0051fa4c-43a4-449a-bde5-68883d14e44c-kube-api-access-wn8hh\") pod \"ovsdbserver-sb-2\" (UID: 
\"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.978985 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0dfcae18-3ae4-47a8-925b-cd9c210a879b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0dfcae18-3ae4-47a8-925b-cd9c210a879b\") pod \"ovsdbserver-sb-1\" (UID: \"9e34a64c-2d41-4679-bece-0a45b44d6f81\") " pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:32 crc kubenswrapper[4813]: I0219 19:53:32.986090 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a746d467-2280-4ebe-927f-cf70daa043a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a746d467-2280-4ebe-927f-cf70daa043a5\") pod \"ovsdbserver-sb-2\" (UID: \"0051fa4c-43a4-449a-bde5-68883d14e44c\") " pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:33 crc kubenswrapper[4813]: I0219 19:53:33.100418 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 19:53:33 crc kubenswrapper[4813]: I0219 19:53:33.101560 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:33 crc kubenswrapper[4813]: I0219 19:53:33.111992 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:33 crc kubenswrapper[4813]: I0219 19:53:33.118536 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:33 crc kubenswrapper[4813]: I0219 19:53:33.276921 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9ea9898d-86e7-4fa2-965f-43b2d6e44046","Type":"ContainerStarted","Data":"f9a0f011921ca57282e3ae4e39bcbdd1b4edc8b94b35621fc871a64b96ec8743"} Feb 19 19:53:33 crc kubenswrapper[4813]: I0219 19:53:33.359263 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Feb 19 19:53:33 crc kubenswrapper[4813]: W0219 19:53:33.368122 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67be32e9_4e1a_424b_8381_259b527565c6.slice/crio-40ec4535b5b90d9cf87f8434f7ffee8405941d51dfdff1549942dbe732659e89 WatchSource:0}: Error finding container 40ec4535b5b90d9cf87f8434f7ffee8405941d51dfdff1549942dbe732659e89: Status 404 returned error can't find the container with id 40ec4535b5b90d9cf87f8434f7ffee8405941d51dfdff1549942dbe732659e89 Feb 19 19:53:33 crc kubenswrapper[4813]: I0219 19:53:33.461923 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 19 19:53:33 crc kubenswrapper[4813]: I0219 19:53:33.672283 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 19 19:53:33 crc kubenswrapper[4813]: I0219 19:53:33.766977 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.288043 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"67be32e9-4e1a-424b-8381-259b527565c6","Type":"ContainerStarted","Data":"ede40cc5dd7862af409b8c5911e71e728223900e3a1347354356546055e1fced"} Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.288103 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" 
event={"ID":"67be32e9-4e1a-424b-8381-259b527565c6","Type":"ContainerStarted","Data":"7d07b3329f80e67ab3d7aba805a5f38a2efbfc9855bb763125e0ae3e414fcb1c"} Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.288119 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"67be32e9-4e1a-424b-8381-259b527565c6","Type":"ContainerStarted","Data":"40ec4535b5b90d9cf87f8434f7ffee8405941d51dfdff1549942dbe732659e89"} Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.290405 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"9e34a64c-2d41-4679-bece-0a45b44d6f81","Type":"ContainerStarted","Data":"8af7112d8afaae6842b7bcf5b3a90db2147ea59382a650cf0c0771596ebaee76"} Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.290446 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"9e34a64c-2d41-4679-bece-0a45b44d6f81","Type":"ContainerStarted","Data":"d720fc09b492e2b9efe19d4a022ef973afeb1b806b67491ef6d214ff73ca2030"} Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.290456 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"9e34a64c-2d41-4679-bece-0a45b44d6f81","Type":"ContainerStarted","Data":"ec640a2e3eb48fc6e4b6832a006848b5b16fc6ae76e59002de6fd34170e9b5da"} Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.292854 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"0051fa4c-43a4-449a-bde5-68883d14e44c","Type":"ContainerStarted","Data":"fc1e864ab8afc346767d1bd33370ee1c22be52b6ac67fc9f3b6f038163cad08a"} Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.292896 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"0051fa4c-43a4-449a-bde5-68883d14e44c","Type":"ContainerStarted","Data":"69fe7cbf42a121f6a9368e1db2e79d81a2e218110445c2b5a9d596bc41bbdc06"} Feb 19 19:53:34 crc 
kubenswrapper[4813]: I0219 19:53:34.292910 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"0051fa4c-43a4-449a-bde5-68883d14e44c","Type":"ContainerStarted","Data":"1929ed14dbf0213f3419ad1a00f77b46064c77ac0b80dee106347e3fb2b71f61"} Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.295202 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"eef6fa0f-4b41-488c-afd1-1179ea364d95","Type":"ContainerStarted","Data":"37c79fb1c7e22acc4b8009f4a3871059ba6fd197186157d4981d604843c031db"} Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.295328 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"eef6fa0f-4b41-488c-afd1-1179ea364d95","Type":"ContainerStarted","Data":"ba5cb1a9e8650c1b132638bdd472b0a333b6442751b21fbd908950166918543e"} Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.295416 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"eef6fa0f-4b41-488c-afd1-1179ea364d95","Type":"ContainerStarted","Data":"4b25bf3f70ccd11b956e0d7596983247dae6e669bc3e744534a5d7f1ecb386bc"} Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.297968 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9ea9898d-86e7-4fa2-965f-43b2d6e44046","Type":"ContainerStarted","Data":"49338ba8c4ce029b1470c7f2bf149e56146977768ebd34781fdf0485d804fc83"} Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.298065 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9ea9898d-86e7-4fa2-965f-43b2d6e44046","Type":"ContainerStarted","Data":"29998fb7ed93168e43322d98ed5e9703cd14057c3d0766dc78202ef3d83485e5"} Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.311687 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.311654865 
podStartE2EDuration="3.311654865s" podCreationTimestamp="2026-02-19 19:53:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:53:34.307306261 +0000 UTC m=+5033.532746802" watchObservedRunningTime="2026-02-19 19:53:34.311654865 +0000 UTC m=+5033.537095436" Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.340171 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.340155117 podStartE2EDuration="3.340155117s" podCreationTimestamp="2026-02-19 19:53:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:53:34.335837944 +0000 UTC m=+5033.561278475" watchObservedRunningTime="2026-02-19 19:53:34.340155117 +0000 UTC m=+5033.565595658" Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.372888 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.372867149 podStartE2EDuration="3.372867149s" podCreationTimestamp="2026-02-19 19:53:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:53:34.36159693 +0000 UTC m=+5033.587037471" watchObservedRunningTime="2026-02-19 19:53:34.372867149 +0000 UTC m=+5033.598307700" Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.406148 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.4061261480000002 podStartE2EDuration="3.406126148s" podCreationTimestamp="2026-02-19 19:53:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:53:34.404124266 +0000 UTC m=+5033.629564807" watchObservedRunningTime="2026-02-19 
19:53:34.406126148 +0000 UTC m=+5033.631566689" Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.472064 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.4720341279999998 podStartE2EDuration="3.472034128s" podCreationTimestamp="2026-02-19 19:53:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:53:34.460360866 +0000 UTC m=+5033.685801667" watchObservedRunningTime="2026-02-19 19:53:34.472034128 +0000 UTC m=+5033.697474669" Feb 19 19:53:34 crc kubenswrapper[4813]: I0219 19:53:34.523414 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 19:53:34 crc kubenswrapper[4813]: W0219 19:53:34.527495 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02fe8e04_6814_4371_b705_24f31ab7cdb5.slice/crio-d81dc3b99a56d42ee21f9c442b324cc2246c7b8f8080b5c109c576bdb8fdf790 WatchSource:0}: Error finding container d81dc3b99a56d42ee21f9c442b324cc2246c7b8f8080b5c109c576bdb8fdf790: Status 404 returned error can't find the container with id d81dc3b99a56d42ee21f9c442b324cc2246c7b8f8080b5c109c576bdb8fdf790 Feb 19 19:53:35 crc kubenswrapper[4813]: I0219 19:53:35.308966 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"02fe8e04-6814-4371-b705-24f31ab7cdb5","Type":"ContainerStarted","Data":"7d4e7469001b9ce0df32f1fb906210375094721d839fe4be00817f0ff295a278"} Feb 19 19:53:35 crc kubenswrapper[4813]: I0219 19:53:35.309286 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"02fe8e04-6814-4371-b705-24f31ab7cdb5","Type":"ContainerStarted","Data":"e9d5283097421c824e1a562ab2495806d92ed409a12295957b1df5ea8789e0ee"} Feb 19 19:53:35 crc kubenswrapper[4813]: I0219 19:53:35.309300 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"02fe8e04-6814-4371-b705-24f31ab7cdb5","Type":"ContainerStarted","Data":"d81dc3b99a56d42ee21f9c442b324cc2246c7b8f8080b5c109c576bdb8fdf790"} Feb 19 19:53:35 crc kubenswrapper[4813]: I0219 19:53:35.329966 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.329935311 podStartE2EDuration="4.329935311s" podCreationTimestamp="2026-02-19 19:53:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:53:35.32474269 +0000 UTC m=+5034.550183271" watchObservedRunningTime="2026-02-19 19:53:35.329935311 +0000 UTC m=+5034.555375852" Feb 19 19:53:35 crc kubenswrapper[4813]: I0219 19:53:35.745512 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:35 crc kubenswrapper[4813]: I0219 19:53:35.772094 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:35 crc kubenswrapper[4813]: I0219 19:53:35.779985 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:36 crc kubenswrapper[4813]: I0219 19:53:36.101895 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:36 crc kubenswrapper[4813]: I0219 19:53:36.112601 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:36 crc kubenswrapper[4813]: I0219 19:53:36.119124 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:36 crc kubenswrapper[4813]: I0219 19:53:36.158684 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:36 crc 
kubenswrapper[4813]: I0219 19:53:36.162675 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:36 crc kubenswrapper[4813]: I0219 19:53:36.314256 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:36 crc kubenswrapper[4813]: I0219 19:53:36.314581 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:37 crc kubenswrapper[4813]: I0219 19:53:37.746381 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:37 crc kubenswrapper[4813]: I0219 19:53:37.771896 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:37 crc kubenswrapper[4813]: I0219 19:53:37.780257 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.101757 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.158215 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.159236 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.361403 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8558946c59-qgl2g"] Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.362941 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.370567 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.399328 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8558946c59-qgl2g"] Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.447583 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xppxf\" (UniqueName: \"kubernetes.io/projected/54d49449-4feb-410b-98bb-f17db85e7d38-kube-api-access-xppxf\") pod \"dnsmasq-dns-8558946c59-qgl2g\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.447684 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-config\") pod \"dnsmasq-dns-8558946c59-qgl2g\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.447735 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-dns-svc\") pod \"dnsmasq-dns-8558946c59-qgl2g\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.447765 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-ovsdbserver-sb\") pod \"dnsmasq-dns-8558946c59-qgl2g\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " 
pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.549064 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-config\") pod \"dnsmasq-dns-8558946c59-qgl2g\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.549127 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-dns-svc\") pod \"dnsmasq-dns-8558946c59-qgl2g\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.549148 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-ovsdbserver-sb\") pod \"dnsmasq-dns-8558946c59-qgl2g\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.549325 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xppxf\" (UniqueName: \"kubernetes.io/projected/54d49449-4feb-410b-98bb-f17db85e7d38-kube-api-access-xppxf\") pod \"dnsmasq-dns-8558946c59-qgl2g\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.550060 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-dns-svc\") pod \"dnsmasq-dns-8558946c59-qgl2g\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 
19:53:38.550186 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-config\") pod \"dnsmasq-dns-8558946c59-qgl2g\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.550189 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-ovsdbserver-sb\") pod \"dnsmasq-dns-8558946c59-qgl2g\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.568636 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xppxf\" (UniqueName: \"kubernetes.io/projected/54d49449-4feb-410b-98bb-f17db85e7d38-kube-api-access-xppxf\") pod \"dnsmasq-dns-8558946c59-qgl2g\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.693648 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.793608 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.824037 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.827615 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.841098 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.870621 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Feb 19 19:53:38 crc kubenswrapper[4813]: I0219 19:53:38.887586 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.065714 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8558946c59-qgl2g"] Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.107087 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85cf84b645-6nmn2"] Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.108824 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.111789 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.114605 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85cf84b645-6nmn2"] Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.151734 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.169449 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-ovsdbserver-nb\") pod \"dnsmasq-dns-85cf84b645-6nmn2\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.169514 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbmxn\" (UniqueName: \"kubernetes.io/projected/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-kube-api-access-fbmxn\") pod \"dnsmasq-dns-85cf84b645-6nmn2\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.169543 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-config\") pod \"dnsmasq-dns-85cf84b645-6nmn2\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.169584 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-dns-svc\") pod \"dnsmasq-dns-85cf84b645-6nmn2\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.169640 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-ovsdbserver-sb\") pod \"dnsmasq-dns-85cf84b645-6nmn2\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.174984 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8558946c59-qgl2g"] Feb 19 19:53:39 crc kubenswrapper[4813]: W0219 19:53:39.193307 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54d49449_4feb_410b_98bb_f17db85e7d38.slice/crio-645a83b6462d1a05375b30b491569d676cde79be1b858b2310df7fc43708d164 WatchSource:0}: Error finding container 645a83b6462d1a05375b30b491569d676cde79be1b858b2310df7fc43708d164: Status 404 returned error can't find the container with id 645a83b6462d1a05375b30b491569d676cde79be1b858b2310df7fc43708d164 Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.271023 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-ovsdbserver-sb\") pod \"dnsmasq-dns-85cf84b645-6nmn2\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.271084 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-ovsdbserver-nb\") pod 
\"dnsmasq-dns-85cf84b645-6nmn2\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.271138 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbmxn\" (UniqueName: \"kubernetes.io/projected/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-kube-api-access-fbmxn\") pod \"dnsmasq-dns-85cf84b645-6nmn2\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.271210 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-config\") pod \"dnsmasq-dns-85cf84b645-6nmn2\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.271264 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-dns-svc\") pod \"dnsmasq-dns-85cf84b645-6nmn2\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.271903 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-ovsdbserver-sb\") pod \"dnsmasq-dns-85cf84b645-6nmn2\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.272016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-dns-svc\") pod \"dnsmasq-dns-85cf84b645-6nmn2\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " 
pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.272275 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-ovsdbserver-nb\") pod \"dnsmasq-dns-85cf84b645-6nmn2\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.272531 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-config\") pod \"dnsmasq-dns-85cf84b645-6nmn2\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.287524 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbmxn\" (UniqueName: \"kubernetes.io/projected/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-kube-api-access-fbmxn\") pod \"dnsmasq-dns-85cf84b645-6nmn2\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.333897 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8558946c59-qgl2g" event={"ID":"54d49449-4feb-410b-98bb-f17db85e7d38","Type":"ContainerStarted","Data":"645a83b6462d1a05375b30b491569d676cde79be1b858b2310df7fc43708d164"} Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.380039 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.452382 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:39 crc kubenswrapper[4813]: I0219 19:53:39.906604 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85cf84b645-6nmn2"] Feb 19 19:53:39 crc kubenswrapper[4813]: W0219 19:53:39.909471 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bab7d0f_10c0_4ef4_b064_4c4c529cd802.slice/crio-2c9c131509d4701fb14b6e69329a89aefd2ae86236c0db7653cd5a050eb4d87d WatchSource:0}: Error finding container 2c9c131509d4701fb14b6e69329a89aefd2ae86236c0db7653cd5a050eb4d87d: Status 404 returned error can't find the container with id 2c9c131509d4701fb14b6e69329a89aefd2ae86236c0db7653cd5a050eb4d87d Feb 19 19:53:40 crc kubenswrapper[4813]: I0219 19:53:40.344288 4813 generic.go:334] "Generic (PLEG): container finished" podID="54d49449-4feb-410b-98bb-f17db85e7d38" containerID="c06306773476268d8ab90f80ccb591e7ae1dfd1298a8979d4b9166c554a172f3" exitCode=0 Feb 19 19:53:40 crc kubenswrapper[4813]: I0219 19:53:40.344405 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8558946c59-qgl2g" event={"ID":"54d49449-4feb-410b-98bb-f17db85e7d38","Type":"ContainerDied","Data":"c06306773476268d8ab90f80ccb591e7ae1dfd1298a8979d4b9166c554a172f3"} Feb 19 19:53:40 crc kubenswrapper[4813]: I0219 19:53:40.345945 4813 generic.go:334] "Generic (PLEG): container finished" podID="2bab7d0f-10c0-4ef4-b064-4c4c529cd802" containerID="63c01e307710cc125889c70172ccd73bb6a587c502a1ff283d0ebe56e1c4caf0" exitCode=0 Feb 19 19:53:40 crc kubenswrapper[4813]: I0219 19:53:40.346070 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" event={"ID":"2bab7d0f-10c0-4ef4-b064-4c4c529cd802","Type":"ContainerDied","Data":"63c01e307710cc125889c70172ccd73bb6a587c502a1ff283d0ebe56e1c4caf0"} Feb 19 19:53:40 crc kubenswrapper[4813]: I0219 19:53:40.346129 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" event={"ID":"2bab7d0f-10c0-4ef4-b064-4c4c529cd802","Type":"ContainerStarted","Data":"2c9c131509d4701fb14b6e69329a89aefd2ae86236c0db7653cd5a050eb4d87d"} Feb 19 19:53:40 crc kubenswrapper[4813]: I0219 19:53:40.747378 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:40 crc kubenswrapper[4813]: I0219 19:53:40.898663 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-dns-svc\") pod \"54d49449-4feb-410b-98bb-f17db85e7d38\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " Feb 19 19:53:40 crc kubenswrapper[4813]: I0219 19:53:40.898851 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xppxf\" (UniqueName: \"kubernetes.io/projected/54d49449-4feb-410b-98bb-f17db85e7d38-kube-api-access-xppxf\") pod \"54d49449-4feb-410b-98bb-f17db85e7d38\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " Feb 19 19:53:40 crc kubenswrapper[4813]: I0219 19:53:40.898897 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-ovsdbserver-sb\") pod \"54d49449-4feb-410b-98bb-f17db85e7d38\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " Feb 19 19:53:40 crc kubenswrapper[4813]: I0219 19:53:40.899029 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-config\") pod \"54d49449-4feb-410b-98bb-f17db85e7d38\" (UID: \"54d49449-4feb-410b-98bb-f17db85e7d38\") " Feb 19 19:53:40 crc kubenswrapper[4813]: I0219 19:53:40.903200 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/54d49449-4feb-410b-98bb-f17db85e7d38-kube-api-access-xppxf" (OuterVolumeSpecName: "kube-api-access-xppxf") pod "54d49449-4feb-410b-98bb-f17db85e7d38" (UID: "54d49449-4feb-410b-98bb-f17db85e7d38"). InnerVolumeSpecName "kube-api-access-xppxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:53:40 crc kubenswrapper[4813]: I0219 19:53:40.918384 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "54d49449-4feb-410b-98bb-f17db85e7d38" (UID: "54d49449-4feb-410b-98bb-f17db85e7d38"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:53:40 crc kubenswrapper[4813]: I0219 19:53:40.921523 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "54d49449-4feb-410b-98bb-f17db85e7d38" (UID: "54d49449-4feb-410b-98bb-f17db85e7d38"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:53:40 crc kubenswrapper[4813]: I0219 19:53:40.921735 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-config" (OuterVolumeSpecName: "config") pod "54d49449-4feb-410b-98bb-f17db85e7d38" (UID: "54d49449-4feb-410b-98bb-f17db85e7d38"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.001339 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.001385 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.001398 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xppxf\" (UniqueName: \"kubernetes.io/projected/54d49449-4feb-410b-98bb-f17db85e7d38-kube-api-access-xppxf\") on node \"crc\" DevicePath \"\"" Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.001414 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54d49449-4feb-410b-98bb-f17db85e7d38-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.354959 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" event={"ID":"2bab7d0f-10c0-4ef4-b064-4c4c529cd802","Type":"ContainerStarted","Data":"e84066aef76cafeacb0e80151e834e087ba14d52a7324c8f495f3306c528f556"} Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.355626 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.356992 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8558946c59-qgl2g" event={"ID":"54d49449-4feb-410b-98bb-f17db85e7d38","Type":"ContainerDied","Data":"645a83b6462d1a05375b30b491569d676cde79be1b858b2310df7fc43708d164"} Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.357032 4813 scope.go:117] 
"RemoveContainer" containerID="c06306773476268d8ab90f80ccb591e7ae1dfd1298a8979d4b9166c554a172f3" Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.357155 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8558946c59-qgl2g" Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.394514 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" podStartSLOduration=2.394493511 podStartE2EDuration="2.394493511s" podCreationTimestamp="2026-02-19 19:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:53:41.388723072 +0000 UTC m=+5040.614163623" watchObservedRunningTime="2026-02-19 19:53:41.394493511 +0000 UTC m=+5040.619934052" Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.426628 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8558946c59-qgl2g"] Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.433244 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8558946c59-qgl2g"] Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.489158 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54d49449-4feb-410b-98bb-f17db85e7d38" path="/var/lib/kubelet/pods/54d49449-4feb-410b-98bb-f17db85e7d38/volumes" Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.978153 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Feb 19 19:53:41 crc kubenswrapper[4813]: E0219 19:53:41.978510 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d49449-4feb-410b-98bb-f17db85e7d38" containerName="init" Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.978527 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d49449-4feb-410b-98bb-f17db85e7d38" containerName="init" Feb 19 19:53:41 crc kubenswrapper[4813]: 
I0219 19:53:41.978722 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d49449-4feb-410b-98bb-f17db85e7d38" containerName="init" Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.979379 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.987231 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Feb 19 19:53:41 crc kubenswrapper[4813]: I0219 19:53:41.995739 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Feb 19 19:53:42 crc kubenswrapper[4813]: I0219 19:53:42.145586 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/fe0e72fd-4a3a-45e2-84fb-27f878d6abed-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\") " pod="openstack/ovn-copy-data" Feb 19 19:53:42 crc kubenswrapper[4813]: I0219 19:53:42.145641 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbkzv\" (UniqueName: \"kubernetes.io/projected/fe0e72fd-4a3a-45e2-84fb-27f878d6abed-kube-api-access-mbkzv\") pod \"ovn-copy-data\" (UID: \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\") " pod="openstack/ovn-copy-data" Feb 19 19:53:42 crc kubenswrapper[4813]: I0219 19:53:42.145708 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec\") pod \"ovn-copy-data\" (UID: \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\") " pod="openstack/ovn-copy-data" Feb 19 19:53:42 crc kubenswrapper[4813]: I0219 19:53:42.247063 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: 
\"kubernetes.io/secret/fe0e72fd-4a3a-45e2-84fb-27f878d6abed-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\") " pod="openstack/ovn-copy-data" Feb 19 19:53:42 crc kubenswrapper[4813]: I0219 19:53:42.247160 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbkzv\" (UniqueName: \"kubernetes.io/projected/fe0e72fd-4a3a-45e2-84fb-27f878d6abed-kube-api-access-mbkzv\") pod \"ovn-copy-data\" (UID: \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\") " pod="openstack/ovn-copy-data" Feb 19 19:53:42 crc kubenswrapper[4813]: I0219 19:53:42.247273 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec\") pod \"ovn-copy-data\" (UID: \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\") " pod="openstack/ovn-copy-data" Feb 19 19:53:42 crc kubenswrapper[4813]: I0219 19:53:42.250915 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 19:53:42 crc kubenswrapper[4813]: I0219 19:53:42.250990 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec\") pod \"ovn-copy-data\" (UID: \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c46c3d641878f1e77bb673cf7eda3edbaa5bde8247fa9d9af7c507d31cd6622b/globalmount\"" pod="openstack/ovn-copy-data" Feb 19 19:53:42 crc kubenswrapper[4813]: I0219 19:53:42.255202 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/fe0e72fd-4a3a-45e2-84fb-27f878d6abed-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\") " pod="openstack/ovn-copy-data" Feb 19 19:53:42 crc kubenswrapper[4813]: I0219 19:53:42.262211 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbkzv\" (UniqueName: \"kubernetes.io/projected/fe0e72fd-4a3a-45e2-84fb-27f878d6abed-kube-api-access-mbkzv\") pod \"ovn-copy-data\" (UID: \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\") " pod="openstack/ovn-copy-data" Feb 19 19:53:42 crc kubenswrapper[4813]: I0219 19:53:42.286323 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec\") pod \"ovn-copy-data\" (UID: \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\") " pod="openstack/ovn-copy-data" Feb 19 19:53:42 crc kubenswrapper[4813]: I0219 19:53:42.304373 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Feb 19 19:53:42 crc kubenswrapper[4813]: I0219 19:53:42.791652 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Feb 19 19:53:43 crc kubenswrapper[4813]: I0219 19:53:43.374439 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"fe0e72fd-4a3a-45e2-84fb-27f878d6abed","Type":"ContainerStarted","Data":"87b7988136874be2257e26257e75812aa348d480b35f1ed7ce1a86c17af51ebb"} Feb 19 19:53:44 crc kubenswrapper[4813]: I0219 19:53:44.383854 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"fe0e72fd-4a3a-45e2-84fb-27f878d6abed","Type":"ContainerStarted","Data":"dd0bdf42f1cf69ed721181e0595d5ee8635bad4e970d40bcf6a76058e01dd77d"} Feb 19 19:53:44 crc kubenswrapper[4813]: I0219 19:53:44.402281 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.711545301 podStartE2EDuration="4.402257961s" podCreationTimestamp="2026-02-19 19:53:40 +0000 UTC" firstStartedPulling="2026-02-19 19:53:42.80361815 +0000 UTC m=+5042.029058691" lastFinishedPulling="2026-02-19 19:53:43.49433079 +0000 UTC m=+5042.719771351" observedRunningTime="2026-02-19 19:53:44.397326158 +0000 UTC m=+5043.622766719" watchObservedRunningTime="2026-02-19 19:53:44.402257961 +0000 UTC m=+5043.627698522" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.001560 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.003803 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.015133 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2njpb" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.019038 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.019123 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.050681 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.157579 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0b4f5f49-12cd-4e82-aaef-d52b3f186786-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0b4f5f49-12cd-4e82-aaef-d52b3f186786\") " pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.157649 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4f5f49-12cd-4e82-aaef-d52b3f186786-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0b4f5f49-12cd-4e82-aaef-d52b3f186786\") " pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.157682 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b4f5f49-12cd-4e82-aaef-d52b3f186786-config\") pod \"ovn-northd-0\" (UID: \"0b4f5f49-12cd-4e82-aaef-d52b3f186786\") " pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.157702 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hktx8\" (UniqueName: \"kubernetes.io/projected/0b4f5f49-12cd-4e82-aaef-d52b3f186786-kube-api-access-hktx8\") pod \"ovn-northd-0\" (UID: \"0b4f5f49-12cd-4e82-aaef-d52b3f186786\") " pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.157801 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b4f5f49-12cd-4e82-aaef-d52b3f186786-scripts\") pod \"ovn-northd-0\" (UID: \"0b4f5f49-12cd-4e82-aaef-d52b3f186786\") " pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.259366 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0b4f5f49-12cd-4e82-aaef-d52b3f186786-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0b4f5f49-12cd-4e82-aaef-d52b3f186786\") " pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.259451 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4f5f49-12cd-4e82-aaef-d52b3f186786-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0b4f5f49-12cd-4e82-aaef-d52b3f186786\") " pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.259488 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b4f5f49-12cd-4e82-aaef-d52b3f186786-config\") pod \"ovn-northd-0\" (UID: \"0b4f5f49-12cd-4e82-aaef-d52b3f186786\") " pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.259508 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktx8\" (UniqueName: \"kubernetes.io/projected/0b4f5f49-12cd-4e82-aaef-d52b3f186786-kube-api-access-hktx8\") pod \"ovn-northd-0\" (UID: \"0b4f5f49-12cd-4e82-aaef-d52b3f186786\") " 
pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.259577 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b4f5f49-12cd-4e82-aaef-d52b3f186786-scripts\") pod \"ovn-northd-0\" (UID: \"0b4f5f49-12cd-4e82-aaef-d52b3f186786\") " pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.260441 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b4f5f49-12cd-4e82-aaef-d52b3f186786-scripts\") pod \"ovn-northd-0\" (UID: \"0b4f5f49-12cd-4e82-aaef-d52b3f186786\") " pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.260749 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0b4f5f49-12cd-4e82-aaef-d52b3f186786-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0b4f5f49-12cd-4e82-aaef-d52b3f186786\") " pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.261356 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b4f5f49-12cd-4e82-aaef-d52b3f186786-config\") pod \"ovn-northd-0\" (UID: \"0b4f5f49-12cd-4e82-aaef-d52b3f186786\") " pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.266216 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4f5f49-12cd-4e82-aaef-d52b3f186786-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0b4f5f49-12cd-4e82-aaef-d52b3f186786\") " pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.280930 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hktx8\" (UniqueName: \"kubernetes.io/projected/0b4f5f49-12cd-4e82-aaef-d52b3f186786-kube-api-access-hktx8\") 
pod \"ovn-northd-0\" (UID: \"0b4f5f49-12cd-4e82-aaef-d52b3f186786\") " pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.350565 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.453781 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.528797 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-f99dr"] Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.548299 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" podUID="cf1972b0-8fc7-4e43-a622-5458944ae7b0" containerName="dnsmasq-dns" containerID="cri-o://207aff66b25c1b036dc15542a495daa84a391f90838130d362fd41dda6a18a8d" gracePeriod=10 Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.891851 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 19:53:49 crc kubenswrapper[4813]: W0219 19:53:49.900646 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b4f5f49_12cd_4e82_aaef_d52b3f186786.slice/crio-a3b053ecfb229c53ff8d85f456ae114c096fe212f85f9878e33797419aedafc5 WatchSource:0}: Error finding container a3b053ecfb229c53ff8d85f456ae114c096fe212f85f9878e33797419aedafc5: Status 404 returned error can't find the container with id a3b053ecfb229c53ff8d85f456ae114c096fe212f85f9878e33797419aedafc5 Feb 19 19:53:49 crc kubenswrapper[4813]: I0219 19:53:49.983415 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.071603 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5qlh\" (UniqueName: \"kubernetes.io/projected/cf1972b0-8fc7-4e43-a622-5458944ae7b0-kube-api-access-q5qlh\") pod \"cf1972b0-8fc7-4e43-a622-5458944ae7b0\" (UID: \"cf1972b0-8fc7-4e43-a622-5458944ae7b0\") " Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.071783 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf1972b0-8fc7-4e43-a622-5458944ae7b0-dns-svc\") pod \"cf1972b0-8fc7-4e43-a622-5458944ae7b0\" (UID: \"cf1972b0-8fc7-4e43-a622-5458944ae7b0\") " Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.071837 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf1972b0-8fc7-4e43-a622-5458944ae7b0-config\") pod \"cf1972b0-8fc7-4e43-a622-5458944ae7b0\" (UID: \"cf1972b0-8fc7-4e43-a622-5458944ae7b0\") " Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.077096 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1972b0-8fc7-4e43-a622-5458944ae7b0-kube-api-access-q5qlh" (OuterVolumeSpecName: "kube-api-access-q5qlh") pod "cf1972b0-8fc7-4e43-a622-5458944ae7b0" (UID: "cf1972b0-8fc7-4e43-a622-5458944ae7b0"). InnerVolumeSpecName "kube-api-access-q5qlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.116051 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1972b0-8fc7-4e43-a622-5458944ae7b0-config" (OuterVolumeSpecName: "config") pod "cf1972b0-8fc7-4e43-a622-5458944ae7b0" (UID: "cf1972b0-8fc7-4e43-a622-5458944ae7b0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.136196 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1972b0-8fc7-4e43-a622-5458944ae7b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf1972b0-8fc7-4e43-a622-5458944ae7b0" (UID: "cf1972b0-8fc7-4e43-a622-5458944ae7b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.174220 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf1972b0-8fc7-4e43-a622-5458944ae7b0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.174264 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf1972b0-8fc7-4e43-a622-5458944ae7b0-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.174279 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5qlh\" (UniqueName: \"kubernetes.io/projected/cf1972b0-8fc7-4e43-a622-5458944ae7b0-kube-api-access-q5qlh\") on node \"crc\" DevicePath \"\"" Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.425194 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0b4f5f49-12cd-4e82-aaef-d52b3f186786","Type":"ContainerStarted","Data":"b1f9db40aa098f6285869d3d14119bdff2d9e1d601beb238fdd6d688d1f5d0da"} Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.425264 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.425283 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"0b4f5f49-12cd-4e82-aaef-d52b3f186786","Type":"ContainerStarted","Data":"0f1becefd034f469687af4ca0712b5b94041674fa2268bf116f1533628db8e4b"} Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.425300 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0b4f5f49-12cd-4e82-aaef-d52b3f186786","Type":"ContainerStarted","Data":"a3b053ecfb229c53ff8d85f456ae114c096fe212f85f9878e33797419aedafc5"} Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.427314 4813 generic.go:334] "Generic (PLEG): container finished" podID="cf1972b0-8fc7-4e43-a622-5458944ae7b0" containerID="207aff66b25c1b036dc15542a495daa84a391f90838130d362fd41dda6a18a8d" exitCode=0 Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.427349 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.427369 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" event={"ID":"cf1972b0-8fc7-4e43-a622-5458944ae7b0","Type":"ContainerDied","Data":"207aff66b25c1b036dc15542a495daa84a391f90838130d362fd41dda6a18a8d"} Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.427404 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dc9c94cc-f99dr" event={"ID":"cf1972b0-8fc7-4e43-a622-5458944ae7b0","Type":"ContainerDied","Data":"217605aad397b13616f854447bbe3f419441dee703d769000e87e6d926cb1479"} Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.427435 4813 scope.go:117] "RemoveContainer" containerID="207aff66b25c1b036dc15542a495daa84a391f90838130d362fd41dda6a18a8d" Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.467738 4813 scope.go:117] "RemoveContainer" containerID="b7b346904238edd9593370bf0c108e4eedf9bd5f427850f541ac6ce218df9b46" Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.501164 4813 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.50114201 podStartE2EDuration="2.50114201s" podCreationTimestamp="2026-02-19 19:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:53:50.470767441 +0000 UTC m=+5049.696208002" watchObservedRunningTime="2026-02-19 19:53:50.50114201 +0000 UTC m=+5049.726582561" Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.502433 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-f99dr"] Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.511763 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54dc9c94cc-f99dr"] Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.529419 4813 scope.go:117] "RemoveContainer" containerID="207aff66b25c1b036dc15542a495daa84a391f90838130d362fd41dda6a18a8d" Feb 19 19:53:50 crc kubenswrapper[4813]: E0219 19:53:50.529812 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"207aff66b25c1b036dc15542a495daa84a391f90838130d362fd41dda6a18a8d\": container with ID starting with 207aff66b25c1b036dc15542a495daa84a391f90838130d362fd41dda6a18a8d not found: ID does not exist" containerID="207aff66b25c1b036dc15542a495daa84a391f90838130d362fd41dda6a18a8d" Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.529856 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"207aff66b25c1b036dc15542a495daa84a391f90838130d362fd41dda6a18a8d"} err="failed to get container status \"207aff66b25c1b036dc15542a495daa84a391f90838130d362fd41dda6a18a8d\": rpc error: code = NotFound desc = could not find container \"207aff66b25c1b036dc15542a495daa84a391f90838130d362fd41dda6a18a8d\": container with ID starting with 207aff66b25c1b036dc15542a495daa84a391f90838130d362fd41dda6a18a8d not found: ID does not exist" 
Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.529882 4813 scope.go:117] "RemoveContainer" containerID="b7b346904238edd9593370bf0c108e4eedf9bd5f427850f541ac6ce218df9b46" Feb 19 19:53:50 crc kubenswrapper[4813]: E0219 19:53:50.530234 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7b346904238edd9593370bf0c108e4eedf9bd5f427850f541ac6ce218df9b46\": container with ID starting with b7b346904238edd9593370bf0c108e4eedf9bd5f427850f541ac6ce218df9b46 not found: ID does not exist" containerID="b7b346904238edd9593370bf0c108e4eedf9bd5f427850f541ac6ce218df9b46" Feb 19 19:53:50 crc kubenswrapper[4813]: I0219 19:53:50.530267 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7b346904238edd9593370bf0c108e4eedf9bd5f427850f541ac6ce218df9b46"} err="failed to get container status \"b7b346904238edd9593370bf0c108e4eedf9bd5f427850f541ac6ce218df9b46\": rpc error: code = NotFound desc = could not find container \"b7b346904238edd9593370bf0c108e4eedf9bd5f427850f541ac6ce218df9b46\": container with ID starting with b7b346904238edd9593370bf0c108e4eedf9bd5f427850f541ac6ce218df9b46 not found: ID does not exist" Feb 19 19:53:51 crc kubenswrapper[4813]: I0219 19:53:51.484575 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf1972b0-8fc7-4e43-a622-5458944ae7b0" path="/var/lib/kubelet/pods/cf1972b0-8fc7-4e43-a622-5458944ae7b0/volumes" Feb 19 19:53:52 crc kubenswrapper[4813]: I0219 19:53:52.228038 4813 scope.go:117] "RemoveContainer" containerID="7c9b70bfa6befc195a134bca63f405870b1806ab104f8a0a7c49c09b7106fecf" Feb 19 19:53:54 crc kubenswrapper[4813]: I0219 19:53:54.984075 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-n4xk5"] Feb 19 19:53:54 crc kubenswrapper[4813]: E0219 19:53:54.984749 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1972b0-8fc7-4e43-a622-5458944ae7b0" 
containerName="init"
Feb 19 19:53:54 crc kubenswrapper[4813]: I0219 19:53:54.984990 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1972b0-8fc7-4e43-a622-5458944ae7b0" containerName="init"
Feb 19 19:53:54 crc kubenswrapper[4813]: E0219 19:53:54.985018 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1972b0-8fc7-4e43-a622-5458944ae7b0" containerName="dnsmasq-dns"
Feb 19 19:53:54 crc kubenswrapper[4813]: I0219 19:53:54.985027 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1972b0-8fc7-4e43-a622-5458944ae7b0" containerName="dnsmasq-dns"
Feb 19 19:53:54 crc kubenswrapper[4813]: I0219 19:53:54.985224 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1972b0-8fc7-4e43-a622-5458944ae7b0" containerName="dnsmasq-dns"
Feb 19 19:53:54 crc kubenswrapper[4813]: I0219 19:53:54.985845 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n4xk5"
Feb 19 19:53:54 crc kubenswrapper[4813]: I0219 19:53:54.992625 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-n4xk5"]
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.065072 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48470170-5b98-4a7d-a359-b14d60bbf229-operator-scripts\") pod \"keystone-db-create-n4xk5\" (UID: \"48470170-5b98-4a7d-a359-b14d60bbf229\") " pod="openstack/keystone-db-create-n4xk5"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.065149 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9vk7\" (UniqueName: \"kubernetes.io/projected/48470170-5b98-4a7d-a359-b14d60bbf229-kube-api-access-h9vk7\") pod \"keystone-db-create-n4xk5\" (UID: \"48470170-5b98-4a7d-a359-b14d60bbf229\") " pod="openstack/keystone-db-create-n4xk5"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.095311 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1b68-account-create-update-gslmf"]
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.096541 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1b68-account-create-update-gslmf"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.098918 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.124052 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1b68-account-create-update-gslmf"]
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.168863 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48470170-5b98-4a7d-a359-b14d60bbf229-operator-scripts\") pod \"keystone-db-create-n4xk5\" (UID: \"48470170-5b98-4a7d-a359-b14d60bbf229\") " pod="openstack/keystone-db-create-n4xk5"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.171847 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48470170-5b98-4a7d-a359-b14d60bbf229-operator-scripts\") pod \"keystone-db-create-n4xk5\" (UID: \"48470170-5b98-4a7d-a359-b14d60bbf229\") " pod="openstack/keystone-db-create-n4xk5"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.171967 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9vk7\" (UniqueName: \"kubernetes.io/projected/48470170-5b98-4a7d-a359-b14d60bbf229-kube-api-access-h9vk7\") pod \"keystone-db-create-n4xk5\" (UID: \"48470170-5b98-4a7d-a359-b14d60bbf229\") " pod="openstack/keystone-db-create-n4xk5"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.195346 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9vk7\" (UniqueName: \"kubernetes.io/projected/48470170-5b98-4a7d-a359-b14d60bbf229-kube-api-access-h9vk7\") pod \"keystone-db-create-n4xk5\" (UID: \"48470170-5b98-4a7d-a359-b14d60bbf229\") " pod="openstack/keystone-db-create-n4xk5"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.273261 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frcnr\" (UniqueName: \"kubernetes.io/projected/815a91b3-eddd-434c-b30f-6b3a84c91efd-kube-api-access-frcnr\") pod \"keystone-1b68-account-create-update-gslmf\" (UID: \"815a91b3-eddd-434c-b30f-6b3a84c91efd\") " pod="openstack/keystone-1b68-account-create-update-gslmf"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.273346 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/815a91b3-eddd-434c-b30f-6b3a84c91efd-operator-scripts\") pod \"keystone-1b68-account-create-update-gslmf\" (UID: \"815a91b3-eddd-434c-b30f-6b3a84c91efd\") " pod="openstack/keystone-1b68-account-create-update-gslmf"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.302318 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n4xk5"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.375091 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frcnr\" (UniqueName: \"kubernetes.io/projected/815a91b3-eddd-434c-b30f-6b3a84c91efd-kube-api-access-frcnr\") pod \"keystone-1b68-account-create-update-gslmf\" (UID: \"815a91b3-eddd-434c-b30f-6b3a84c91efd\") " pod="openstack/keystone-1b68-account-create-update-gslmf"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.375164 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/815a91b3-eddd-434c-b30f-6b3a84c91efd-operator-scripts\") pod \"keystone-1b68-account-create-update-gslmf\" (UID: \"815a91b3-eddd-434c-b30f-6b3a84c91efd\") " pod="openstack/keystone-1b68-account-create-update-gslmf"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.376199 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/815a91b3-eddd-434c-b30f-6b3a84c91efd-operator-scripts\") pod \"keystone-1b68-account-create-update-gslmf\" (UID: \"815a91b3-eddd-434c-b30f-6b3a84c91efd\") " pod="openstack/keystone-1b68-account-create-update-gslmf"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.398446 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frcnr\" (UniqueName: \"kubernetes.io/projected/815a91b3-eddd-434c-b30f-6b3a84c91efd-kube-api-access-frcnr\") pod \"keystone-1b68-account-create-update-gslmf\" (UID: \"815a91b3-eddd-434c-b30f-6b3a84c91efd\") " pod="openstack/keystone-1b68-account-create-update-gslmf"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.452228 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1b68-account-create-update-gslmf"
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.796291 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-n4xk5"]
Feb 19 19:53:55 crc kubenswrapper[4813]: W0219 19:53:55.804702 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48470170_5b98_4a7d_a359_b14d60bbf229.slice/crio-4fef527906a9e851782b60608267c42f048074ac37e1f2339e9fd43ff87da2d0 WatchSource:0}: Error finding container 4fef527906a9e851782b60608267c42f048074ac37e1f2339e9fd43ff87da2d0: Status 404 returned error can't find the container with id 4fef527906a9e851782b60608267c42f048074ac37e1f2339e9fd43ff87da2d0
Feb 19 19:53:55 crc kubenswrapper[4813]: I0219 19:53:55.924628 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1b68-account-create-update-gslmf"]
Feb 19 19:53:55 crc kubenswrapper[4813]: W0219 19:53:55.927211 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod815a91b3_eddd_434c_b30f_6b3a84c91efd.slice/crio-8f749a4de2747deccae3e7a100495bf2b8b9ead1967b9a51a3da533f8eca0b9a WatchSource:0}: Error finding container 8f749a4de2747deccae3e7a100495bf2b8b9ead1967b9a51a3da533f8eca0b9a: Status 404 returned error can't find the container with id 8f749a4de2747deccae3e7a100495bf2b8b9ead1967b9a51a3da533f8eca0b9a
Feb 19 19:53:56 crc kubenswrapper[4813]: I0219 19:53:56.477090 4813 generic.go:334] "Generic (PLEG): container finished" podID="48470170-5b98-4a7d-a359-b14d60bbf229" containerID="3fdd14a3a0a78a2a46335d4ccb93ca0f6a434cad6a06adb36c1eabfd45db7cbe" exitCode=0
Feb 19 19:53:56 crc kubenswrapper[4813]: I0219 19:53:56.477160 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n4xk5" event={"ID":"48470170-5b98-4a7d-a359-b14d60bbf229","Type":"ContainerDied","Data":"3fdd14a3a0a78a2a46335d4ccb93ca0f6a434cad6a06adb36c1eabfd45db7cbe"}
Feb 19 19:53:56 crc kubenswrapper[4813]: I0219 19:53:56.477459 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n4xk5" event={"ID":"48470170-5b98-4a7d-a359-b14d60bbf229","Type":"ContainerStarted","Data":"4fef527906a9e851782b60608267c42f048074ac37e1f2339e9fd43ff87da2d0"}
Feb 19 19:53:56 crc kubenswrapper[4813]: I0219 19:53:56.480830 4813 generic.go:334] "Generic (PLEG): container finished" podID="815a91b3-eddd-434c-b30f-6b3a84c91efd" containerID="649cd16b233c4eca4393209d12c0e7988532812fafb39a3ef659a9c174df8a7c" exitCode=0
Feb 19 19:53:56 crc kubenswrapper[4813]: I0219 19:53:56.480883 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1b68-account-create-update-gslmf" event={"ID":"815a91b3-eddd-434c-b30f-6b3a84c91efd","Type":"ContainerDied","Data":"649cd16b233c4eca4393209d12c0e7988532812fafb39a3ef659a9c174df8a7c"}
Feb 19 19:53:56 crc kubenswrapper[4813]: I0219 19:53:56.480920 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1b68-account-create-update-gslmf" event={"ID":"815a91b3-eddd-434c-b30f-6b3a84c91efd","Type":"ContainerStarted","Data":"8f749a4de2747deccae3e7a100495bf2b8b9ead1967b9a51a3da533f8eca0b9a"}
Feb 19 19:53:57 crc kubenswrapper[4813]: I0219 19:53:57.874559 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n4xk5"
Feb 19 19:53:57 crc kubenswrapper[4813]: I0219 19:53:57.881437 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1b68-account-create-update-gslmf"
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.021637 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48470170-5b98-4a7d-a359-b14d60bbf229-operator-scripts\") pod \"48470170-5b98-4a7d-a359-b14d60bbf229\" (UID: \"48470170-5b98-4a7d-a359-b14d60bbf229\") "
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.022068 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9vk7\" (UniqueName: \"kubernetes.io/projected/48470170-5b98-4a7d-a359-b14d60bbf229-kube-api-access-h9vk7\") pod \"48470170-5b98-4a7d-a359-b14d60bbf229\" (UID: \"48470170-5b98-4a7d-a359-b14d60bbf229\") "
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.022194 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/815a91b3-eddd-434c-b30f-6b3a84c91efd-operator-scripts\") pod \"815a91b3-eddd-434c-b30f-6b3a84c91efd\" (UID: \"815a91b3-eddd-434c-b30f-6b3a84c91efd\") "
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.022352 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frcnr\" (UniqueName: \"kubernetes.io/projected/815a91b3-eddd-434c-b30f-6b3a84c91efd-kube-api-access-frcnr\") pod \"815a91b3-eddd-434c-b30f-6b3a84c91efd\" (UID: \"815a91b3-eddd-434c-b30f-6b3a84c91efd\") "
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.022687 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48470170-5b98-4a7d-a359-b14d60bbf229-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48470170-5b98-4a7d-a359-b14d60bbf229" (UID: "48470170-5b98-4a7d-a359-b14d60bbf229"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.022692 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/815a91b3-eddd-434c-b30f-6b3a84c91efd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "815a91b3-eddd-434c-b30f-6b3a84c91efd" (UID: "815a91b3-eddd-434c-b30f-6b3a84c91efd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.023012 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48470170-5b98-4a7d-a359-b14d60bbf229-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.023048 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/815a91b3-eddd-434c-b30f-6b3a84c91efd-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.028661 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48470170-5b98-4a7d-a359-b14d60bbf229-kube-api-access-h9vk7" (OuterVolumeSpecName: "kube-api-access-h9vk7") pod "48470170-5b98-4a7d-a359-b14d60bbf229" (UID: "48470170-5b98-4a7d-a359-b14d60bbf229"). InnerVolumeSpecName "kube-api-access-h9vk7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.028848 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815a91b3-eddd-434c-b30f-6b3a84c91efd-kube-api-access-frcnr" (OuterVolumeSpecName: "kube-api-access-frcnr") pod "815a91b3-eddd-434c-b30f-6b3a84c91efd" (UID: "815a91b3-eddd-434c-b30f-6b3a84c91efd"). InnerVolumeSpecName "kube-api-access-frcnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.126871 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frcnr\" (UniqueName: \"kubernetes.io/projected/815a91b3-eddd-434c-b30f-6b3a84c91efd-kube-api-access-frcnr\") on node \"crc\" DevicePath \"\""
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.126909 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9vk7\" (UniqueName: \"kubernetes.io/projected/48470170-5b98-4a7d-a359-b14d60bbf229-kube-api-access-h9vk7\") on node \"crc\" DevicePath \"\""
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.498781 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n4xk5"
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.498788 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n4xk5" event={"ID":"48470170-5b98-4a7d-a359-b14d60bbf229","Type":"ContainerDied","Data":"4fef527906a9e851782b60608267c42f048074ac37e1f2339e9fd43ff87da2d0"}
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.498885 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fef527906a9e851782b60608267c42f048074ac37e1f2339e9fd43ff87da2d0"
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.502282 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1b68-account-create-update-gslmf" event={"ID":"815a91b3-eddd-434c-b30f-6b3a84c91efd","Type":"ContainerDied","Data":"8f749a4de2747deccae3e7a100495bf2b8b9ead1967b9a51a3da533f8eca0b9a"}
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.502326 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f749a4de2747deccae3e7a100495bf2b8b9ead1967b9a51a3da533f8eca0b9a"
Feb 19 19:53:58 crc kubenswrapper[4813]: I0219 19:53:58.502329 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1b68-account-create-update-gslmf"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.329923 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.330316 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.330367 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.331091 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.331156 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" gracePeriod=600
Feb 19 19:54:00 crc kubenswrapper[4813]: E0219 19:54:00.457106 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.517164 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" exitCode=0
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.517204 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f"}
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.517235 4813 scope.go:117] "RemoveContainer" containerID="be38fd9999c563e4734db89d3c18752f33e9f41c50b92368d55ff07e1436c0c2"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.517979 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f"
Feb 19 19:54:00 crc kubenswrapper[4813]: E0219 19:54:00.518359 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.756320 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-459mq"]
Feb 19 19:54:00 crc kubenswrapper[4813]: E0219 19:54:00.756659 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48470170-5b98-4a7d-a359-b14d60bbf229" containerName="mariadb-database-create"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.756679 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="48470170-5b98-4a7d-a359-b14d60bbf229" containerName="mariadb-database-create"
Feb 19 19:54:00 crc kubenswrapper[4813]: E0219 19:54:00.756691 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815a91b3-eddd-434c-b30f-6b3a84c91efd" containerName="mariadb-account-create-update"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.756697 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="815a91b3-eddd-434c-b30f-6b3a84c91efd" containerName="mariadb-account-create-update"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.759452 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="815a91b3-eddd-434c-b30f-6b3a84c91efd" containerName="mariadb-account-create-update"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.759500 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="48470170-5b98-4a7d-a359-b14d60bbf229" containerName="mariadb-database-create"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.760558 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-459mq"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.765490 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.769661 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.769900 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.770120 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zjsd6"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.797241 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-459mq"]
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.872201 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66h8g\" (UniqueName: \"kubernetes.io/projected/8395bad4-6541-4a62-bd8a-f0185b9ccef5-kube-api-access-66h8g\") pod \"keystone-db-sync-459mq\" (UID: \"8395bad4-6541-4a62-bd8a-f0185b9ccef5\") " pod="openstack/keystone-db-sync-459mq"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.872289 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8395bad4-6541-4a62-bd8a-f0185b9ccef5-combined-ca-bundle\") pod \"keystone-db-sync-459mq\" (UID: \"8395bad4-6541-4a62-bd8a-f0185b9ccef5\") " pod="openstack/keystone-db-sync-459mq"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.872413 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8395bad4-6541-4a62-bd8a-f0185b9ccef5-config-data\") pod \"keystone-db-sync-459mq\" (UID: \"8395bad4-6541-4a62-bd8a-f0185b9ccef5\") " pod="openstack/keystone-db-sync-459mq"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.973913 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8395bad4-6541-4a62-bd8a-f0185b9ccef5-config-data\") pod \"keystone-db-sync-459mq\" (UID: \"8395bad4-6541-4a62-bd8a-f0185b9ccef5\") " pod="openstack/keystone-db-sync-459mq"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.974055 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66h8g\" (UniqueName: \"kubernetes.io/projected/8395bad4-6541-4a62-bd8a-f0185b9ccef5-kube-api-access-66h8g\") pod \"keystone-db-sync-459mq\" (UID: \"8395bad4-6541-4a62-bd8a-f0185b9ccef5\") " pod="openstack/keystone-db-sync-459mq"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.974097 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8395bad4-6541-4a62-bd8a-f0185b9ccef5-combined-ca-bundle\") pod \"keystone-db-sync-459mq\" (UID: \"8395bad4-6541-4a62-bd8a-f0185b9ccef5\") " pod="openstack/keystone-db-sync-459mq"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.980592 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8395bad4-6541-4a62-bd8a-f0185b9ccef5-combined-ca-bundle\") pod \"keystone-db-sync-459mq\" (UID: \"8395bad4-6541-4a62-bd8a-f0185b9ccef5\") " pod="openstack/keystone-db-sync-459mq"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.985348 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8395bad4-6541-4a62-bd8a-f0185b9ccef5-config-data\") pod \"keystone-db-sync-459mq\" (UID: \"8395bad4-6541-4a62-bd8a-f0185b9ccef5\") " pod="openstack/keystone-db-sync-459mq"
Feb 19 19:54:00 crc kubenswrapper[4813]: I0219 19:54:00.992938 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66h8g\" (UniqueName: \"kubernetes.io/projected/8395bad4-6541-4a62-bd8a-f0185b9ccef5-kube-api-access-66h8g\") pod \"keystone-db-sync-459mq\" (UID: \"8395bad4-6541-4a62-bd8a-f0185b9ccef5\") " pod="openstack/keystone-db-sync-459mq"
Feb 19 19:54:01 crc kubenswrapper[4813]: I0219 19:54:01.080359 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-459mq"
Feb 19 19:54:01 crc kubenswrapper[4813]: I0219 19:54:01.528431 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-459mq"]
Feb 19 19:54:02 crc kubenswrapper[4813]: I0219 19:54:02.534767 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-459mq" event={"ID":"8395bad4-6541-4a62-bd8a-f0185b9ccef5","Type":"ContainerStarted","Data":"a36c1c3af030f72abab22b0fb1f0485bc42c127f8c420510293f922eec434aa4"}
Feb 19 19:54:02 crc kubenswrapper[4813]: I0219 19:54:02.535151 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-459mq" event={"ID":"8395bad4-6541-4a62-bd8a-f0185b9ccef5","Type":"ContainerStarted","Data":"a81962f4a232852f259bf990bacf4d55c6599628bc6303320b9b550b580d3447"}
Feb 19 19:54:02 crc kubenswrapper[4813]: I0219 19:54:02.558780 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-459mq" podStartSLOduration=2.5587633260000002 podStartE2EDuration="2.558763326s" podCreationTimestamp="2026-02-19 19:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:54:02.552185614 +0000 UTC m=+5061.777626165" watchObservedRunningTime="2026-02-19 19:54:02.558763326 +0000 UTC m=+5061.784203867"
Feb 19 19:54:04 crc kubenswrapper[4813]: I0219 19:54:04.552295 4813 generic.go:334] "Generic (PLEG): container finished" podID="8395bad4-6541-4a62-bd8a-f0185b9ccef5" containerID="a36c1c3af030f72abab22b0fb1f0485bc42c127f8c420510293f922eec434aa4" exitCode=0
Feb 19 19:54:04 crc kubenswrapper[4813]: I0219 19:54:04.552395 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-459mq" event={"ID":"8395bad4-6541-4a62-bd8a-f0185b9ccef5","Type":"ContainerDied","Data":"a36c1c3af030f72abab22b0fb1f0485bc42c127f8c420510293f922eec434aa4"}
Feb 19 19:54:05 crc kubenswrapper[4813]: I0219 19:54:05.870821 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-459mq"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.052398 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8395bad4-6541-4a62-bd8a-f0185b9ccef5-combined-ca-bundle\") pod \"8395bad4-6541-4a62-bd8a-f0185b9ccef5\" (UID: \"8395bad4-6541-4a62-bd8a-f0185b9ccef5\") "
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.052798 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8395bad4-6541-4a62-bd8a-f0185b9ccef5-config-data\") pod \"8395bad4-6541-4a62-bd8a-f0185b9ccef5\" (UID: \"8395bad4-6541-4a62-bd8a-f0185b9ccef5\") "
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.052835 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66h8g\" (UniqueName: \"kubernetes.io/projected/8395bad4-6541-4a62-bd8a-f0185b9ccef5-kube-api-access-66h8g\") pod \"8395bad4-6541-4a62-bd8a-f0185b9ccef5\" (UID: \"8395bad4-6541-4a62-bd8a-f0185b9ccef5\") "
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.058378 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8395bad4-6541-4a62-bd8a-f0185b9ccef5-kube-api-access-66h8g" (OuterVolumeSpecName: "kube-api-access-66h8g") pod "8395bad4-6541-4a62-bd8a-f0185b9ccef5" (UID: "8395bad4-6541-4a62-bd8a-f0185b9ccef5"). InnerVolumeSpecName "kube-api-access-66h8g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.074906 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8395bad4-6541-4a62-bd8a-f0185b9ccef5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8395bad4-6541-4a62-bd8a-f0185b9ccef5" (UID: "8395bad4-6541-4a62-bd8a-f0185b9ccef5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.092380 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8395bad4-6541-4a62-bd8a-f0185b9ccef5-config-data" (OuterVolumeSpecName: "config-data") pod "8395bad4-6541-4a62-bd8a-f0185b9ccef5" (UID: "8395bad4-6541-4a62-bd8a-f0185b9ccef5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.154143 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8395bad4-6541-4a62-bd8a-f0185b9ccef5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.154174 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8395bad4-6541-4a62-bd8a-f0185b9ccef5-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.154183 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66h8g\" (UniqueName: \"kubernetes.io/projected/8395bad4-6541-4a62-bd8a-f0185b9ccef5-kube-api-access-66h8g\") on node \"crc\" DevicePath \"\""
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.566553 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-459mq" event={"ID":"8395bad4-6541-4a62-bd8a-f0185b9ccef5","Type":"ContainerDied","Data":"a81962f4a232852f259bf990bacf4d55c6599628bc6303320b9b550b580d3447"}
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.566612 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-459mq"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.566617 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a81962f4a232852f259bf990bacf4d55c6599628bc6303320b9b550b580d3447"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.812879 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-784d68bcdf-4qrcw"]
Feb 19 19:54:06 crc kubenswrapper[4813]: E0219 19:54:06.813495 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8395bad4-6541-4a62-bd8a-f0185b9ccef5" containerName="keystone-db-sync"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.813567 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8395bad4-6541-4a62-bd8a-f0185b9ccef5" containerName="keystone-db-sync"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.814816 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8395bad4-6541-4a62-bd8a-f0185b9ccef5" containerName="keystone-db-sync"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.815771 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.824685 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784d68bcdf-4qrcw"]
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.868217 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6gbnd"]
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.869156 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6gbnd"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.871404 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.875303 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zjsd6"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.875372 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.875507 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.875534 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.886281 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6gbnd"]
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.968624 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-config\") pod \"dnsmasq-dns-784d68bcdf-4qrcw\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.968850 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-dns-svc\") pod \"dnsmasq-dns-784d68bcdf-4qrcw\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.968931 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-scripts\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.969069 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzbzf\" (UniqueName: \"kubernetes.io/projected/65a5aa67-d081-4ae6-b906-845d798131fd-kube-api-access-zzbzf\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.969176 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-config-data\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.969287 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-combined-ca-bundle\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.969353 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-ovsdbserver-sb\") pod \"dnsmasq-dns-784d68bcdf-4qrcw\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.969495 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6p92\" (UniqueName: \"kubernetes.io/projected/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-kube-api-access-z6p92\") pod \"dnsmasq-dns-784d68bcdf-4qrcw\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.969543 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-credential-keys\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.969656 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-fernet-keys\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd"
Feb 19 19:54:06 crc kubenswrapper[4813]: I0219 19:54:06.969721 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-ovsdbserver-nb\") pod \"dnsmasq-dns-784d68bcdf-4qrcw\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw"
Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.071648 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-fernet-keys\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd"
Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.071713 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-ovsdbserver-nb\") pod \"dnsmasq-dns-784d68bcdf-4qrcw\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw"
Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.071744 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-config\") pod \"dnsmasq-dns-784d68bcdf-4qrcw\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw"
Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.071781 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-dns-svc\") pod \"dnsmasq-dns-784d68bcdf-4qrcw\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw"
Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.071812 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-scripts\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd"
Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.071845 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzbzf\" (UniqueName: \"kubernetes.io/projected/65a5aa67-d081-4ae6-b906-845d798131fd-kube-api-access-zzbzf\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd"
Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.071863 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-config-data\") pod
\"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.071920 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-combined-ca-bundle\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.071943 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-ovsdbserver-sb\") pod \"dnsmasq-dns-784d68bcdf-4qrcw\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.071996 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6p92\" (UniqueName: \"kubernetes.io/projected/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-kube-api-access-z6p92\") pod \"dnsmasq-dns-784d68bcdf-4qrcw\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.072022 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-credential-keys\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.073555 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-ovsdbserver-nb\") pod \"dnsmasq-dns-784d68bcdf-4qrcw\" (UID: 
\"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.073628 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-config\") pod \"dnsmasq-dns-784d68bcdf-4qrcw\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.073694 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-dns-svc\") pod \"dnsmasq-dns-784d68bcdf-4qrcw\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.073706 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-ovsdbserver-sb\") pod \"dnsmasq-dns-784d68bcdf-4qrcw\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.076272 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-scripts\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.076431 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-fernet-keys\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.076500 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-combined-ca-bundle\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.077424 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-credential-keys\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.077906 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-config-data\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.094551 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzbzf\" (UniqueName: \"kubernetes.io/projected/65a5aa67-d081-4ae6-b906-845d798131fd-kube-api-access-zzbzf\") pod \"keystone-bootstrap-6gbnd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " pod="openstack/keystone-bootstrap-6gbnd" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.094718 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6p92\" (UniqueName: \"kubernetes.io/projected/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-kube-api-access-z6p92\") pod \"dnsmasq-dns-784d68bcdf-4qrcw\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.137897 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.188099 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6gbnd" Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.583652 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784d68bcdf-4qrcw"] Feb 19 19:54:07 crc kubenswrapper[4813]: W0219 19:54:07.700152 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65a5aa67_d081_4ae6_b906_845d798131fd.slice/crio-7f059acc7fdcf2bc5c4c7acf545b118afa8a0b03250d074c6664102f3bb735ab WatchSource:0}: Error finding container 7f059acc7fdcf2bc5c4c7acf545b118afa8a0b03250d074c6664102f3bb735ab: Status 404 returned error can't find the container with id 7f059acc7fdcf2bc5c4c7acf545b118afa8a0b03250d074c6664102f3bb735ab Feb 19 19:54:07 crc kubenswrapper[4813]: I0219 19:54:07.700396 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6gbnd"] Feb 19 19:54:08 crc kubenswrapper[4813]: I0219 19:54:08.582329 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6gbnd" event={"ID":"65a5aa67-d081-4ae6-b906-845d798131fd","Type":"ContainerStarted","Data":"79fbfc08b43e363d77777ef16086d4ebd175a49dac210861dc287b9d77e246af"} Feb 19 19:54:08 crc kubenswrapper[4813]: I0219 19:54:08.582721 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6gbnd" event={"ID":"65a5aa67-d081-4ae6-b906-845d798131fd","Type":"ContainerStarted","Data":"7f059acc7fdcf2bc5c4c7acf545b118afa8a0b03250d074c6664102f3bb735ab"} Feb 19 19:54:08 crc kubenswrapper[4813]: I0219 19:54:08.583865 4813 generic.go:334] "Generic (PLEG): container finished" podID="7d7e98bc-abfe-4c60-9a7a-81ace51d739e" containerID="b9f890843e84e9790c17a3615a461b4b68fdb4c518b82c4b7d48745df23da9c8" exitCode=0 Feb 19 
19:54:08 crc kubenswrapper[4813]: I0219 19:54:08.583907 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" event={"ID":"7d7e98bc-abfe-4c60-9a7a-81ace51d739e","Type":"ContainerDied","Data":"b9f890843e84e9790c17a3615a461b4b68fdb4c518b82c4b7d48745df23da9c8"} Feb 19 19:54:08 crc kubenswrapper[4813]: I0219 19:54:08.583929 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" event={"ID":"7d7e98bc-abfe-4c60-9a7a-81ace51d739e","Type":"ContainerStarted","Data":"06ec38f780f400458793417dd4a339b68fb6ac1f37b5bacd5ad23a8eff95e1f2"} Feb 19 19:54:08 crc kubenswrapper[4813]: I0219 19:54:08.633257 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6gbnd" podStartSLOduration=2.633241404 podStartE2EDuration="2.633241404s" podCreationTimestamp="2026-02-19 19:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:54:08.613945507 +0000 UTC m=+5067.839386048" watchObservedRunningTime="2026-02-19 19:54:08.633241404 +0000 UTC m=+5067.858681945" Feb 19 19:54:09 crc kubenswrapper[4813]: I0219 19:54:09.419068 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 19:54:09 crc kubenswrapper[4813]: I0219 19:54:09.591924 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" event={"ID":"7d7e98bc-abfe-4c60-9a7a-81ace51d739e","Type":"ContainerStarted","Data":"0bda61f7326f1649a01a62ed74727ae21de8a335f85626ef04e7b5e00caccb76"} Feb 19 19:54:09 crc kubenswrapper[4813]: I0219 19:54:09.612283 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" podStartSLOduration=3.612262735 podStartE2EDuration="3.612262735s" podCreationTimestamp="2026-02-19 19:54:06 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:54:09.606001561 +0000 UTC m=+5068.831442102" watchObservedRunningTime="2026-02-19 19:54:09.612262735 +0000 UTC m=+5068.837703276" Feb 19 19:54:10 crc kubenswrapper[4813]: I0219 19:54:10.598560 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" Feb 19 19:54:11 crc kubenswrapper[4813]: I0219 19:54:11.606356 4813 generic.go:334] "Generic (PLEG): container finished" podID="65a5aa67-d081-4ae6-b906-845d798131fd" containerID="79fbfc08b43e363d77777ef16086d4ebd175a49dac210861dc287b9d77e246af" exitCode=0 Feb 19 19:54:11 crc kubenswrapper[4813]: I0219 19:54:11.606449 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6gbnd" event={"ID":"65a5aa67-d081-4ae6-b906-845d798131fd","Type":"ContainerDied","Data":"79fbfc08b43e363d77777ef16086d4ebd175a49dac210861dc287b9d77e246af"} Feb 19 19:54:12 crc kubenswrapper[4813]: I0219 19:54:12.940483 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6gbnd" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.067092 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-combined-ca-bundle\") pod \"65a5aa67-d081-4ae6-b906-845d798131fd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.067167 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-fernet-keys\") pod \"65a5aa67-d081-4ae6-b906-845d798131fd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.067245 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-config-data\") pod \"65a5aa67-d081-4ae6-b906-845d798131fd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.067303 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-credential-keys\") pod \"65a5aa67-d081-4ae6-b906-845d798131fd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.067430 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-scripts\") pod \"65a5aa67-d081-4ae6-b906-845d798131fd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.068069 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzbzf\" (UniqueName: 
\"kubernetes.io/projected/65a5aa67-d081-4ae6-b906-845d798131fd-kube-api-access-zzbzf\") pod \"65a5aa67-d081-4ae6-b906-845d798131fd\" (UID: \"65a5aa67-d081-4ae6-b906-845d798131fd\") " Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.073031 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "65a5aa67-d081-4ae6-b906-845d798131fd" (UID: "65a5aa67-d081-4ae6-b906-845d798131fd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.073191 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a5aa67-d081-4ae6-b906-845d798131fd-kube-api-access-zzbzf" (OuterVolumeSpecName: "kube-api-access-zzbzf") pod "65a5aa67-d081-4ae6-b906-845d798131fd" (UID: "65a5aa67-d081-4ae6-b906-845d798131fd"). InnerVolumeSpecName "kube-api-access-zzbzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.075172 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "65a5aa67-d081-4ae6-b906-845d798131fd" (UID: "65a5aa67-d081-4ae6-b906-845d798131fd"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.075797 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-scripts" (OuterVolumeSpecName: "scripts") pod "65a5aa67-d081-4ae6-b906-845d798131fd" (UID: "65a5aa67-d081-4ae6-b906-845d798131fd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.091769 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65a5aa67-d081-4ae6-b906-845d798131fd" (UID: "65a5aa67-d081-4ae6-b906-845d798131fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.094000 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-config-data" (OuterVolumeSpecName: "config-data") pod "65a5aa67-d081-4ae6-b906-845d798131fd" (UID: "65a5aa67-d081-4ae6-b906-845d798131fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.170390 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.170700 4813 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.170712 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.170724 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzbzf\" (UniqueName: \"kubernetes.io/projected/65a5aa67-d081-4ae6-b906-845d798131fd-kube-api-access-zzbzf\") on node \"crc\" DevicePath \"\"" Feb 19 
19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.170736 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.170748 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/65a5aa67-d081-4ae6-b906-845d798131fd-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.624302 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6gbnd" event={"ID":"65a5aa67-d081-4ae6-b906-845d798131fd","Type":"ContainerDied","Data":"7f059acc7fdcf2bc5c4c7acf545b118afa8a0b03250d074c6664102f3bb735ab"} Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.624370 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f059acc7fdcf2bc5c4c7acf545b118afa8a0b03250d074c6664102f3bb735ab" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.624472 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6gbnd" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.694871 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6gbnd"] Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.700629 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6gbnd"] Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.789554 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-p4js4"] Feb 19 19:54:13 crc kubenswrapper[4813]: E0219 19:54:13.789899 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a5aa67-d081-4ae6-b906-845d798131fd" containerName="keystone-bootstrap" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.789916 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a5aa67-d081-4ae6-b906-845d798131fd" containerName="keystone-bootstrap" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.790136 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a5aa67-d081-4ae6-b906-845d798131fd" containerName="keystone-bootstrap" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.790628 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.793814 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.794269 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.794788 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zjsd6" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.794945 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.795097 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.807358 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p4js4"] Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.881606 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-scripts\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.881662 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-config-data\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.881701 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-combined-ca-bundle\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.881730 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-fernet-keys\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.881807 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-credential-keys\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.881889 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8fl7\" (UniqueName: \"kubernetes.io/projected/f1db69cd-0899-4f7b-ac3f-fa9b75471765-kube-api-access-b8fl7\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.983833 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8fl7\" (UniqueName: \"kubernetes.io/projected/f1db69cd-0899-4f7b-ac3f-fa9b75471765-kube-api-access-b8fl7\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.983900 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-scripts\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.983943 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-config-data\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.984004 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-combined-ca-bundle\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.984038 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-fernet-keys\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.984072 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-credential-keys\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.988285 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-credential-keys\") pod \"keystone-bootstrap-p4js4\" (UID: 
\"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.988530 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-config-data\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.989269 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-scripts\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.995801 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-combined-ca-bundle\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:13 crc kubenswrapper[4813]: I0219 19:54:13.997658 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-fernet-keys\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:14 crc kubenswrapper[4813]: I0219 19:54:14.000768 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8fl7\" (UniqueName: \"kubernetes.io/projected/f1db69cd-0899-4f7b-ac3f-fa9b75471765-kube-api-access-b8fl7\") pod \"keystone-bootstrap-p4js4\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:14 crc kubenswrapper[4813]: I0219 19:54:14.112994 
4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:14 crc kubenswrapper[4813]: I0219 19:54:14.545512 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p4js4"] Feb 19 19:54:14 crc kubenswrapper[4813]: W0219 19:54:14.547336 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1db69cd_0899_4f7b_ac3f_fa9b75471765.slice/crio-8b3c517231c7d90a76ab74af66b61cc85a75f18f8ac4f50c137d6f08b27437c5 WatchSource:0}: Error finding container 8b3c517231c7d90a76ab74af66b61cc85a75f18f8ac4f50c137d6f08b27437c5: Status 404 returned error can't find the container with id 8b3c517231c7d90a76ab74af66b61cc85a75f18f8ac4f50c137d6f08b27437c5 Feb 19 19:54:14 crc kubenswrapper[4813]: I0219 19:54:14.631260 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p4js4" event={"ID":"f1db69cd-0899-4f7b-ac3f-fa9b75471765","Type":"ContainerStarted","Data":"8b3c517231c7d90a76ab74af66b61cc85a75f18f8ac4f50c137d6f08b27437c5"} Feb 19 19:54:15 crc kubenswrapper[4813]: I0219 19:54:15.472014 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:54:15 crc kubenswrapper[4813]: E0219 19:54:15.472306 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:54:15 crc kubenswrapper[4813]: I0219 19:54:15.485726 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65a5aa67-d081-4ae6-b906-845d798131fd" 
path="/var/lib/kubelet/pods/65a5aa67-d081-4ae6-b906-845d798131fd/volumes" Feb 19 19:54:15 crc kubenswrapper[4813]: I0219 19:54:15.639430 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p4js4" event={"ID":"f1db69cd-0899-4f7b-ac3f-fa9b75471765","Type":"ContainerStarted","Data":"ee915a7dc90b651974b775d56205a91aedde0e1ff436c0eccb0ee630dcd03fc2"} Feb 19 19:54:15 crc kubenswrapper[4813]: I0219 19:54:15.662050 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-p4js4" podStartSLOduration=2.662032947 podStartE2EDuration="2.662032947s" podCreationTimestamp="2026-02-19 19:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:54:15.653863344 +0000 UTC m=+5074.879303885" watchObservedRunningTime="2026-02-19 19:54:15.662032947 +0000 UTC m=+5074.887473498" Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.140667 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.203422 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85cf84b645-6nmn2"] Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.203657 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" podUID="2bab7d0f-10c0-4ef4-b064-4c4c529cd802" containerName="dnsmasq-dns" containerID="cri-o://e84066aef76cafeacb0e80151e834e087ba14d52a7324c8f495f3306c528f556" gracePeriod=10 Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.656600 4813 generic.go:334] "Generic (PLEG): container finished" podID="2bab7d0f-10c0-4ef4-b064-4c4c529cd802" containerID="e84066aef76cafeacb0e80151e834e087ba14d52a7324c8f495f3306c528f556" exitCode=0 Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.656680 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" event={"ID":"2bab7d0f-10c0-4ef4-b064-4c4c529cd802","Type":"ContainerDied","Data":"e84066aef76cafeacb0e80151e834e087ba14d52a7324c8f495f3306c528f556"} Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.738366 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.780103 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-ovsdbserver-sb\") pod \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.780155 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-config\") pod \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.780182 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-ovsdbserver-nb\") pod \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.780256 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-dns-svc\") pod \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.780283 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbmxn\" (UniqueName: 
\"kubernetes.io/projected/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-kube-api-access-fbmxn\") pod \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\" (UID: \"2bab7d0f-10c0-4ef4-b064-4c4c529cd802\") " Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.798664 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-kube-api-access-fbmxn" (OuterVolumeSpecName: "kube-api-access-fbmxn") pod "2bab7d0f-10c0-4ef4-b064-4c4c529cd802" (UID: "2bab7d0f-10c0-4ef4-b064-4c4c529cd802"). InnerVolumeSpecName "kube-api-access-fbmxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.839375 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-config" (OuterVolumeSpecName: "config") pod "2bab7d0f-10c0-4ef4-b064-4c4c529cd802" (UID: "2bab7d0f-10c0-4ef4-b064-4c4c529cd802"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.842847 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2bab7d0f-10c0-4ef4-b064-4c4c529cd802" (UID: "2bab7d0f-10c0-4ef4-b064-4c4c529cd802"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.850679 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2bab7d0f-10c0-4ef4-b064-4c4c529cd802" (UID: "2bab7d0f-10c0-4ef4-b064-4c4c529cd802"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.864231 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2bab7d0f-10c0-4ef4-b064-4c4c529cd802" (UID: "2bab7d0f-10c0-4ef4-b064-4c4c529cd802"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.881726 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.881757 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbmxn\" (UniqueName: \"kubernetes.io/projected/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-kube-api-access-fbmxn\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.881773 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.881791 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:17 crc kubenswrapper[4813]: I0219 19:54:17.881802 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2bab7d0f-10c0-4ef4-b064-4c4c529cd802-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:18 crc kubenswrapper[4813]: I0219 19:54:18.667894 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" Feb 19 19:54:18 crc kubenswrapper[4813]: I0219 19:54:18.667934 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85cf84b645-6nmn2" event={"ID":"2bab7d0f-10c0-4ef4-b064-4c4c529cd802","Type":"ContainerDied","Data":"2c9c131509d4701fb14b6e69329a89aefd2ae86236c0db7653cd5a050eb4d87d"} Feb 19 19:54:18 crc kubenswrapper[4813]: I0219 19:54:18.668022 4813 scope.go:117] "RemoveContainer" containerID="e84066aef76cafeacb0e80151e834e087ba14d52a7324c8f495f3306c528f556" Feb 19 19:54:18 crc kubenswrapper[4813]: I0219 19:54:18.670936 4813 generic.go:334] "Generic (PLEG): container finished" podID="f1db69cd-0899-4f7b-ac3f-fa9b75471765" containerID="ee915a7dc90b651974b775d56205a91aedde0e1ff436c0eccb0ee630dcd03fc2" exitCode=0 Feb 19 19:54:18 crc kubenswrapper[4813]: I0219 19:54:18.670971 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p4js4" event={"ID":"f1db69cd-0899-4f7b-ac3f-fa9b75471765","Type":"ContainerDied","Data":"ee915a7dc90b651974b775d56205a91aedde0e1ff436c0eccb0ee630dcd03fc2"} Feb 19 19:54:18 crc kubenswrapper[4813]: I0219 19:54:18.690642 4813 scope.go:117] "RemoveContainer" containerID="63c01e307710cc125889c70172ccd73bb6a587c502a1ff283d0ebe56e1c4caf0" Feb 19 19:54:18 crc kubenswrapper[4813]: I0219 19:54:18.713287 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85cf84b645-6nmn2"] Feb 19 19:54:18 crc kubenswrapper[4813]: I0219 19:54:18.728760 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85cf84b645-6nmn2"] Feb 19 19:54:19 crc kubenswrapper[4813]: I0219 19:54:19.483162 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bab7d0f-10c0-4ef4-b064-4c4c529cd802" path="/var/lib/kubelet/pods/2bab7d0f-10c0-4ef4-b064-4c4c529cd802/volumes" Feb 19 19:54:19 crc kubenswrapper[4813]: I0219 19:54:19.985414 4813 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.021869 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-credential-keys\") pod \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.022007 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-scripts\") pod \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.022051 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8fl7\" (UniqueName: \"kubernetes.io/projected/f1db69cd-0899-4f7b-ac3f-fa9b75471765-kube-api-access-b8fl7\") pod \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.022083 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-combined-ca-bundle\") pod \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.022113 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-config-data\") pod \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.022181 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-fernet-keys\") pod \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\" (UID: \"f1db69cd-0899-4f7b-ac3f-fa9b75471765\") " Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.027244 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-scripts" (OuterVolumeSpecName: "scripts") pod "f1db69cd-0899-4f7b-ac3f-fa9b75471765" (UID: "f1db69cd-0899-4f7b-ac3f-fa9b75471765"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.027900 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f1db69cd-0899-4f7b-ac3f-fa9b75471765" (UID: "f1db69cd-0899-4f7b-ac3f-fa9b75471765"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.028225 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1db69cd-0899-4f7b-ac3f-fa9b75471765-kube-api-access-b8fl7" (OuterVolumeSpecName: "kube-api-access-b8fl7") pod "f1db69cd-0899-4f7b-ac3f-fa9b75471765" (UID: "f1db69cd-0899-4f7b-ac3f-fa9b75471765"). InnerVolumeSpecName "kube-api-access-b8fl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.029267 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f1db69cd-0899-4f7b-ac3f-fa9b75471765" (UID: "f1db69cd-0899-4f7b-ac3f-fa9b75471765"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.046070 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1db69cd-0899-4f7b-ac3f-fa9b75471765" (UID: "f1db69cd-0899-4f7b-ac3f-fa9b75471765"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.050554 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-config-data" (OuterVolumeSpecName: "config-data") pod "f1db69cd-0899-4f7b-ac3f-fa9b75471765" (UID: "f1db69cd-0899-4f7b-ac3f-fa9b75471765"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.123090 4813 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.123117 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.123127 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8fl7\" (UniqueName: \"kubernetes.io/projected/f1db69cd-0899-4f7b-ac3f-fa9b75471765-kube-api-access-b8fl7\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.123137 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.123146 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.123154 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1db69cd-0899-4f7b-ac3f-fa9b75471765-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.689266 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p4js4" event={"ID":"f1db69cd-0899-4f7b-ac3f-fa9b75471765","Type":"ContainerDied","Data":"8b3c517231c7d90a76ab74af66b61cc85a75f18f8ac4f50c137d6f08b27437c5"} Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.689313 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3c517231c7d90a76ab74af66b61cc85a75f18f8ac4f50c137d6f08b27437c5" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.689339 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-p4js4" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.914514 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-648fd9799d-6xngg"] Feb 19 19:54:20 crc kubenswrapper[4813]: E0219 19:54:20.915141 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bab7d0f-10c0-4ef4-b064-4c4c529cd802" containerName="init" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.915164 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bab7d0f-10c0-4ef4-b064-4c4c529cd802" containerName="init" Feb 19 19:54:20 crc kubenswrapper[4813]: E0219 19:54:20.915175 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bab7d0f-10c0-4ef4-b064-4c4c529cd802" containerName="dnsmasq-dns" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.915181 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bab7d0f-10c0-4ef4-b064-4c4c529cd802" containerName="dnsmasq-dns" Feb 19 19:54:20 crc kubenswrapper[4813]: E0219 19:54:20.915195 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1db69cd-0899-4f7b-ac3f-fa9b75471765" containerName="keystone-bootstrap" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.915201 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1db69cd-0899-4f7b-ac3f-fa9b75471765" containerName="keystone-bootstrap" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.915421 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bab7d0f-10c0-4ef4-b064-4c4c529cd802" containerName="dnsmasq-dns" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.915441 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1db69cd-0899-4f7b-ac3f-fa9b75471765" containerName="keystone-bootstrap" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.915947 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.918382 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.918664 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zjsd6" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.918797 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.918903 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 19:54:20 crc kubenswrapper[4813]: I0219 19:54:20.928169 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-648fd9799d-6xngg"] Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.036899 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47rpf\" (UniqueName: \"kubernetes.io/projected/7752a372-6f12-490d-8b57-9c08bcc8ad6b-kube-api-access-47rpf\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.037028 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7752a372-6f12-490d-8b57-9c08bcc8ad6b-config-data\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.037065 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7752a372-6f12-490d-8b57-9c08bcc8ad6b-combined-ca-bundle\") pod 
\"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.038039 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7752a372-6f12-490d-8b57-9c08bcc8ad6b-fernet-keys\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.038191 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7752a372-6f12-490d-8b57-9c08bcc8ad6b-scripts\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.038418 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7752a372-6f12-490d-8b57-9c08bcc8ad6b-credential-keys\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.149751 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47rpf\" (UniqueName: \"kubernetes.io/projected/7752a372-6f12-490d-8b57-9c08bcc8ad6b-kube-api-access-47rpf\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.150060 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7752a372-6f12-490d-8b57-9c08bcc8ad6b-config-data\") pod \"keystone-648fd9799d-6xngg\" (UID: 
\"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.150140 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7752a372-6f12-490d-8b57-9c08bcc8ad6b-combined-ca-bundle\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.150262 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7752a372-6f12-490d-8b57-9c08bcc8ad6b-fernet-keys\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.150338 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7752a372-6f12-490d-8b57-9c08bcc8ad6b-scripts\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.150421 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7752a372-6f12-490d-8b57-9c08bcc8ad6b-credential-keys\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.170713 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7752a372-6f12-490d-8b57-9c08bcc8ad6b-credential-keys\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc 
kubenswrapper[4813]: I0219 19:54:21.176689 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7752a372-6f12-490d-8b57-9c08bcc8ad6b-combined-ca-bundle\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.179376 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7752a372-6f12-490d-8b57-9c08bcc8ad6b-scripts\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.188186 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47rpf\" (UniqueName: \"kubernetes.io/projected/7752a372-6f12-490d-8b57-9c08bcc8ad6b-kube-api-access-47rpf\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.188765 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7752a372-6f12-490d-8b57-9c08bcc8ad6b-fernet-keys\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.201349 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7752a372-6f12-490d-8b57-9c08bcc8ad6b-config-data\") pod \"keystone-648fd9799d-6xngg\" (UID: \"7752a372-6f12-490d-8b57-9c08bcc8ad6b\") " pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.236601 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.669090 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-648fd9799d-6xngg"] Feb 19 19:54:21 crc kubenswrapper[4813]: I0219 19:54:21.698545 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-648fd9799d-6xngg" event={"ID":"7752a372-6f12-490d-8b57-9c08bcc8ad6b","Type":"ContainerStarted","Data":"d9f38d9039ee77aa9fc30326117405ef28fa72d0a3e4de25453a2c6117a01540"} Feb 19 19:54:22 crc kubenswrapper[4813]: I0219 19:54:22.717168 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-648fd9799d-6xngg" event={"ID":"7752a372-6f12-490d-8b57-9c08bcc8ad6b","Type":"ContainerStarted","Data":"98a2952c21d02226fc01366406e0a15bf77b431d880d34e2a804985c436f127a"} Feb 19 19:54:22 crc kubenswrapper[4813]: I0219 19:54:22.717522 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:22 crc kubenswrapper[4813]: I0219 19:54:22.751143 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-648fd9799d-6xngg" podStartSLOduration=2.751118175 podStartE2EDuration="2.751118175s" podCreationTimestamp="2026-02-19 19:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:54:22.744461969 +0000 UTC m=+5081.969902510" watchObservedRunningTime="2026-02-19 19:54:22.751118175 +0000 UTC m=+5081.976558736" Feb 19 19:54:27 crc kubenswrapper[4813]: I0219 19:54:27.471430 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:54:27 crc kubenswrapper[4813]: E0219 19:54:27.472260 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:54:38 crc kubenswrapper[4813]: I0219 19:54:38.471489 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:54:38 crc kubenswrapper[4813]: E0219 19:54:38.472202 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:54:50 crc kubenswrapper[4813]: I0219 19:54:50.471998 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:54:50 crc kubenswrapper[4813]: E0219 19:54:50.472856 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:54:52 crc kubenswrapper[4813]: I0219 19:54:52.673358 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-648fd9799d-6xngg" Feb 19 19:54:54 crc kubenswrapper[4813]: I0219 19:54:54.990736 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 19:54:54 crc kubenswrapper[4813]: I0219 
19:54:54.993545 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 19:54:54 crc kubenswrapper[4813]: I0219 19:54:54.999473 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 19 19:54:54 crc kubenswrapper[4813]: I0219 19:54:54.999743 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 19 19:54:54 crc kubenswrapper[4813]: I0219 19:54:54.999874 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-s98rd" Feb 19 19:54:55 crc kubenswrapper[4813]: I0219 19:54:55.011078 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 19:54:55 crc kubenswrapper[4813]: I0219 19:54:55.104382 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-openstack-config-secret\") pod \"openstackclient\" (UID: \"aa2c214f-33ef-4b76-b375-3eb37e6e17ad\") " pod="openstack/openstackclient" Feb 19 19:54:55 crc kubenswrapper[4813]: I0219 19:54:55.104440 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-openstack-config\") pod \"openstackclient\" (UID: \"aa2c214f-33ef-4b76-b375-3eb37e6e17ad\") " pod="openstack/openstackclient" Feb 19 19:54:55 crc kubenswrapper[4813]: I0219 19:54:55.104792 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmlpv\" (UniqueName: \"kubernetes.io/projected/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-kube-api-access-xmlpv\") pod \"openstackclient\" (UID: \"aa2c214f-33ef-4b76-b375-3eb37e6e17ad\") " pod="openstack/openstackclient" Feb 19 19:54:55 crc 
kubenswrapper[4813]: I0219 19:54:55.206520 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmlpv\" (UniqueName: \"kubernetes.io/projected/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-kube-api-access-xmlpv\") pod \"openstackclient\" (UID: \"aa2c214f-33ef-4b76-b375-3eb37e6e17ad\") " pod="openstack/openstackclient" Feb 19 19:54:55 crc kubenswrapper[4813]: I0219 19:54:55.206612 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-openstack-config-secret\") pod \"openstackclient\" (UID: \"aa2c214f-33ef-4b76-b375-3eb37e6e17ad\") " pod="openstack/openstackclient" Feb 19 19:54:55 crc kubenswrapper[4813]: I0219 19:54:55.206641 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-openstack-config\") pod \"openstackclient\" (UID: \"aa2c214f-33ef-4b76-b375-3eb37e6e17ad\") " pod="openstack/openstackclient" Feb 19 19:54:55 crc kubenswrapper[4813]: I0219 19:54:55.207651 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-openstack-config\") pod \"openstackclient\" (UID: \"aa2c214f-33ef-4b76-b375-3eb37e6e17ad\") " pod="openstack/openstackclient" Feb 19 19:54:55 crc kubenswrapper[4813]: I0219 19:54:55.221978 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-openstack-config-secret\") pod \"openstackclient\" (UID: \"aa2c214f-33ef-4b76-b375-3eb37e6e17ad\") " pod="openstack/openstackclient" Feb 19 19:54:55 crc kubenswrapper[4813]: I0219 19:54:55.230206 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmlpv\" 
(UniqueName: \"kubernetes.io/projected/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-kube-api-access-xmlpv\") pod \"openstackclient\" (UID: \"aa2c214f-33ef-4b76-b375-3eb37e6e17ad\") " pod="openstack/openstackclient" Feb 19 19:54:55 crc kubenswrapper[4813]: I0219 19:54:55.327850 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 19:54:55 crc kubenswrapper[4813]: W0219 19:54:55.780604 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa2c214f_33ef_4b76_b375_3eb37e6e17ad.slice/crio-02135e80fa003d37b1828035cb91d9fba2c0fc8efc3f2d2d85214c626cecc79a WatchSource:0}: Error finding container 02135e80fa003d37b1828035cb91d9fba2c0fc8efc3f2d2d85214c626cecc79a: Status 404 returned error can't find the container with id 02135e80fa003d37b1828035cb91d9fba2c0fc8efc3f2d2d85214c626cecc79a Feb 19 19:54:55 crc kubenswrapper[4813]: I0219 19:54:55.785490 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 19:54:55 crc kubenswrapper[4813]: I0219 19:54:55.969724 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"aa2c214f-33ef-4b76-b375-3eb37e6e17ad","Type":"ContainerStarted","Data":"d5cc8ed4b6e4006265e8d5f6364d15ab59ab9a01349c37870c5838a289a2b517"} Feb 19 19:54:55 crc kubenswrapper[4813]: I0219 19:54:55.969768 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"aa2c214f-33ef-4b76-b375-3eb37e6e17ad","Type":"ContainerStarted","Data":"02135e80fa003d37b1828035cb91d9fba2c0fc8efc3f2d2d85214c626cecc79a"} Feb 19 19:54:55 crc kubenswrapper[4813]: I0219 19:54:55.985635 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.985616212 podStartE2EDuration="1.985616212s" podCreationTimestamp="2026-02-19 19:54:54 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:54:55.983052502 +0000 UTC m=+5115.208493063" watchObservedRunningTime="2026-02-19 19:54:55.985616212 +0000 UTC m=+5115.211056753" Feb 19 19:55:01 crc kubenswrapper[4813]: I0219 19:55:01.476013 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:55:01 crc kubenswrapper[4813]: E0219 19:55:01.476605 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:55:14 crc kubenswrapper[4813]: I0219 19:55:14.472107 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:55:14 crc kubenswrapper[4813]: E0219 19:55:14.472884 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:55:29 crc kubenswrapper[4813]: I0219 19:55:29.473103 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:55:29 crc kubenswrapper[4813]: E0219 19:55:29.473903 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:55:42 crc kubenswrapper[4813]: I0219 19:55:42.471564 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:55:42 crc kubenswrapper[4813]: E0219 19:55:42.472397 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:55:54 crc kubenswrapper[4813]: I0219 19:55:54.471824 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:55:54 crc kubenswrapper[4813]: E0219 19:55:54.473662 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:56:09 crc kubenswrapper[4813]: I0219 19:56:09.472276 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:56:09 crc kubenswrapper[4813]: E0219 19:56:09.473172 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:56:23 crc kubenswrapper[4813]: I0219 19:56:23.471913 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:56:23 crc kubenswrapper[4813]: E0219 19:56:23.472700 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:56:25 crc kubenswrapper[4813]: I0219 19:56:25.594946 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8jzv4"] Feb 19 19:56:25 crc kubenswrapper[4813]: I0219 19:56:25.596721 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:25 crc kubenswrapper[4813]: I0219 19:56:25.615543 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jzv4"] Feb 19 19:56:25 crc kubenswrapper[4813]: I0219 19:56:25.674915 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-catalog-content\") pod \"certified-operators-8jzv4\" (UID: \"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b\") " pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:25 crc kubenswrapper[4813]: I0219 19:56:25.674997 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv2lx\" (UniqueName: \"kubernetes.io/projected/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-kube-api-access-pv2lx\") pod \"certified-operators-8jzv4\" (UID: \"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b\") " pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:25 crc kubenswrapper[4813]: I0219 19:56:25.675028 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-utilities\") pod \"certified-operators-8jzv4\" (UID: \"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b\") " pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:25 crc kubenswrapper[4813]: I0219 19:56:25.776195 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv2lx\" (UniqueName: \"kubernetes.io/projected/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-kube-api-access-pv2lx\") pod \"certified-operators-8jzv4\" (UID: \"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b\") " pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:25 crc kubenswrapper[4813]: I0219 19:56:25.776261 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-utilities\") pod \"certified-operators-8jzv4\" (UID: \"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b\") " pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:25 crc kubenswrapper[4813]: I0219 19:56:25.776430 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-catalog-content\") pod \"certified-operators-8jzv4\" (UID: \"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b\") " pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:25 crc kubenswrapper[4813]: I0219 19:56:25.776892 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-utilities\") pod \"certified-operators-8jzv4\" (UID: \"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b\") " pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:25 crc kubenswrapper[4813]: I0219 19:56:25.776943 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-catalog-content\") pod \"certified-operators-8jzv4\" (UID: \"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b\") " pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:25 crc kubenswrapper[4813]: I0219 19:56:25.797447 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv2lx\" (UniqueName: \"kubernetes.io/projected/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-kube-api-access-pv2lx\") pod \"certified-operators-8jzv4\" (UID: \"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b\") " pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:25 crc kubenswrapper[4813]: I0219 19:56:25.927245 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:26 crc kubenswrapper[4813]: I0219 19:56:26.495584 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jzv4"] Feb 19 19:56:26 crc kubenswrapper[4813]: I0219 19:56:26.664365 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jzv4" event={"ID":"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b","Type":"ContainerStarted","Data":"3a13c66c7d01ead8de7cdb5cbd2cd615d14c83ccf75f90013331948c476796f9"} Feb 19 19:56:26 crc kubenswrapper[4813]: I0219 19:56:26.664412 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jzv4" event={"ID":"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b","Type":"ContainerStarted","Data":"9e6426cf0327ff5f47a191a17586388634fe56b62e3a885e8465c0880eca6c8e"} Feb 19 19:56:27 crc kubenswrapper[4813]: I0219 19:56:27.674332 4813 generic.go:334] "Generic (PLEG): container finished" podID="67ee67ae-cf5b-47e9-b6e4-544e224c3c8b" containerID="3a13c66c7d01ead8de7cdb5cbd2cd615d14c83ccf75f90013331948c476796f9" exitCode=0 Feb 19 19:56:27 crc kubenswrapper[4813]: I0219 19:56:27.674398 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jzv4" event={"ID":"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b","Type":"ContainerDied","Data":"3a13c66c7d01ead8de7cdb5cbd2cd615d14c83ccf75f90013331948c476796f9"} Feb 19 19:56:27 crc kubenswrapper[4813]: I0219 19:56:27.676900 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 19:56:29 crc kubenswrapper[4813]: I0219 19:56:29.701366 4813 generic.go:334] "Generic (PLEG): container finished" podID="67ee67ae-cf5b-47e9-b6e4-544e224c3c8b" containerID="a3944ee1ccbc242cc6e08cfc30257415e07fb875b404960343186e027d320c81" exitCode=0 Feb 19 19:56:29 crc kubenswrapper[4813]: I0219 19:56:29.701499 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-8jzv4" event={"ID":"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b","Type":"ContainerDied","Data":"a3944ee1ccbc242cc6e08cfc30257415e07fb875b404960343186e027d320c81"} Feb 19 19:56:30 crc kubenswrapper[4813]: I0219 19:56:30.711226 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jzv4" event={"ID":"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b","Type":"ContainerStarted","Data":"537c7e2d5b6ed763326c4fd31eb5d3da772cbfe565a068511fdd6dda4864d5a0"} Feb 19 19:56:30 crc kubenswrapper[4813]: I0219 19:56:30.728536 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8jzv4" podStartSLOduration=3.2965804739999998 podStartE2EDuration="5.728514768s" podCreationTimestamp="2026-02-19 19:56:25 +0000 UTC" firstStartedPulling="2026-02-19 19:56:27.676625992 +0000 UTC m=+5206.902066533" lastFinishedPulling="2026-02-19 19:56:30.108560286 +0000 UTC m=+5209.334000827" observedRunningTime="2026-02-19 19:56:30.725231916 +0000 UTC m=+5209.950672477" watchObservedRunningTime="2026-02-19 19:56:30.728514768 +0000 UTC m=+5209.953955309" Feb 19 19:56:35 crc kubenswrapper[4813]: I0219 19:56:35.928130 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:35 crc kubenswrapper[4813]: I0219 19:56:35.928688 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:35 crc kubenswrapper[4813]: I0219 19:56:35.999103 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:36 crc kubenswrapper[4813]: I0219 19:56:36.875818 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:36 crc kubenswrapper[4813]: I0219 
19:56:36.923990 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jzv4"] Feb 19 19:56:38 crc kubenswrapper[4813]: I0219 19:56:38.472438 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:56:38 crc kubenswrapper[4813]: E0219 19:56:38.474560 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:56:38 crc kubenswrapper[4813]: I0219 19:56:38.835183 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8jzv4" podUID="67ee67ae-cf5b-47e9-b6e4-544e224c3c8b" containerName="registry-server" containerID="cri-o://537c7e2d5b6ed763326c4fd31eb5d3da772cbfe565a068511fdd6dda4864d5a0" gracePeriod=2 Feb 19 19:56:38 crc kubenswrapper[4813]: E0219 19:56:38.996404 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67ee67ae_cf5b_47e9_b6e4_544e224c3c8b.slice/crio-537c7e2d5b6ed763326c4fd31eb5d3da772cbfe565a068511fdd6dda4864d5a0.scope\": RecentStats: unable to find data in memory cache]" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.264170 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.417133 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-utilities\") pod \"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b\" (UID: \"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b\") " Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.417210 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv2lx\" (UniqueName: \"kubernetes.io/projected/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-kube-api-access-pv2lx\") pod \"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b\" (UID: \"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b\") " Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.417279 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-catalog-content\") pod \"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b\" (UID: \"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b\") " Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.419149 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-utilities" (OuterVolumeSpecName: "utilities") pod "67ee67ae-cf5b-47e9-b6e4-544e224c3c8b" (UID: "67ee67ae-cf5b-47e9-b6e4-544e224c3c8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.427754 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-kube-api-access-pv2lx" (OuterVolumeSpecName: "kube-api-access-pv2lx") pod "67ee67ae-cf5b-47e9-b6e4-544e224c3c8b" (UID: "67ee67ae-cf5b-47e9-b6e4-544e224c3c8b"). InnerVolumeSpecName "kube-api-access-pv2lx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.475479 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67ee67ae-cf5b-47e9-b6e4-544e224c3c8b" (UID: "67ee67ae-cf5b-47e9-b6e4-544e224c3c8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.518734 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.518761 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.518771 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv2lx\" (UniqueName: \"kubernetes.io/projected/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b-kube-api-access-pv2lx\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.706480 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jk5nd"] Feb 19 19:56:39 crc kubenswrapper[4813]: E0219 19:56:39.706788 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ee67ae-cf5b-47e9-b6e4-544e224c3c8b" containerName="extract-utilities" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.706803 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ee67ae-cf5b-47e9-b6e4-544e224c3c8b" containerName="extract-utilities" Feb 19 19:56:39 crc kubenswrapper[4813]: E0219 19:56:39.706816 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="67ee67ae-cf5b-47e9-b6e4-544e224c3c8b" containerName="registry-server" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.706822 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ee67ae-cf5b-47e9-b6e4-544e224c3c8b" containerName="registry-server" Feb 19 19:56:39 crc kubenswrapper[4813]: E0219 19:56:39.706837 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ee67ae-cf5b-47e9-b6e4-544e224c3c8b" containerName="extract-content" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.706843 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ee67ae-cf5b-47e9-b6e4-544e224c3c8b" containerName="extract-content" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.707000 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ee67ae-cf5b-47e9-b6e4-544e224c3c8b" containerName="registry-server" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.707505 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jk5nd" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.718689 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jk5nd"] Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.813144 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-1eff-account-create-update-lqgmz"] Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.814236 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1eff-account-create-update-lqgmz" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.819656 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.820761 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1eff-account-create-update-lqgmz"] Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.822835 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcs99\" (UniqueName: \"kubernetes.io/projected/481fb453-85ee-472b-a0ae-876e114415d6-kube-api-access-wcs99\") pod \"barbican-db-create-jk5nd\" (UID: \"481fb453-85ee-472b-a0ae-876e114415d6\") " pod="openstack/barbican-db-create-jk5nd" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.823027 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481fb453-85ee-472b-a0ae-876e114415d6-operator-scripts\") pod \"barbican-db-create-jk5nd\" (UID: \"481fb453-85ee-472b-a0ae-876e114415d6\") " pod="openstack/barbican-db-create-jk5nd" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.845527 4813 generic.go:334] "Generic (PLEG): container finished" podID="67ee67ae-cf5b-47e9-b6e4-544e224c3c8b" containerID="537c7e2d5b6ed763326c4fd31eb5d3da772cbfe565a068511fdd6dda4864d5a0" exitCode=0 Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.845574 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jzv4" event={"ID":"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b","Type":"ContainerDied","Data":"537c7e2d5b6ed763326c4fd31eb5d3da772cbfe565a068511fdd6dda4864d5a0"} Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.845625 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jzv4" 
event={"ID":"67ee67ae-cf5b-47e9-b6e4-544e224c3c8b","Type":"ContainerDied","Data":"9e6426cf0327ff5f47a191a17586388634fe56b62e3a885e8465c0880eca6c8e"} Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.845648 4813 scope.go:117] "RemoveContainer" containerID="537c7e2d5b6ed763326c4fd31eb5d3da772cbfe565a068511fdd6dda4864d5a0" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.845682 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jzv4" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.865738 4813 scope.go:117] "RemoveContainer" containerID="a3944ee1ccbc242cc6e08cfc30257415e07fb875b404960343186e027d320c81" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.874203 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jzv4"] Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.885133 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8jzv4"] Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.887874 4813 scope.go:117] "RemoveContainer" containerID="3a13c66c7d01ead8de7cdb5cbd2cd615d14c83ccf75f90013331948c476796f9" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.920645 4813 scope.go:117] "RemoveContainer" containerID="537c7e2d5b6ed763326c4fd31eb5d3da772cbfe565a068511fdd6dda4864d5a0" Feb 19 19:56:39 crc kubenswrapper[4813]: E0219 19:56:39.921178 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"537c7e2d5b6ed763326c4fd31eb5d3da772cbfe565a068511fdd6dda4864d5a0\": container with ID starting with 537c7e2d5b6ed763326c4fd31eb5d3da772cbfe565a068511fdd6dda4864d5a0 not found: ID does not exist" containerID="537c7e2d5b6ed763326c4fd31eb5d3da772cbfe565a068511fdd6dda4864d5a0" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.921275 4813 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"537c7e2d5b6ed763326c4fd31eb5d3da772cbfe565a068511fdd6dda4864d5a0"} err="failed to get container status \"537c7e2d5b6ed763326c4fd31eb5d3da772cbfe565a068511fdd6dda4864d5a0\": rpc error: code = NotFound desc = could not find container \"537c7e2d5b6ed763326c4fd31eb5d3da772cbfe565a068511fdd6dda4864d5a0\": container with ID starting with 537c7e2d5b6ed763326c4fd31eb5d3da772cbfe565a068511fdd6dda4864d5a0 not found: ID does not exist" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.921350 4813 scope.go:117] "RemoveContainer" containerID="a3944ee1ccbc242cc6e08cfc30257415e07fb875b404960343186e027d320c81" Feb 19 19:56:39 crc kubenswrapper[4813]: E0219 19:56:39.921714 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3944ee1ccbc242cc6e08cfc30257415e07fb875b404960343186e027d320c81\": container with ID starting with a3944ee1ccbc242cc6e08cfc30257415e07fb875b404960343186e027d320c81 not found: ID does not exist" containerID="a3944ee1ccbc242cc6e08cfc30257415e07fb875b404960343186e027d320c81" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.921735 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3944ee1ccbc242cc6e08cfc30257415e07fb875b404960343186e027d320c81"} err="failed to get container status \"a3944ee1ccbc242cc6e08cfc30257415e07fb875b404960343186e027d320c81\": rpc error: code = NotFound desc = could not find container \"a3944ee1ccbc242cc6e08cfc30257415e07fb875b404960343186e027d320c81\": container with ID starting with a3944ee1ccbc242cc6e08cfc30257415e07fb875b404960343186e027d320c81 not found: ID does not exist" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.921748 4813 scope.go:117] "RemoveContainer" containerID="3a13c66c7d01ead8de7cdb5cbd2cd615d14c83ccf75f90013331948c476796f9" Feb 19 19:56:39 crc kubenswrapper[4813]: E0219 19:56:39.922109 4813 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"3a13c66c7d01ead8de7cdb5cbd2cd615d14c83ccf75f90013331948c476796f9\": container with ID starting with 3a13c66c7d01ead8de7cdb5cbd2cd615d14c83ccf75f90013331948c476796f9 not found: ID does not exist" containerID="3a13c66c7d01ead8de7cdb5cbd2cd615d14c83ccf75f90013331948c476796f9" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.922162 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a13c66c7d01ead8de7cdb5cbd2cd615d14c83ccf75f90013331948c476796f9"} err="failed to get container status \"3a13c66c7d01ead8de7cdb5cbd2cd615d14c83ccf75f90013331948c476796f9\": rpc error: code = NotFound desc = could not find container \"3a13c66c7d01ead8de7cdb5cbd2cd615d14c83ccf75f90013331948c476796f9\": container with ID starting with 3a13c66c7d01ead8de7cdb5cbd2cd615d14c83ccf75f90013331948c476796f9 not found: ID does not exist" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.924999 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wx57\" (UniqueName: \"kubernetes.io/projected/caaee91d-ca35-47dc-bfd1-6f32edf33360-kube-api-access-5wx57\") pod \"barbican-1eff-account-create-update-lqgmz\" (UID: \"caaee91d-ca35-47dc-bfd1-6f32edf33360\") " pod="openstack/barbican-1eff-account-create-update-lqgmz" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.925071 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcs99\" (UniqueName: \"kubernetes.io/projected/481fb453-85ee-472b-a0ae-876e114415d6-kube-api-access-wcs99\") pod \"barbican-db-create-jk5nd\" (UID: \"481fb453-85ee-472b-a0ae-876e114415d6\") " pod="openstack/barbican-db-create-jk5nd" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.925179 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/caaee91d-ca35-47dc-bfd1-6f32edf33360-operator-scripts\") pod \"barbican-1eff-account-create-update-lqgmz\" (UID: \"caaee91d-ca35-47dc-bfd1-6f32edf33360\") " pod="openstack/barbican-1eff-account-create-update-lqgmz" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.925205 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481fb453-85ee-472b-a0ae-876e114415d6-operator-scripts\") pod \"barbican-db-create-jk5nd\" (UID: \"481fb453-85ee-472b-a0ae-876e114415d6\") " pod="openstack/barbican-db-create-jk5nd" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.926058 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481fb453-85ee-472b-a0ae-876e114415d6-operator-scripts\") pod \"barbican-db-create-jk5nd\" (UID: \"481fb453-85ee-472b-a0ae-876e114415d6\") " pod="openstack/barbican-db-create-jk5nd" Feb 19 19:56:39 crc kubenswrapper[4813]: I0219 19:56:39.942271 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcs99\" (UniqueName: \"kubernetes.io/projected/481fb453-85ee-472b-a0ae-876e114415d6-kube-api-access-wcs99\") pod \"barbican-db-create-jk5nd\" (UID: \"481fb453-85ee-472b-a0ae-876e114415d6\") " pod="openstack/barbican-db-create-jk5nd" Feb 19 19:56:40 crc kubenswrapper[4813]: I0219 19:56:40.024292 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-jk5nd" Feb 19 19:56:40 crc kubenswrapper[4813]: I0219 19:56:40.027102 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wx57\" (UniqueName: \"kubernetes.io/projected/caaee91d-ca35-47dc-bfd1-6f32edf33360-kube-api-access-5wx57\") pod \"barbican-1eff-account-create-update-lqgmz\" (UID: \"caaee91d-ca35-47dc-bfd1-6f32edf33360\") " pod="openstack/barbican-1eff-account-create-update-lqgmz" Feb 19 19:56:40 crc kubenswrapper[4813]: I0219 19:56:40.027215 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caaee91d-ca35-47dc-bfd1-6f32edf33360-operator-scripts\") pod \"barbican-1eff-account-create-update-lqgmz\" (UID: \"caaee91d-ca35-47dc-bfd1-6f32edf33360\") " pod="openstack/barbican-1eff-account-create-update-lqgmz" Feb 19 19:56:40 crc kubenswrapper[4813]: I0219 19:56:40.027932 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caaee91d-ca35-47dc-bfd1-6f32edf33360-operator-scripts\") pod \"barbican-1eff-account-create-update-lqgmz\" (UID: \"caaee91d-ca35-47dc-bfd1-6f32edf33360\") " pod="openstack/barbican-1eff-account-create-update-lqgmz" Feb 19 19:56:40 crc kubenswrapper[4813]: I0219 19:56:40.044281 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wx57\" (UniqueName: \"kubernetes.io/projected/caaee91d-ca35-47dc-bfd1-6f32edf33360-kube-api-access-5wx57\") pod \"barbican-1eff-account-create-update-lqgmz\" (UID: \"caaee91d-ca35-47dc-bfd1-6f32edf33360\") " pod="openstack/barbican-1eff-account-create-update-lqgmz" Feb 19 19:56:40 crc kubenswrapper[4813]: I0219 19:56:40.134928 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1eff-account-create-update-lqgmz" Feb 19 19:56:40 crc kubenswrapper[4813]: I0219 19:56:40.474524 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jk5nd"] Feb 19 19:56:40 crc kubenswrapper[4813]: I0219 19:56:40.588214 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1eff-account-create-update-lqgmz"] Feb 19 19:56:40 crc kubenswrapper[4813]: I0219 19:56:40.854288 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1eff-account-create-update-lqgmz" event={"ID":"caaee91d-ca35-47dc-bfd1-6f32edf33360","Type":"ContainerStarted","Data":"3da73ec1fc397237d76cb1e8eddec085e09d798eed3ad8f636f5799498dba3ab"} Feb 19 19:56:40 crc kubenswrapper[4813]: I0219 19:56:40.854333 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1eff-account-create-update-lqgmz" event={"ID":"caaee91d-ca35-47dc-bfd1-6f32edf33360","Type":"ContainerStarted","Data":"191806ef4f9e40871eacae755503f6884fec1ead21ec16d54ff7b9d029b054dd"} Feb 19 19:56:40 crc kubenswrapper[4813]: I0219 19:56:40.855610 4813 generic.go:334] "Generic (PLEG): container finished" podID="481fb453-85ee-472b-a0ae-876e114415d6" containerID="249b1d5ecaf21d9741fcb34412a6e6939b7d01047451d8af5b3521efbe2955d3" exitCode=0 Feb 19 19:56:40 crc kubenswrapper[4813]: I0219 19:56:40.855641 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jk5nd" event={"ID":"481fb453-85ee-472b-a0ae-876e114415d6","Type":"ContainerDied","Data":"249b1d5ecaf21d9741fcb34412a6e6939b7d01047451d8af5b3521efbe2955d3"} Feb 19 19:56:40 crc kubenswrapper[4813]: I0219 19:56:40.855658 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jk5nd" event={"ID":"481fb453-85ee-472b-a0ae-876e114415d6","Type":"ContainerStarted","Data":"3e0bbd1dbd8bb7b68d2e6651adfbf6640c0cf157d641d50ef9ca48e4474dba56"} Feb 19 19:56:40 crc kubenswrapper[4813]: I0219 
19:56:40.872070 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-1eff-account-create-update-lqgmz" podStartSLOduration=1.8720533019999999 podStartE2EDuration="1.872053302s" podCreationTimestamp="2026-02-19 19:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:56:40.87002488 +0000 UTC m=+5220.095465421" watchObservedRunningTime="2026-02-19 19:56:40.872053302 +0000 UTC m=+5220.097493843" Feb 19 19:56:41 crc kubenswrapper[4813]: I0219 19:56:41.492761 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ee67ae-cf5b-47e9-b6e4-544e224c3c8b" path="/var/lib/kubelet/pods/67ee67ae-cf5b-47e9-b6e4-544e224c3c8b/volumes" Feb 19 19:56:41 crc kubenswrapper[4813]: I0219 19:56:41.863805 4813 generic.go:334] "Generic (PLEG): container finished" podID="caaee91d-ca35-47dc-bfd1-6f32edf33360" containerID="3da73ec1fc397237d76cb1e8eddec085e09d798eed3ad8f636f5799498dba3ab" exitCode=0 Feb 19 19:56:41 crc kubenswrapper[4813]: I0219 19:56:41.864007 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1eff-account-create-update-lqgmz" event={"ID":"caaee91d-ca35-47dc-bfd1-6f32edf33360","Type":"ContainerDied","Data":"3da73ec1fc397237d76cb1e8eddec085e09d798eed3ad8f636f5799498dba3ab"} Feb 19 19:56:42 crc kubenswrapper[4813]: I0219 19:56:42.210887 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-jk5nd" Feb 19 19:56:42 crc kubenswrapper[4813]: I0219 19:56:42.363137 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481fb453-85ee-472b-a0ae-876e114415d6-operator-scripts\") pod \"481fb453-85ee-472b-a0ae-876e114415d6\" (UID: \"481fb453-85ee-472b-a0ae-876e114415d6\") " Feb 19 19:56:42 crc kubenswrapper[4813]: I0219 19:56:42.363351 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcs99\" (UniqueName: \"kubernetes.io/projected/481fb453-85ee-472b-a0ae-876e114415d6-kube-api-access-wcs99\") pod \"481fb453-85ee-472b-a0ae-876e114415d6\" (UID: \"481fb453-85ee-472b-a0ae-876e114415d6\") " Feb 19 19:56:42 crc kubenswrapper[4813]: I0219 19:56:42.364050 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/481fb453-85ee-472b-a0ae-876e114415d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "481fb453-85ee-472b-a0ae-876e114415d6" (UID: "481fb453-85ee-472b-a0ae-876e114415d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:56:42 crc kubenswrapper[4813]: I0219 19:56:42.374269 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481fb453-85ee-472b-a0ae-876e114415d6-kube-api-access-wcs99" (OuterVolumeSpecName: "kube-api-access-wcs99") pod "481fb453-85ee-472b-a0ae-876e114415d6" (UID: "481fb453-85ee-472b-a0ae-876e114415d6"). InnerVolumeSpecName "kube-api-access-wcs99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:56:42 crc kubenswrapper[4813]: I0219 19:56:42.466071 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcs99\" (UniqueName: \"kubernetes.io/projected/481fb453-85ee-472b-a0ae-876e114415d6-kube-api-access-wcs99\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:42 crc kubenswrapper[4813]: I0219 19:56:42.466165 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481fb453-85ee-472b-a0ae-876e114415d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:42 crc kubenswrapper[4813]: I0219 19:56:42.872929 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jk5nd" Feb 19 19:56:42 crc kubenswrapper[4813]: I0219 19:56:42.873021 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jk5nd" event={"ID":"481fb453-85ee-472b-a0ae-876e114415d6","Type":"ContainerDied","Data":"3e0bbd1dbd8bb7b68d2e6651adfbf6640c0cf157d641d50ef9ca48e4474dba56"} Feb 19 19:56:42 crc kubenswrapper[4813]: I0219 19:56:42.873047 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e0bbd1dbd8bb7b68d2e6651adfbf6640c0cf157d641d50ef9ca48e4474dba56" Feb 19 19:56:43 crc kubenswrapper[4813]: I0219 19:56:43.202578 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1eff-account-create-update-lqgmz" Feb 19 19:56:43 crc kubenswrapper[4813]: I0219 19:56:43.278849 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wx57\" (UniqueName: \"kubernetes.io/projected/caaee91d-ca35-47dc-bfd1-6f32edf33360-kube-api-access-5wx57\") pod \"caaee91d-ca35-47dc-bfd1-6f32edf33360\" (UID: \"caaee91d-ca35-47dc-bfd1-6f32edf33360\") " Feb 19 19:56:43 crc kubenswrapper[4813]: I0219 19:56:43.279048 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caaee91d-ca35-47dc-bfd1-6f32edf33360-operator-scripts\") pod \"caaee91d-ca35-47dc-bfd1-6f32edf33360\" (UID: \"caaee91d-ca35-47dc-bfd1-6f32edf33360\") " Feb 19 19:56:43 crc kubenswrapper[4813]: I0219 19:56:43.279799 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caaee91d-ca35-47dc-bfd1-6f32edf33360-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "caaee91d-ca35-47dc-bfd1-6f32edf33360" (UID: "caaee91d-ca35-47dc-bfd1-6f32edf33360"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:56:43 crc kubenswrapper[4813]: I0219 19:56:43.284146 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caaee91d-ca35-47dc-bfd1-6f32edf33360-kube-api-access-5wx57" (OuterVolumeSpecName: "kube-api-access-5wx57") pod "caaee91d-ca35-47dc-bfd1-6f32edf33360" (UID: "caaee91d-ca35-47dc-bfd1-6f32edf33360"). InnerVolumeSpecName "kube-api-access-5wx57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:56:43 crc kubenswrapper[4813]: I0219 19:56:43.381178 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caaee91d-ca35-47dc-bfd1-6f32edf33360-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:43 crc kubenswrapper[4813]: I0219 19:56:43.381218 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wx57\" (UniqueName: \"kubernetes.io/projected/caaee91d-ca35-47dc-bfd1-6f32edf33360-kube-api-access-5wx57\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:43 crc kubenswrapper[4813]: I0219 19:56:43.885553 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1eff-account-create-update-lqgmz" event={"ID":"caaee91d-ca35-47dc-bfd1-6f32edf33360","Type":"ContainerDied","Data":"191806ef4f9e40871eacae755503f6884fec1ead21ec16d54ff7b9d029b054dd"} Feb 19 19:56:43 crc kubenswrapper[4813]: I0219 19:56:43.887036 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="191806ef4f9e40871eacae755503f6884fec1ead21ec16d54ff7b9d029b054dd" Feb 19 19:56:43 crc kubenswrapper[4813]: I0219 19:56:43.885637 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1eff-account-create-update-lqgmz" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.052701 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-m9bhp"] Feb 19 19:56:45 crc kubenswrapper[4813]: E0219 19:56:45.053407 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caaee91d-ca35-47dc-bfd1-6f32edf33360" containerName="mariadb-account-create-update" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.053423 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="caaee91d-ca35-47dc-bfd1-6f32edf33360" containerName="mariadb-account-create-update" Feb 19 19:56:45 crc kubenswrapper[4813]: E0219 19:56:45.053447 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481fb453-85ee-472b-a0ae-876e114415d6" containerName="mariadb-database-create" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.053454 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="481fb453-85ee-472b-a0ae-876e114415d6" containerName="mariadb-database-create" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.053650 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="481fb453-85ee-472b-a0ae-876e114415d6" containerName="mariadb-database-create" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.053669 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="caaee91d-ca35-47dc-bfd1-6f32edf33360" containerName="mariadb-account-create-update" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.054349 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-m9bhp" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.058211 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-52glg" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.058747 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.062777 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-m9bhp"] Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.213714 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/11899028-740e-4834-987e-b51a914fd1f5-db-sync-config-data\") pod \"barbican-db-sync-m9bhp\" (UID: \"11899028-740e-4834-987e-b51a914fd1f5\") " pod="openstack/barbican-db-sync-m9bhp" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.213803 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11899028-740e-4834-987e-b51a914fd1f5-combined-ca-bundle\") pod \"barbican-db-sync-m9bhp\" (UID: \"11899028-740e-4834-987e-b51a914fd1f5\") " pod="openstack/barbican-db-sync-m9bhp" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.213971 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45kt9\" (UniqueName: \"kubernetes.io/projected/11899028-740e-4834-987e-b51a914fd1f5-kube-api-access-45kt9\") pod \"barbican-db-sync-m9bhp\" (UID: \"11899028-740e-4834-987e-b51a914fd1f5\") " pod="openstack/barbican-db-sync-m9bhp" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.315906 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/11899028-740e-4834-987e-b51a914fd1f5-db-sync-config-data\") pod \"barbican-db-sync-m9bhp\" (UID: \"11899028-740e-4834-987e-b51a914fd1f5\") " pod="openstack/barbican-db-sync-m9bhp" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.316008 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11899028-740e-4834-987e-b51a914fd1f5-combined-ca-bundle\") pod \"barbican-db-sync-m9bhp\" (UID: \"11899028-740e-4834-987e-b51a914fd1f5\") " pod="openstack/barbican-db-sync-m9bhp" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.316035 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45kt9\" (UniqueName: \"kubernetes.io/projected/11899028-740e-4834-987e-b51a914fd1f5-kube-api-access-45kt9\") pod \"barbican-db-sync-m9bhp\" (UID: \"11899028-740e-4834-987e-b51a914fd1f5\") " pod="openstack/barbican-db-sync-m9bhp" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.321932 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/11899028-740e-4834-987e-b51a914fd1f5-db-sync-config-data\") pod \"barbican-db-sync-m9bhp\" (UID: \"11899028-740e-4834-987e-b51a914fd1f5\") " pod="openstack/barbican-db-sync-m9bhp" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.331582 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11899028-740e-4834-987e-b51a914fd1f5-combined-ca-bundle\") pod \"barbican-db-sync-m9bhp\" (UID: \"11899028-740e-4834-987e-b51a914fd1f5\") " pod="openstack/barbican-db-sync-m9bhp" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.335119 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45kt9\" (UniqueName: \"kubernetes.io/projected/11899028-740e-4834-987e-b51a914fd1f5-kube-api-access-45kt9\") pod 
\"barbican-db-sync-m9bhp\" (UID: \"11899028-740e-4834-987e-b51a914fd1f5\") " pod="openstack/barbican-db-sync-m9bhp" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.369793 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-m9bhp" Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.813141 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-m9bhp"] Feb 19 19:56:45 crc kubenswrapper[4813]: I0219 19:56:45.905063 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-m9bhp" event={"ID":"11899028-740e-4834-987e-b51a914fd1f5","Type":"ContainerStarted","Data":"a3afe8965dd0fdd93c2e976da615e43cf7807b5139fe5fb0d2b0b568e126604f"} Feb 19 19:56:46 crc kubenswrapper[4813]: I0219 19:56:46.913196 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-m9bhp" event={"ID":"11899028-740e-4834-987e-b51a914fd1f5","Type":"ContainerStarted","Data":"89127d1134eb23667de45ce98f25fcb57637d284affe6ea05c06c4cf408acdb4"} Feb 19 19:56:46 crc kubenswrapper[4813]: I0219 19:56:46.936415 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-m9bhp" podStartSLOduration=1.936388215 podStartE2EDuration="1.936388215s" podCreationTimestamp="2026-02-19 19:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:56:46.931923197 +0000 UTC m=+5226.157363758" watchObservedRunningTime="2026-02-19 19:56:46.936388215 +0000 UTC m=+5226.161828756" Feb 19 19:56:47 crc kubenswrapper[4813]: I0219 19:56:47.922857 4813 generic.go:334] "Generic (PLEG): container finished" podID="11899028-740e-4834-987e-b51a914fd1f5" containerID="89127d1134eb23667de45ce98f25fcb57637d284affe6ea05c06c4cf408acdb4" exitCode=0 Feb 19 19:56:47 crc kubenswrapper[4813]: I0219 19:56:47.922921 4813 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/barbican-db-sync-m9bhp" event={"ID":"11899028-740e-4834-987e-b51a914fd1f5","Type":"ContainerDied","Data":"89127d1134eb23667de45ce98f25fcb57637d284affe6ea05c06c4cf408acdb4"} Feb 19 19:56:49 crc kubenswrapper[4813]: I0219 19:56:49.209981 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-m9bhp" Feb 19 19:56:49 crc kubenswrapper[4813]: I0219 19:56:49.279635 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45kt9\" (UniqueName: \"kubernetes.io/projected/11899028-740e-4834-987e-b51a914fd1f5-kube-api-access-45kt9\") pod \"11899028-740e-4834-987e-b51a914fd1f5\" (UID: \"11899028-740e-4834-987e-b51a914fd1f5\") " Feb 19 19:56:49 crc kubenswrapper[4813]: I0219 19:56:49.279767 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11899028-740e-4834-987e-b51a914fd1f5-combined-ca-bundle\") pod \"11899028-740e-4834-987e-b51a914fd1f5\" (UID: \"11899028-740e-4834-987e-b51a914fd1f5\") " Feb 19 19:56:49 crc kubenswrapper[4813]: I0219 19:56:49.279816 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/11899028-740e-4834-987e-b51a914fd1f5-db-sync-config-data\") pod \"11899028-740e-4834-987e-b51a914fd1f5\" (UID: \"11899028-740e-4834-987e-b51a914fd1f5\") " Feb 19 19:56:49 crc kubenswrapper[4813]: I0219 19:56:49.284593 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11899028-740e-4834-987e-b51a914fd1f5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "11899028-740e-4834-987e-b51a914fd1f5" (UID: "11899028-740e-4834-987e-b51a914fd1f5"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:56:49 crc kubenswrapper[4813]: I0219 19:56:49.284916 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11899028-740e-4834-987e-b51a914fd1f5-kube-api-access-45kt9" (OuterVolumeSpecName: "kube-api-access-45kt9") pod "11899028-740e-4834-987e-b51a914fd1f5" (UID: "11899028-740e-4834-987e-b51a914fd1f5"). InnerVolumeSpecName "kube-api-access-45kt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:56:49 crc kubenswrapper[4813]: I0219 19:56:49.303681 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11899028-740e-4834-987e-b51a914fd1f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11899028-740e-4834-987e-b51a914fd1f5" (UID: "11899028-740e-4834-987e-b51a914fd1f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:56:49 crc kubenswrapper[4813]: I0219 19:56:49.381096 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11899028-740e-4834-987e-b51a914fd1f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:49 crc kubenswrapper[4813]: I0219 19:56:49.381125 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/11899028-740e-4834-987e-b51a914fd1f5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:49 crc kubenswrapper[4813]: I0219 19:56:49.381135 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45kt9\" (UniqueName: \"kubernetes.io/projected/11899028-740e-4834-987e-b51a914fd1f5-kube-api-access-45kt9\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:49 crc kubenswrapper[4813]: I0219 19:56:49.938697 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-m9bhp" 
event={"ID":"11899028-740e-4834-987e-b51a914fd1f5","Type":"ContainerDied","Data":"a3afe8965dd0fdd93c2e976da615e43cf7807b5139fe5fb0d2b0b568e126604f"} Feb 19 19:56:49 crc kubenswrapper[4813]: I0219 19:56:49.939185 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3afe8965dd0fdd93c2e976da615e43cf7807b5139fe5fb0d2b0b568e126604f" Feb 19 19:56:49 crc kubenswrapper[4813]: I0219 19:56:49.938782 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-m9bhp" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.192884 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-784cf99fcf-cp46x"] Feb 19 19:56:50 crc kubenswrapper[4813]: E0219 19:56:50.193327 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11899028-740e-4834-987e-b51a914fd1f5" containerName="barbican-db-sync" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.193395 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="11899028-740e-4834-987e-b51a914fd1f5" containerName="barbican-db-sync" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.193648 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="11899028-740e-4834-987e-b51a914fd1f5" containerName="barbican-db-sync" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.194797 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.198779 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.198866 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-52glg" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.204291 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.220152 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-784cf99fcf-cp46x"] Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.271013 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-84947654b6-l9jwd"] Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.272355 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.276244 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.290853 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-84947654b6-l9jwd"] Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.297081 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/098326bb-f104-4ccf-80dc-99e65bebc619-logs\") pod \"barbican-worker-784cf99fcf-cp46x\" (UID: \"098326bb-f104-4ccf-80dc-99e65bebc619\") " pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.298325 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f94h\" (UniqueName: \"kubernetes.io/projected/098326bb-f104-4ccf-80dc-99e65bebc619-kube-api-access-5f94h\") pod \"barbican-worker-784cf99fcf-cp46x\" (UID: \"098326bb-f104-4ccf-80dc-99e65bebc619\") " pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.298421 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/098326bb-f104-4ccf-80dc-99e65bebc619-config-data-custom\") pod \"barbican-worker-784cf99fcf-cp46x\" (UID: \"098326bb-f104-4ccf-80dc-99e65bebc619\") " pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.298460 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098326bb-f104-4ccf-80dc-99e65bebc619-config-data\") pod 
\"barbican-worker-784cf99fcf-cp46x\" (UID: \"098326bb-f104-4ccf-80dc-99e65bebc619\") " pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.298495 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098326bb-f104-4ccf-80dc-99e65bebc619-combined-ca-bundle\") pod \"barbican-worker-784cf99fcf-cp46x\" (UID: \"098326bb-f104-4ccf-80dc-99e65bebc619\") " pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.400121 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1cf66052-db02-4388-aaa3-65c44dbb3e74-config-data-custom\") pod \"barbican-keystone-listener-84947654b6-l9jwd\" (UID: \"1cf66052-db02-4388-aaa3-65c44dbb3e74\") " pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.400165 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf66052-db02-4388-aaa3-65c44dbb3e74-combined-ca-bundle\") pod \"barbican-keystone-listener-84947654b6-l9jwd\" (UID: \"1cf66052-db02-4388-aaa3-65c44dbb3e74\") " pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.400206 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f94h\" (UniqueName: \"kubernetes.io/projected/098326bb-f104-4ccf-80dc-99e65bebc619-kube-api-access-5f94h\") pod \"barbican-worker-784cf99fcf-cp46x\" (UID: \"098326bb-f104-4ccf-80dc-99e65bebc619\") " pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.400231 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cf66052-db02-4388-aaa3-65c44dbb3e74-logs\") pod \"barbican-keystone-listener-84947654b6-l9jwd\" (UID: \"1cf66052-db02-4388-aaa3-65c44dbb3e74\") " pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.400255 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cf66052-db02-4388-aaa3-65c44dbb3e74-config-data\") pod \"barbican-keystone-listener-84947654b6-l9jwd\" (UID: \"1cf66052-db02-4388-aaa3-65c44dbb3e74\") " pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.400273 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098326bb-f104-4ccf-80dc-99e65bebc619-config-data\") pod \"barbican-worker-784cf99fcf-cp46x\" (UID: \"098326bb-f104-4ccf-80dc-99e65bebc619\") " pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.400291 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/098326bb-f104-4ccf-80dc-99e65bebc619-config-data-custom\") pod \"barbican-worker-784cf99fcf-cp46x\" (UID: \"098326bb-f104-4ccf-80dc-99e65bebc619\") " pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.400310 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098326bb-f104-4ccf-80dc-99e65bebc619-combined-ca-bundle\") pod \"barbican-worker-784cf99fcf-cp46x\" (UID: \"098326bb-f104-4ccf-80dc-99e65bebc619\") " pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.400331 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64kt7\" (UniqueName: \"kubernetes.io/projected/1cf66052-db02-4388-aaa3-65c44dbb3e74-kube-api-access-64kt7\") pod \"barbican-keystone-listener-84947654b6-l9jwd\" (UID: \"1cf66052-db02-4388-aaa3-65c44dbb3e74\") " pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.400390 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/098326bb-f104-4ccf-80dc-99e65bebc619-logs\") pod \"barbican-worker-784cf99fcf-cp46x\" (UID: \"098326bb-f104-4ccf-80dc-99e65bebc619\") " pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.400744 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/098326bb-f104-4ccf-80dc-99e65bebc619-logs\") pod \"barbican-worker-784cf99fcf-cp46x\" (UID: \"098326bb-f104-4ccf-80dc-99e65bebc619\") " pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.412566 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098326bb-f104-4ccf-80dc-99e65bebc619-combined-ca-bundle\") pod \"barbican-worker-784cf99fcf-cp46x\" (UID: \"098326bb-f104-4ccf-80dc-99e65bebc619\") " pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.414985 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6585cd964f-lcwmb"] Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.416226 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.424851 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/098326bb-f104-4ccf-80dc-99e65bebc619-config-data-custom\") pod \"barbican-worker-784cf99fcf-cp46x\" (UID: \"098326bb-f104-4ccf-80dc-99e65bebc619\") " pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.425208 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098326bb-f104-4ccf-80dc-99e65bebc619-config-data\") pod \"barbican-worker-784cf99fcf-cp46x\" (UID: \"098326bb-f104-4ccf-80dc-99e65bebc619\") " pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.427342 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f94h\" (UniqueName: \"kubernetes.io/projected/098326bb-f104-4ccf-80dc-99e65bebc619-kube-api-access-5f94h\") pod \"barbican-worker-784cf99fcf-cp46x\" (UID: \"098326bb-f104-4ccf-80dc-99e65bebc619\") " pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.439397 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6585cd964f-lcwmb"] Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.502255 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64kt7\" (UniqueName: \"kubernetes.io/projected/1cf66052-db02-4388-aaa3-65c44dbb3e74-kube-api-access-64kt7\") pod \"barbican-keystone-listener-84947654b6-l9jwd\" (UID: \"1cf66052-db02-4388-aaa3-65c44dbb3e74\") " pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.502332 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-ovsdbserver-nb\") pod \"dnsmasq-dns-6585cd964f-lcwmb\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.502392 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv22l\" (UniqueName: \"kubernetes.io/projected/7f1b41f7-0eab-4f57-902e-eff27bdde663-kube-api-access-mv22l\") pod \"dnsmasq-dns-6585cd964f-lcwmb\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.502409 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-dns-svc\") pod \"dnsmasq-dns-6585cd964f-lcwmb\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.502429 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1cf66052-db02-4388-aaa3-65c44dbb3e74-config-data-custom\") pod \"barbican-keystone-listener-84947654b6-l9jwd\" (UID: \"1cf66052-db02-4388-aaa3-65c44dbb3e74\") " pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.502450 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf66052-db02-4388-aaa3-65c44dbb3e74-combined-ca-bundle\") pod \"barbican-keystone-listener-84947654b6-l9jwd\" (UID: \"1cf66052-db02-4388-aaa3-65c44dbb3e74\") " pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.502475 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-ovsdbserver-sb\") pod \"dnsmasq-dns-6585cd964f-lcwmb\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.502495 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-config\") pod \"dnsmasq-dns-6585cd964f-lcwmb\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.502515 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cf66052-db02-4388-aaa3-65c44dbb3e74-logs\") pod \"barbican-keystone-listener-84947654b6-l9jwd\" (UID: \"1cf66052-db02-4388-aaa3-65c44dbb3e74\") " pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.502532 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cf66052-db02-4388-aaa3-65c44dbb3e74-config-data\") pod \"barbican-keystone-listener-84947654b6-l9jwd\" (UID: \"1cf66052-db02-4388-aaa3-65c44dbb3e74\") " pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.506117 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1cf66052-db02-4388-aaa3-65c44dbb3e74-logs\") pod \"barbican-keystone-listener-84947654b6-l9jwd\" (UID: \"1cf66052-db02-4388-aaa3-65c44dbb3e74\") " pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.511578 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cf66052-db02-4388-aaa3-65c44dbb3e74-config-data\") pod \"barbican-keystone-listener-84947654b6-l9jwd\" (UID: \"1cf66052-db02-4388-aaa3-65c44dbb3e74\") " pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.512930 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf66052-db02-4388-aaa3-65c44dbb3e74-combined-ca-bundle\") pod \"barbican-keystone-listener-84947654b6-l9jwd\" (UID: \"1cf66052-db02-4388-aaa3-65c44dbb3e74\") " pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.517168 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-784cf99fcf-cp46x" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.517905 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1cf66052-db02-4388-aaa3-65c44dbb3e74-config-data-custom\") pod \"barbican-keystone-listener-84947654b6-l9jwd\" (UID: \"1cf66052-db02-4388-aaa3-65c44dbb3e74\") " pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.522508 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-748c6b9b7b-zzvwq"] Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.523933 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.525310 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.548001 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64kt7\" (UniqueName: \"kubernetes.io/projected/1cf66052-db02-4388-aaa3-65c44dbb3e74-kube-api-access-64kt7\") pod \"barbican-keystone-listener-84947654b6-l9jwd\" (UID: \"1cf66052-db02-4388-aaa3-65c44dbb3e74\") " pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.566225 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-748c6b9b7b-zzvwq"] Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.595004 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.604555 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfnvt\" (UniqueName: \"kubernetes.io/projected/f358878b-f864-4efd-a8a2-163128d1e49a-kube-api-access-dfnvt\") pod \"barbican-api-748c6b9b7b-zzvwq\" (UID: \"f358878b-f864-4efd-a8a2-163128d1e49a\") " pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.604614 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f358878b-f864-4efd-a8a2-163128d1e49a-logs\") pod \"barbican-api-748c6b9b7b-zzvwq\" (UID: \"f358878b-f864-4efd-a8a2-163128d1e49a\") " pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.604672 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-ovsdbserver-nb\") pod \"dnsmasq-dns-6585cd964f-lcwmb\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.604737 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f358878b-f864-4efd-a8a2-163128d1e49a-config-data-custom\") pod \"barbican-api-748c6b9b7b-zzvwq\" (UID: \"f358878b-f864-4efd-a8a2-163128d1e49a\") " pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.604776 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv22l\" (UniqueName: \"kubernetes.io/projected/7f1b41f7-0eab-4f57-902e-eff27bdde663-kube-api-access-mv22l\") pod \"dnsmasq-dns-6585cd964f-lcwmb\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.604804 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-dns-svc\") pod \"dnsmasq-dns-6585cd964f-lcwmb\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.604830 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f358878b-f864-4efd-a8a2-163128d1e49a-combined-ca-bundle\") pod \"barbican-api-748c6b9b7b-zzvwq\" (UID: \"f358878b-f864-4efd-a8a2-163128d1e49a\") " pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.604883 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-ovsdbserver-sb\") pod \"dnsmasq-dns-6585cd964f-lcwmb\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.604909 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-config\") pod \"dnsmasq-dns-6585cd964f-lcwmb\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.604930 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f358878b-f864-4efd-a8a2-163128d1e49a-config-data\") pod \"barbican-api-748c6b9b7b-zzvwq\" (UID: \"f358878b-f864-4efd-a8a2-163128d1e49a\") " pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.606055 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-ovsdbserver-nb\") pod \"dnsmasq-dns-6585cd964f-lcwmb\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.607313 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-dns-svc\") pod \"dnsmasq-dns-6585cd964f-lcwmb\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.611097 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-ovsdbserver-sb\") pod \"dnsmasq-dns-6585cd964f-lcwmb\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.611789 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-config\") pod \"dnsmasq-dns-6585cd964f-lcwmb\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.621896 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv22l\" (UniqueName: \"kubernetes.io/projected/7f1b41f7-0eab-4f57-902e-eff27bdde663-kube-api-access-mv22l\") pod \"dnsmasq-dns-6585cd964f-lcwmb\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.641149 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.706624 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f358878b-f864-4efd-a8a2-163128d1e49a-config-data\") pod \"barbican-api-748c6b9b7b-zzvwq\" (UID: \"f358878b-f864-4efd-a8a2-163128d1e49a\") " pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.707022 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfnvt\" (UniqueName: \"kubernetes.io/projected/f358878b-f864-4efd-a8a2-163128d1e49a-kube-api-access-dfnvt\") pod \"barbican-api-748c6b9b7b-zzvwq\" (UID: \"f358878b-f864-4efd-a8a2-163128d1e49a\") " pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.707064 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f358878b-f864-4efd-a8a2-163128d1e49a-logs\") pod \"barbican-api-748c6b9b7b-zzvwq\" (UID: \"f358878b-f864-4efd-a8a2-163128d1e49a\") " pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.708473 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f358878b-f864-4efd-a8a2-163128d1e49a-logs\") pod \"barbican-api-748c6b9b7b-zzvwq\" (UID: \"f358878b-f864-4efd-a8a2-163128d1e49a\") " pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.708615 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f358878b-f864-4efd-a8a2-163128d1e49a-config-data-custom\") pod \"barbican-api-748c6b9b7b-zzvwq\" (UID: \"f358878b-f864-4efd-a8a2-163128d1e49a\") " pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 
19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.708695 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f358878b-f864-4efd-a8a2-163128d1e49a-combined-ca-bundle\") pod \"barbican-api-748c6b9b7b-zzvwq\" (UID: \"f358878b-f864-4efd-a8a2-163128d1e49a\") " pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.719630 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f358878b-f864-4efd-a8a2-163128d1e49a-combined-ca-bundle\") pod \"barbican-api-748c6b9b7b-zzvwq\" (UID: \"f358878b-f864-4efd-a8a2-163128d1e49a\") " pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.719883 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f358878b-f864-4efd-a8a2-163128d1e49a-config-data-custom\") pod \"barbican-api-748c6b9b7b-zzvwq\" (UID: \"f358878b-f864-4efd-a8a2-163128d1e49a\") " pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.720290 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f358878b-f864-4efd-a8a2-163128d1e49a-config-data\") pod \"barbican-api-748c6b9b7b-zzvwq\" (UID: \"f358878b-f864-4efd-a8a2-163128d1e49a\") " pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.729631 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfnvt\" (UniqueName: \"kubernetes.io/projected/f358878b-f864-4efd-a8a2-163128d1e49a-kube-api-access-dfnvt\") pod \"barbican-api-748c6b9b7b-zzvwq\" (UID: \"f358878b-f864-4efd-a8a2-163128d1e49a\") " pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:50 crc kubenswrapper[4813]: I0219 19:56:50.955215 
4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.007049 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-784cf99fcf-cp46x"] Feb 19 19:56:51 crc kubenswrapper[4813]: W0219 19:56:51.183807 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cf66052_db02_4388_aaa3_65c44dbb3e74.slice/crio-4c9cdc307c71fe662a79d6eb345808183c87ff4a4d70843801ef30209e4fab05 WatchSource:0}: Error finding container 4c9cdc307c71fe662a79d6eb345808183c87ff4a4d70843801ef30209e4fab05: Status 404 returned error can't find the container with id 4c9cdc307c71fe662a79d6eb345808183c87ff4a4d70843801ef30209e4fab05 Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.183921 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-84947654b6-l9jwd"] Feb 19 19:56:51 crc kubenswrapper[4813]: W0219 19:56:51.286599 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f1b41f7_0eab_4f57_902e_eff27bdde663.slice/crio-b4720f76ada52e57961aa4173106088c704a23b8989207b24a5a48373a31503d WatchSource:0}: Error finding container b4720f76ada52e57961aa4173106088c704a23b8989207b24a5a48373a31503d: Status 404 returned error can't find the container with id b4720f76ada52e57961aa4173106088c704a23b8989207b24a5a48373a31503d Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.287015 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6585cd964f-lcwmb"] Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.382866 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-748c6b9b7b-zzvwq"] Feb 19 19:56:51 crc kubenswrapper[4813]: W0219 19:56:51.388965 4813 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf358878b_f864_4efd_a8a2_163128d1e49a.slice/crio-7223c5f4bf66ccb669419e58bc3cbd43c30b60adbef5dd3dd7e4cce9ebc16bbf WatchSource:0}: Error finding container 7223c5f4bf66ccb669419e58bc3cbd43c30b60adbef5dd3dd7e4cce9ebc16bbf: Status 404 returned error can't find the container with id 7223c5f4bf66ccb669419e58bc3cbd43c30b60adbef5dd3dd7e4cce9ebc16bbf Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.954627 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" event={"ID":"1cf66052-db02-4388-aaa3-65c44dbb3e74","Type":"ContainerStarted","Data":"1cf8cd7e60c0cfb2c529c9ddba206b247dd27c2dc58b0775488fdea80cced4ec"} Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.955001 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" event={"ID":"1cf66052-db02-4388-aaa3-65c44dbb3e74","Type":"ContainerStarted","Data":"d36993177ff5dbb4a4082b9e53a8ecd59be20e153de4f92a6c01410875a70dfb"} Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.955019 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" event={"ID":"1cf66052-db02-4388-aaa3-65c44dbb3e74","Type":"ContainerStarted","Data":"4c9cdc307c71fe662a79d6eb345808183c87ff4a4d70843801ef30209e4fab05"} Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.958378 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-748c6b9b7b-zzvwq" event={"ID":"f358878b-f864-4efd-a8a2-163128d1e49a","Type":"ContainerStarted","Data":"8202dd7ba1cacaa3c43f2da0f970e9a905b40c64cde56c65039a170cd74544f8"} Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.958419 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-748c6b9b7b-zzvwq" 
event={"ID":"f358878b-f864-4efd-a8a2-163128d1e49a","Type":"ContainerStarted","Data":"06ae41bc63aedc9c0e9aec6ccfbbbbd651cce66d165c4eb07e63915463b9bbc8"} Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.958432 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-748c6b9b7b-zzvwq" event={"ID":"f358878b-f864-4efd-a8a2-163128d1e49a","Type":"ContainerStarted","Data":"7223c5f4bf66ccb669419e58bc3cbd43c30b60adbef5dd3dd7e4cce9ebc16bbf"} Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.959022 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.959060 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.961429 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-784cf99fcf-cp46x" event={"ID":"098326bb-f104-4ccf-80dc-99e65bebc619","Type":"ContainerStarted","Data":"cd502abeb43c1f12de893111083a9f05dac162ecd8924493ff60fe1f30389231"} Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.961457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-784cf99fcf-cp46x" event={"ID":"098326bb-f104-4ccf-80dc-99e65bebc619","Type":"ContainerStarted","Data":"0a9ef93facc449a241a01d40cf18fbebe1f0e56363de7162928212a535cc4b87"} Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.961466 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-784cf99fcf-cp46x" event={"ID":"098326bb-f104-4ccf-80dc-99e65bebc619","Type":"ContainerStarted","Data":"095786f3e63b63d293537f744debb140b4f187f21d2e4a4ed889846fb37d432a"} Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.964036 4813 generic.go:334] "Generic (PLEG): container finished" podID="7f1b41f7-0eab-4f57-902e-eff27bdde663" 
containerID="6dd0a21a756d64f0bf24e46318c24c0ef6c522017d4b8afe0a54326c62606ad1" exitCode=0 Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.964097 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" event={"ID":"7f1b41f7-0eab-4f57-902e-eff27bdde663","Type":"ContainerDied","Data":"6dd0a21a756d64f0bf24e46318c24c0ef6c522017d4b8afe0a54326c62606ad1"} Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.964183 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" event={"ID":"7f1b41f7-0eab-4f57-902e-eff27bdde663","Type":"ContainerStarted","Data":"b4720f76ada52e57961aa4173106088c704a23b8989207b24a5a48373a31503d"} Feb 19 19:56:51 crc kubenswrapper[4813]: I0219 19:56:51.983657 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-84947654b6-l9jwd" podStartSLOduration=1.983635018 podStartE2EDuration="1.983635018s" podCreationTimestamp="2026-02-19 19:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:56:51.975598179 +0000 UTC m=+5231.201038710" watchObservedRunningTime="2026-02-19 19:56:51.983635018 +0000 UTC m=+5231.209075559" Feb 19 19:56:52 crc kubenswrapper[4813]: I0219 19:56:52.005810 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-784cf99fcf-cp46x" podStartSLOduration=2.005788783 podStartE2EDuration="2.005788783s" podCreationTimestamp="2026-02-19 19:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:56:52.002610865 +0000 UTC m=+5231.228051426" watchObservedRunningTime="2026-02-19 19:56:52.005788783 +0000 UTC m=+5231.231229324" Feb 19 19:56:52 crc kubenswrapper[4813]: I0219 19:56:52.051748 4813 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/barbican-api-748c6b9b7b-zzvwq" podStartSLOduration=2.051725395 podStartE2EDuration="2.051725395s" podCreationTimestamp="2026-02-19 19:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:56:52.045481352 +0000 UTC m=+5231.270921893" watchObservedRunningTime="2026-02-19 19:56:52.051725395 +0000 UTC m=+5231.277165936" Feb 19 19:56:52 crc kubenswrapper[4813]: I0219 19:56:52.471352 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:56:52 crc kubenswrapper[4813]: E0219 19:56:52.471877 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:56:52 crc kubenswrapper[4813]: I0219 19:56:52.973330 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" event={"ID":"7f1b41f7-0eab-4f57-902e-eff27bdde663","Type":"ContainerStarted","Data":"74cbb559fec467b7e74c6299a319c27bb415eb48d85fc8f284dc776bb9f17fa5"} Feb 19 19:56:52 crc kubenswrapper[4813]: I0219 19:56:52.973673 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:56:53 crc kubenswrapper[4813]: I0219 19:56:53.000737 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" podStartSLOduration=3.000712627 podStartE2EDuration="3.000712627s" podCreationTimestamp="2026-02-19 19:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:56:52.991402189 +0000 UTC m=+5232.216842740" watchObservedRunningTime="2026-02-19 19:56:53.000712627 +0000 UTC m=+5232.226153188" Feb 19 19:57:00 crc kubenswrapper[4813]: I0219 19:57:00.643148 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:57:00 crc kubenswrapper[4813]: I0219 19:57:00.709874 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784d68bcdf-4qrcw"] Feb 19 19:57:00 crc kubenswrapper[4813]: I0219 19:57:00.710164 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" podUID="7d7e98bc-abfe-4c60-9a7a-81ace51d739e" containerName="dnsmasq-dns" containerID="cri-o://0bda61f7326f1649a01a62ed74727ae21de8a335f85626ef04e7b5e00caccb76" gracePeriod=10 Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.066944 4813 generic.go:334] "Generic (PLEG): container finished" podID="7d7e98bc-abfe-4c60-9a7a-81ace51d739e" containerID="0bda61f7326f1649a01a62ed74727ae21de8a335f85626ef04e7b5e00caccb76" exitCode=0 Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.066981 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" event={"ID":"7d7e98bc-abfe-4c60-9a7a-81ace51d739e","Type":"ContainerDied","Data":"0bda61f7326f1649a01a62ed74727ae21de8a335f85626ef04e7b5e00caccb76"} Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.210911 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.286140 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6p92\" (UniqueName: \"kubernetes.io/projected/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-kube-api-access-z6p92\") pod \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.286415 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-config\") pod \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.286614 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-ovsdbserver-sb\") pod \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.286699 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-ovsdbserver-nb\") pod \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.286821 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-dns-svc\") pod \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\" (UID: \"7d7e98bc-abfe-4c60-9a7a-81ace51d739e\") " Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.309261 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-kube-api-access-z6p92" (OuterVolumeSpecName: "kube-api-access-z6p92") pod "7d7e98bc-abfe-4c60-9a7a-81ace51d739e" (UID: "7d7e98bc-abfe-4c60-9a7a-81ace51d739e"). InnerVolumeSpecName "kube-api-access-z6p92". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.328616 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7d7e98bc-abfe-4c60-9a7a-81ace51d739e" (UID: "7d7e98bc-abfe-4c60-9a7a-81ace51d739e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.329493 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7d7e98bc-abfe-4c60-9a7a-81ace51d739e" (UID: "7d7e98bc-abfe-4c60-9a7a-81ace51d739e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.339406 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7d7e98bc-abfe-4c60-9a7a-81ace51d739e" (UID: "7d7e98bc-abfe-4c60-9a7a-81ace51d739e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.353700 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-config" (OuterVolumeSpecName: "config") pod "7d7e98bc-abfe-4c60-9a7a-81ace51d739e" (UID: "7d7e98bc-abfe-4c60-9a7a-81ace51d739e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.389559 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.389593 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6p92\" (UniqueName: \"kubernetes.io/projected/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-kube-api-access-z6p92\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.389630 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.389648 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:01 crc kubenswrapper[4813]: I0219 19:57:01.389777 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d7e98bc-abfe-4c60-9a7a-81ace51d739e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:02 crc kubenswrapper[4813]: I0219 19:57:02.076902 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" event={"ID":"7d7e98bc-abfe-4c60-9a7a-81ace51d739e","Type":"ContainerDied","Data":"06ec38f780f400458793417dd4a339b68fb6ac1f37b5bacd5ad23a8eff95e1f2"} Feb 19 19:57:02 crc kubenswrapper[4813]: I0219 19:57:02.076915 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784d68bcdf-4qrcw" Feb 19 19:57:02 crc kubenswrapper[4813]: I0219 19:57:02.077249 4813 scope.go:117] "RemoveContainer" containerID="0bda61f7326f1649a01a62ed74727ae21de8a335f85626ef04e7b5e00caccb76" Feb 19 19:57:02 crc kubenswrapper[4813]: I0219 19:57:02.102482 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784d68bcdf-4qrcw"] Feb 19 19:57:02 crc kubenswrapper[4813]: I0219 19:57:02.110722 4813 scope.go:117] "RemoveContainer" containerID="b9f890843e84e9790c17a3615a461b4b68fdb4c518b82c4b7d48745df23da9c8" Feb 19 19:57:02 crc kubenswrapper[4813]: I0219 19:57:02.113164 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-784d68bcdf-4qrcw"] Feb 19 19:57:02 crc kubenswrapper[4813]: I0219 19:57:02.435690 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:57:02 crc kubenswrapper[4813]: I0219 19:57:02.456739 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-748c6b9b7b-zzvwq" Feb 19 19:57:03 crc kubenswrapper[4813]: I0219 19:57:03.483348 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d7e98bc-abfe-4c60-9a7a-81ace51d739e" path="/var/lib/kubelet/pods/7d7e98bc-abfe-4c60-9a7a-81ace51d739e/volumes" Feb 19 19:57:06 crc kubenswrapper[4813]: I0219 19:57:06.471482 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:57:06 crc kubenswrapper[4813]: E0219 19:57:06.472187 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" 
podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:57:08 crc kubenswrapper[4813]: I0219 19:57:08.046492 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-t9zb4"] Feb 19 19:57:08 crc kubenswrapper[4813]: I0219 19:57:08.052808 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-t9zb4"] Feb 19 19:57:09 crc kubenswrapper[4813]: I0219 19:57:09.482078 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30a87e56-d537-4563-b122-f6ca0132bf0d" path="/var/lib/kubelet/pods/30a87e56-d537-4563-b122-f6ca0132bf0d/volumes" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.304447 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-gsckp"] Feb 19 19:57:12 crc kubenswrapper[4813]: E0219 19:57:12.304844 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7e98bc-abfe-4c60-9a7a-81ace51d739e" containerName="dnsmasq-dns" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.304859 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7e98bc-abfe-4c60-9a7a-81ace51d739e" containerName="dnsmasq-dns" Feb 19 19:57:12 crc kubenswrapper[4813]: E0219 19:57:12.304893 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7e98bc-abfe-4c60-9a7a-81ace51d739e" containerName="init" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.304901 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7e98bc-abfe-4c60-9a7a-81ace51d739e" containerName="init" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.305101 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7e98bc-abfe-4c60-9a7a-81ace51d739e" containerName="dnsmasq-dns" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.305790 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-gsckp" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.317450 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gsckp"] Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.377849 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b2xf\" (UniqueName: \"kubernetes.io/projected/73e60321-561a-4a68-b9c6-d197f26fc6d6-kube-api-access-5b2xf\") pod \"neutron-db-create-gsckp\" (UID: \"73e60321-561a-4a68-b9c6-d197f26fc6d6\") " pod="openstack/neutron-db-create-gsckp" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.377903 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73e60321-561a-4a68-b9c6-d197f26fc6d6-operator-scripts\") pod \"neutron-db-create-gsckp\" (UID: \"73e60321-561a-4a68-b9c6-d197f26fc6d6\") " pod="openstack/neutron-db-create-gsckp" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.422393 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-a0cf-account-create-update-7ghvw"] Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.423429 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a0cf-account-create-update-7ghvw" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.425942 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.433189 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a0cf-account-create-update-7ghvw"] Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.479664 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9m4c\" (UniqueName: \"kubernetes.io/projected/6635e6e1-6786-4efd-8b75-96423fd91219-kube-api-access-s9m4c\") pod \"neutron-a0cf-account-create-update-7ghvw\" (UID: \"6635e6e1-6786-4efd-8b75-96423fd91219\") " pod="openstack/neutron-a0cf-account-create-update-7ghvw" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.479757 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b2xf\" (UniqueName: \"kubernetes.io/projected/73e60321-561a-4a68-b9c6-d197f26fc6d6-kube-api-access-5b2xf\") pod \"neutron-db-create-gsckp\" (UID: \"73e60321-561a-4a68-b9c6-d197f26fc6d6\") " pod="openstack/neutron-db-create-gsckp" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.479795 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73e60321-561a-4a68-b9c6-d197f26fc6d6-operator-scripts\") pod \"neutron-db-create-gsckp\" (UID: \"73e60321-561a-4a68-b9c6-d197f26fc6d6\") " pod="openstack/neutron-db-create-gsckp" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.479832 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6635e6e1-6786-4efd-8b75-96423fd91219-operator-scripts\") pod \"neutron-a0cf-account-create-update-7ghvw\" (UID: 
\"6635e6e1-6786-4efd-8b75-96423fd91219\") " pod="openstack/neutron-a0cf-account-create-update-7ghvw" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.482315 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73e60321-561a-4a68-b9c6-d197f26fc6d6-operator-scripts\") pod \"neutron-db-create-gsckp\" (UID: \"73e60321-561a-4a68-b9c6-d197f26fc6d6\") " pod="openstack/neutron-db-create-gsckp" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.498248 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b2xf\" (UniqueName: \"kubernetes.io/projected/73e60321-561a-4a68-b9c6-d197f26fc6d6-kube-api-access-5b2xf\") pod \"neutron-db-create-gsckp\" (UID: \"73e60321-561a-4a68-b9c6-d197f26fc6d6\") " pod="openstack/neutron-db-create-gsckp" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.581300 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9m4c\" (UniqueName: \"kubernetes.io/projected/6635e6e1-6786-4efd-8b75-96423fd91219-kube-api-access-s9m4c\") pod \"neutron-a0cf-account-create-update-7ghvw\" (UID: \"6635e6e1-6786-4efd-8b75-96423fd91219\") " pod="openstack/neutron-a0cf-account-create-update-7ghvw" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.581378 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6635e6e1-6786-4efd-8b75-96423fd91219-operator-scripts\") pod \"neutron-a0cf-account-create-update-7ghvw\" (UID: \"6635e6e1-6786-4efd-8b75-96423fd91219\") " pod="openstack/neutron-a0cf-account-create-update-7ghvw" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.582156 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6635e6e1-6786-4efd-8b75-96423fd91219-operator-scripts\") pod 
\"neutron-a0cf-account-create-update-7ghvw\" (UID: \"6635e6e1-6786-4efd-8b75-96423fd91219\") " pod="openstack/neutron-a0cf-account-create-update-7ghvw" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.598537 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9m4c\" (UniqueName: \"kubernetes.io/projected/6635e6e1-6786-4efd-8b75-96423fd91219-kube-api-access-s9m4c\") pod \"neutron-a0cf-account-create-update-7ghvw\" (UID: \"6635e6e1-6786-4efd-8b75-96423fd91219\") " pod="openstack/neutron-a0cf-account-create-update-7ghvw" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.624656 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gsckp" Feb 19 19:57:12 crc kubenswrapper[4813]: I0219 19:57:12.741463 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a0cf-account-create-update-7ghvw" Feb 19 19:57:13 crc kubenswrapper[4813]: I0219 19:57:13.129209 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gsckp"] Feb 19 19:57:13 crc kubenswrapper[4813]: W0219 19:57:13.145150 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73e60321_561a_4a68_b9c6_d197f26fc6d6.slice/crio-d48ec322239a986fb2b6b98176e25bf4ea280aaeea78b9d9e40aa042b3de8bec WatchSource:0}: Error finding container d48ec322239a986fb2b6b98176e25bf4ea280aaeea78b9d9e40aa042b3de8bec: Status 404 returned error can't find the container with id d48ec322239a986fb2b6b98176e25bf4ea280aaeea78b9d9e40aa042b3de8bec Feb 19 19:57:13 crc kubenswrapper[4813]: I0219 19:57:13.177574 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gsckp" event={"ID":"73e60321-561a-4a68-b9c6-d197f26fc6d6","Type":"ContainerStarted","Data":"d48ec322239a986fb2b6b98176e25bf4ea280aaeea78b9d9e40aa042b3de8bec"} Feb 19 19:57:13 crc kubenswrapper[4813]: 
W0219 19:57:13.238764 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6635e6e1_6786_4efd_8b75_96423fd91219.slice/crio-55d13e531a5b0e496107685665be7978081f6b2832389c9aa84869594ccd1456 WatchSource:0}: Error finding container 55d13e531a5b0e496107685665be7978081f6b2832389c9aa84869594ccd1456: Status 404 returned error can't find the container with id 55d13e531a5b0e496107685665be7978081f6b2832389c9aa84869594ccd1456 Feb 19 19:57:13 crc kubenswrapper[4813]: I0219 19:57:13.239499 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a0cf-account-create-update-7ghvw"] Feb 19 19:57:14 crc kubenswrapper[4813]: I0219 19:57:14.186065 4813 generic.go:334] "Generic (PLEG): container finished" podID="73e60321-561a-4a68-b9c6-d197f26fc6d6" containerID="901af8cf9f4076f6ef836e7266c838a35655300a2acbddcf0b99b721435aacf0" exitCode=0 Feb 19 19:57:14 crc kubenswrapper[4813]: I0219 19:57:14.186223 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gsckp" event={"ID":"73e60321-561a-4a68-b9c6-d197f26fc6d6","Type":"ContainerDied","Data":"901af8cf9f4076f6ef836e7266c838a35655300a2acbddcf0b99b721435aacf0"} Feb 19 19:57:14 crc kubenswrapper[4813]: I0219 19:57:14.187870 4813 generic.go:334] "Generic (PLEG): container finished" podID="6635e6e1-6786-4efd-8b75-96423fd91219" containerID="92c0a4bf9042bb3b779d3bb566cfcedc731811c0187fc6e44eba0afd89457312" exitCode=0 Feb 19 19:57:14 crc kubenswrapper[4813]: I0219 19:57:14.187903 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a0cf-account-create-update-7ghvw" event={"ID":"6635e6e1-6786-4efd-8b75-96423fd91219","Type":"ContainerDied","Data":"92c0a4bf9042bb3b779d3bb566cfcedc731811c0187fc6e44eba0afd89457312"} Feb 19 19:57:14 crc kubenswrapper[4813]: I0219 19:57:14.187924 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a0cf-account-create-update-7ghvw" 
event={"ID":"6635e6e1-6786-4efd-8b75-96423fd91219","Type":"ContainerStarted","Data":"55d13e531a5b0e496107685665be7978081f6b2832389c9aa84869594ccd1456"} Feb 19 19:57:15 crc kubenswrapper[4813]: I0219 19:57:15.590478 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gsckp" Feb 19 19:57:15 crc kubenswrapper[4813]: I0219 19:57:15.632435 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73e60321-561a-4a68-b9c6-d197f26fc6d6-operator-scripts\") pod \"73e60321-561a-4a68-b9c6-d197f26fc6d6\" (UID: \"73e60321-561a-4a68-b9c6-d197f26fc6d6\") " Feb 19 19:57:15 crc kubenswrapper[4813]: I0219 19:57:15.632588 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b2xf\" (UniqueName: \"kubernetes.io/projected/73e60321-561a-4a68-b9c6-d197f26fc6d6-kube-api-access-5b2xf\") pod \"73e60321-561a-4a68-b9c6-d197f26fc6d6\" (UID: \"73e60321-561a-4a68-b9c6-d197f26fc6d6\") " Feb 19 19:57:15 crc kubenswrapper[4813]: I0219 19:57:15.634739 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73e60321-561a-4a68-b9c6-d197f26fc6d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73e60321-561a-4a68-b9c6-d197f26fc6d6" (UID: "73e60321-561a-4a68-b9c6-d197f26fc6d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:57:15 crc kubenswrapper[4813]: I0219 19:57:15.639292 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e60321-561a-4a68-b9c6-d197f26fc6d6-kube-api-access-5b2xf" (OuterVolumeSpecName: "kube-api-access-5b2xf") pod "73e60321-561a-4a68-b9c6-d197f26fc6d6" (UID: "73e60321-561a-4a68-b9c6-d197f26fc6d6"). InnerVolumeSpecName "kube-api-access-5b2xf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:57:15 crc kubenswrapper[4813]: I0219 19:57:15.680042 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a0cf-account-create-update-7ghvw" Feb 19 19:57:15 crc kubenswrapper[4813]: I0219 19:57:15.734880 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9m4c\" (UniqueName: \"kubernetes.io/projected/6635e6e1-6786-4efd-8b75-96423fd91219-kube-api-access-s9m4c\") pod \"6635e6e1-6786-4efd-8b75-96423fd91219\" (UID: \"6635e6e1-6786-4efd-8b75-96423fd91219\") " Feb 19 19:57:15 crc kubenswrapper[4813]: I0219 19:57:15.735029 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6635e6e1-6786-4efd-8b75-96423fd91219-operator-scripts\") pod \"6635e6e1-6786-4efd-8b75-96423fd91219\" (UID: \"6635e6e1-6786-4efd-8b75-96423fd91219\") " Feb 19 19:57:15 crc kubenswrapper[4813]: I0219 19:57:15.735465 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b2xf\" (UniqueName: \"kubernetes.io/projected/73e60321-561a-4a68-b9c6-d197f26fc6d6-kube-api-access-5b2xf\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:15 crc kubenswrapper[4813]: I0219 19:57:15.735488 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73e60321-561a-4a68-b9c6-d197f26fc6d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:15 crc kubenswrapper[4813]: I0219 19:57:15.735528 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6635e6e1-6786-4efd-8b75-96423fd91219-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6635e6e1-6786-4efd-8b75-96423fd91219" (UID: "6635e6e1-6786-4efd-8b75-96423fd91219"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:57:15 crc kubenswrapper[4813]: I0219 19:57:15.738867 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6635e6e1-6786-4efd-8b75-96423fd91219-kube-api-access-s9m4c" (OuterVolumeSpecName: "kube-api-access-s9m4c") pod "6635e6e1-6786-4efd-8b75-96423fd91219" (UID: "6635e6e1-6786-4efd-8b75-96423fd91219"). InnerVolumeSpecName "kube-api-access-s9m4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:57:15 crc kubenswrapper[4813]: I0219 19:57:15.837553 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9m4c\" (UniqueName: \"kubernetes.io/projected/6635e6e1-6786-4efd-8b75-96423fd91219-kube-api-access-s9m4c\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:15 crc kubenswrapper[4813]: I0219 19:57:15.837636 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6635e6e1-6786-4efd-8b75-96423fd91219-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:16 crc kubenswrapper[4813]: I0219 19:57:16.206119 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-gsckp" Feb 19 19:57:16 crc kubenswrapper[4813]: I0219 19:57:16.206125 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gsckp" event={"ID":"73e60321-561a-4a68-b9c6-d197f26fc6d6","Type":"ContainerDied","Data":"d48ec322239a986fb2b6b98176e25bf4ea280aaeea78b9d9e40aa042b3de8bec"} Feb 19 19:57:16 crc kubenswrapper[4813]: I0219 19:57:16.206700 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d48ec322239a986fb2b6b98176e25bf4ea280aaeea78b9d9e40aa042b3de8bec" Feb 19 19:57:16 crc kubenswrapper[4813]: I0219 19:57:16.208409 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a0cf-account-create-update-7ghvw" event={"ID":"6635e6e1-6786-4efd-8b75-96423fd91219","Type":"ContainerDied","Data":"55d13e531a5b0e496107685665be7978081f6b2832389c9aa84869594ccd1456"} Feb 19 19:57:16 crc kubenswrapper[4813]: I0219 19:57:16.208450 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55d13e531a5b0e496107685665be7978081f6b2832389c9aa84869594ccd1456" Feb 19 19:57:16 crc kubenswrapper[4813]: I0219 19:57:16.208509 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a0cf-account-create-update-7ghvw" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.636241 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-65dzd"] Feb 19 19:57:17 crc kubenswrapper[4813]: E0219 19:57:17.636616 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e60321-561a-4a68-b9c6-d197f26fc6d6" containerName="mariadb-database-create" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.636634 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e60321-561a-4a68-b9c6-d197f26fc6d6" containerName="mariadb-database-create" Feb 19 19:57:17 crc kubenswrapper[4813]: E0219 19:57:17.636673 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6635e6e1-6786-4efd-8b75-96423fd91219" containerName="mariadb-account-create-update" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.636681 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6635e6e1-6786-4efd-8b75-96423fd91219" containerName="mariadb-account-create-update" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.636877 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6635e6e1-6786-4efd-8b75-96423fd91219" containerName="mariadb-account-create-update" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.636903 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e60321-561a-4a68-b9c6-d197f26fc6d6" containerName="mariadb-database-create" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.637556 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-65dzd" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.639300 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.639375 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.640445 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qdjjf" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.668700 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-config\") pod \"neutron-db-sync-65dzd\" (UID: \"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b\") " pod="openstack/neutron-db-sync-65dzd" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.668766 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x428q\" (UniqueName: \"kubernetes.io/projected/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-kube-api-access-x428q\") pod \"neutron-db-sync-65dzd\" (UID: \"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b\") " pod="openstack/neutron-db-sync-65dzd" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.668832 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-combined-ca-bundle\") pod \"neutron-db-sync-65dzd\" (UID: \"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b\") " pod="openstack/neutron-db-sync-65dzd" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.689372 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-65dzd"] Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.770305 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-config\") pod \"neutron-db-sync-65dzd\" (UID: \"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b\") " pod="openstack/neutron-db-sync-65dzd" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.770365 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x428q\" (UniqueName: \"kubernetes.io/projected/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-kube-api-access-x428q\") pod \"neutron-db-sync-65dzd\" (UID: \"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b\") " pod="openstack/neutron-db-sync-65dzd" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.770431 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-combined-ca-bundle\") pod \"neutron-db-sync-65dzd\" (UID: \"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b\") " pod="openstack/neutron-db-sync-65dzd" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.783436 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-config\") pod \"neutron-db-sync-65dzd\" (UID: \"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b\") " pod="openstack/neutron-db-sync-65dzd" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.795574 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x428q\" (UniqueName: \"kubernetes.io/projected/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-kube-api-access-x428q\") pod \"neutron-db-sync-65dzd\" (UID: \"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b\") " pod="openstack/neutron-db-sync-65dzd" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.800694 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-combined-ca-bundle\") pod \"neutron-db-sync-65dzd\" (UID: \"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b\") " pod="openstack/neutron-db-sync-65dzd" Feb 19 19:57:17 crc kubenswrapper[4813]: I0219 19:57:17.982266 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-65dzd" Feb 19 19:57:18 crc kubenswrapper[4813]: I0219 19:57:18.467683 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-65dzd"] Feb 19 19:57:19 crc kubenswrapper[4813]: I0219 19:57:19.236518 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-65dzd" event={"ID":"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b","Type":"ContainerStarted","Data":"e98b49ddb1952baade3c95b6d16c78a8ae52f329b1a7e0382cd3db9d4024f0a9"} Feb 19 19:57:19 crc kubenswrapper[4813]: I0219 19:57:19.236904 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-65dzd" event={"ID":"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b","Type":"ContainerStarted","Data":"d721523d7b4b0bb77d8d9d675529d269a992d1898a13832180b8ba3da2fa248e"} Feb 19 19:57:19 crc kubenswrapper[4813]: I0219 19:57:19.260723 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-65dzd" podStartSLOduration=2.260694167 podStartE2EDuration="2.260694167s" podCreationTimestamp="2026-02-19 19:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:57:19.251773561 +0000 UTC m=+5258.477214152" watchObservedRunningTime="2026-02-19 19:57:19.260694167 +0000 UTC m=+5258.486134748" Feb 19 19:57:20 crc kubenswrapper[4813]: I0219 19:57:20.471122 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:57:20 crc kubenswrapper[4813]: E0219 19:57:20.471360 4813 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:57:23 crc kubenswrapper[4813]: I0219 19:57:23.268907 4813 generic.go:334] "Generic (PLEG): container finished" podID="ca8d65e5-8a02-426f-aab4-b2ee48bfb93b" containerID="e98b49ddb1952baade3c95b6d16c78a8ae52f329b1a7e0382cd3db9d4024f0a9" exitCode=0 Feb 19 19:57:23 crc kubenswrapper[4813]: I0219 19:57:23.269041 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-65dzd" event={"ID":"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b","Type":"ContainerDied","Data":"e98b49ddb1952baade3c95b6d16c78a8ae52f329b1a7e0382cd3db9d4024f0a9"} Feb 19 19:57:24 crc kubenswrapper[4813]: I0219 19:57:24.593338 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-65dzd" Feb 19 19:57:24 crc kubenswrapper[4813]: I0219 19:57:24.692156 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-combined-ca-bundle\") pod \"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b\" (UID: \"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b\") " Feb 19 19:57:24 crc kubenswrapper[4813]: I0219 19:57:24.692212 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-config\") pod \"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b\" (UID: \"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b\") " Feb 19 19:57:24 crc kubenswrapper[4813]: I0219 19:57:24.692300 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x428q\" (UniqueName: \"kubernetes.io/projected/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-kube-api-access-x428q\") pod \"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b\" (UID: \"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b\") " Feb 19 19:57:24 crc kubenswrapper[4813]: I0219 19:57:24.698642 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-kube-api-access-x428q" (OuterVolumeSpecName: "kube-api-access-x428q") pod "ca8d65e5-8a02-426f-aab4-b2ee48bfb93b" (UID: "ca8d65e5-8a02-426f-aab4-b2ee48bfb93b"). InnerVolumeSpecName "kube-api-access-x428q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:57:24 crc kubenswrapper[4813]: I0219 19:57:24.715585 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-config" (OuterVolumeSpecName: "config") pod "ca8d65e5-8a02-426f-aab4-b2ee48bfb93b" (UID: "ca8d65e5-8a02-426f-aab4-b2ee48bfb93b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:57:24 crc kubenswrapper[4813]: I0219 19:57:24.717738 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca8d65e5-8a02-426f-aab4-b2ee48bfb93b" (UID: "ca8d65e5-8a02-426f-aab4-b2ee48bfb93b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:57:24 crc kubenswrapper[4813]: I0219 19:57:24.794640 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x428q\" (UniqueName: \"kubernetes.io/projected/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-kube-api-access-x428q\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:24 crc kubenswrapper[4813]: I0219 19:57:24.794685 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:24 crc kubenswrapper[4813]: I0219 19:57:24.794694 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.288716 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-65dzd" event={"ID":"ca8d65e5-8a02-426f-aab4-b2ee48bfb93b","Type":"ContainerDied","Data":"d721523d7b4b0bb77d8d9d675529d269a992d1898a13832180b8ba3da2fa248e"} Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.288798 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d721523d7b4b0bb77d8d9d675529d269a992d1898a13832180b8ba3da2fa248e" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.289018 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-65dzd" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.446878 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7854cb56d9-jcfdt"] Feb 19 19:57:25 crc kubenswrapper[4813]: E0219 19:57:25.447355 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8d65e5-8a02-426f-aab4-b2ee48bfb93b" containerName="neutron-db-sync" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.447374 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8d65e5-8a02-426f-aab4-b2ee48bfb93b" containerName="neutron-db-sync" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.447585 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca8d65e5-8a02-426f-aab4-b2ee48bfb93b" containerName="neutron-db-sync" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.451464 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.473704 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7854cb56d9-jcfdt"] Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.617057 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-ovsdbserver-nb\") pod \"dnsmasq-dns-7854cb56d9-jcfdt\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.617206 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqcqj\" (UniqueName: \"kubernetes.io/projected/1f1a98f7-4c83-4106-a6c8-09a2a4057544-kube-api-access-hqcqj\") pod \"dnsmasq-dns-7854cb56d9-jcfdt\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " 
pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.617362 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-dns-svc\") pod \"dnsmasq-dns-7854cb56d9-jcfdt\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.617420 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-ovsdbserver-sb\") pod \"dnsmasq-dns-7854cb56d9-jcfdt\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.617487 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-config\") pod \"dnsmasq-dns-7854cb56d9-jcfdt\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.685919 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7cb766c549-4vh2s"] Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.687735 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7cb766c549-4vh2s" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.690576 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.690909 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.691097 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qdjjf" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.703282 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cb766c549-4vh2s"] Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.726584 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-ovsdbserver-nb\") pod \"dnsmasq-dns-7854cb56d9-jcfdt\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.726687 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqcqj\" (UniqueName: \"kubernetes.io/projected/1f1a98f7-4c83-4106-a6c8-09a2a4057544-kube-api-access-hqcqj\") pod \"dnsmasq-dns-7854cb56d9-jcfdt\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.726723 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-dns-svc\") pod \"dnsmasq-dns-7854cb56d9-jcfdt\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.726742 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-ovsdbserver-sb\") pod \"dnsmasq-dns-7854cb56d9-jcfdt\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.726774 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-config\") pod \"dnsmasq-dns-7854cb56d9-jcfdt\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.727693 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-ovsdbserver-nb\") pod \"dnsmasq-dns-7854cb56d9-jcfdt\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.731141 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-ovsdbserver-sb\") pod \"dnsmasq-dns-7854cb56d9-jcfdt\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.731160 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-dns-svc\") pod \"dnsmasq-dns-7854cb56d9-jcfdt\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.731179 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-config\") pod \"dnsmasq-dns-7854cb56d9-jcfdt\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.772936 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqcqj\" (UniqueName: \"kubernetes.io/projected/1f1a98f7-4c83-4106-a6c8-09a2a4057544-kube-api-access-hqcqj\") pod \"dnsmasq-dns-7854cb56d9-jcfdt\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.794653 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.828199 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849e45f9-9e3d-4128-8143-90e3e920c2b5-combined-ca-bundle\") pod \"neutron-7cb766c549-4vh2s\" (UID: \"849e45f9-9e3d-4128-8143-90e3e920c2b5\") " pod="openstack/neutron-7cb766c549-4vh2s" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.828261 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgrlg\" (UniqueName: \"kubernetes.io/projected/849e45f9-9e3d-4128-8143-90e3e920c2b5-kube-api-access-mgrlg\") pod \"neutron-7cb766c549-4vh2s\" (UID: \"849e45f9-9e3d-4128-8143-90e3e920c2b5\") " pod="openstack/neutron-7cb766c549-4vh2s" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.828800 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/849e45f9-9e3d-4128-8143-90e3e920c2b5-httpd-config\") pod \"neutron-7cb766c549-4vh2s\" (UID: \"849e45f9-9e3d-4128-8143-90e3e920c2b5\") " pod="openstack/neutron-7cb766c549-4vh2s" Feb 
19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.828875 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/849e45f9-9e3d-4128-8143-90e3e920c2b5-config\") pod \"neutron-7cb766c549-4vh2s\" (UID: \"849e45f9-9e3d-4128-8143-90e3e920c2b5\") " pod="openstack/neutron-7cb766c549-4vh2s" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.930945 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/849e45f9-9e3d-4128-8143-90e3e920c2b5-config\") pod \"neutron-7cb766c549-4vh2s\" (UID: \"849e45f9-9e3d-4128-8143-90e3e920c2b5\") " pod="openstack/neutron-7cb766c549-4vh2s" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.937130 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849e45f9-9e3d-4128-8143-90e3e920c2b5-combined-ca-bundle\") pod \"neutron-7cb766c549-4vh2s\" (UID: \"849e45f9-9e3d-4128-8143-90e3e920c2b5\") " pod="openstack/neutron-7cb766c549-4vh2s" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.937216 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgrlg\" (UniqueName: \"kubernetes.io/projected/849e45f9-9e3d-4128-8143-90e3e920c2b5-kube-api-access-mgrlg\") pod \"neutron-7cb766c549-4vh2s\" (UID: \"849e45f9-9e3d-4128-8143-90e3e920c2b5\") " pod="openstack/neutron-7cb766c549-4vh2s" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.938172 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/849e45f9-9e3d-4128-8143-90e3e920c2b5-httpd-config\") pod \"neutron-7cb766c549-4vh2s\" (UID: \"849e45f9-9e3d-4128-8143-90e3e920c2b5\") " pod="openstack/neutron-7cb766c549-4vh2s" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.945480 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/849e45f9-9e3d-4128-8143-90e3e920c2b5-config\") pod \"neutron-7cb766c549-4vh2s\" (UID: \"849e45f9-9e3d-4128-8143-90e3e920c2b5\") " pod="openstack/neutron-7cb766c549-4vh2s" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.953749 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/849e45f9-9e3d-4128-8143-90e3e920c2b5-httpd-config\") pod \"neutron-7cb766c549-4vh2s\" (UID: \"849e45f9-9e3d-4128-8143-90e3e920c2b5\") " pod="openstack/neutron-7cb766c549-4vh2s" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.957149 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849e45f9-9e3d-4128-8143-90e3e920c2b5-combined-ca-bundle\") pod \"neutron-7cb766c549-4vh2s\" (UID: \"849e45f9-9e3d-4128-8143-90e3e920c2b5\") " pod="openstack/neutron-7cb766c549-4vh2s" Feb 19 19:57:25 crc kubenswrapper[4813]: I0219 19:57:25.963748 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgrlg\" (UniqueName: \"kubernetes.io/projected/849e45f9-9e3d-4128-8143-90e3e920c2b5-kube-api-access-mgrlg\") pod \"neutron-7cb766c549-4vh2s\" (UID: \"849e45f9-9e3d-4128-8143-90e3e920c2b5\") " pod="openstack/neutron-7cb766c549-4vh2s" Feb 19 19:57:26 crc kubenswrapper[4813]: I0219 19:57:26.028873 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7cb766c549-4vh2s" Feb 19 19:57:26 crc kubenswrapper[4813]: I0219 19:57:26.256369 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7854cb56d9-jcfdt"] Feb 19 19:57:26 crc kubenswrapper[4813]: I0219 19:57:26.300820 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" event={"ID":"1f1a98f7-4c83-4106-a6c8-09a2a4057544","Type":"ContainerStarted","Data":"17d19983310728df727f965f23fd6c9086805d365a91f4245e8ddde0d01e6ca2"} Feb 19 19:57:26 crc kubenswrapper[4813]: I0219 19:57:26.593569 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cb766c549-4vh2s"] Feb 19 19:57:26 crc kubenswrapper[4813]: W0219 19:57:26.594401 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod849e45f9_9e3d_4128_8143_90e3e920c2b5.slice/crio-d77ec96ec898f26b7e900cf0e16c869f9c6d3b9f442ac1e79d967344b3ba86cc WatchSource:0}: Error finding container d77ec96ec898f26b7e900cf0e16c869f9c6d3b9f442ac1e79d967344b3ba86cc: Status 404 returned error can't find the container with id d77ec96ec898f26b7e900cf0e16c869f9c6d3b9f442ac1e79d967344b3ba86cc Feb 19 19:57:27 crc kubenswrapper[4813]: I0219 19:57:27.308415 4813 generic.go:334] "Generic (PLEG): container finished" podID="1f1a98f7-4c83-4106-a6c8-09a2a4057544" containerID="289416b42dcd2d9197e96d12bc7d6320319df2a0ac7f67c4134190a853cb045f" exitCode=0 Feb 19 19:57:27 crc kubenswrapper[4813]: I0219 19:57:27.308486 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" event={"ID":"1f1a98f7-4c83-4106-a6c8-09a2a4057544","Type":"ContainerDied","Data":"289416b42dcd2d9197e96d12bc7d6320319df2a0ac7f67c4134190a853cb045f"} Feb 19 19:57:27 crc kubenswrapper[4813]: I0219 19:57:27.311692 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cb766c549-4vh2s" 
event={"ID":"849e45f9-9e3d-4128-8143-90e3e920c2b5","Type":"ContainerStarted","Data":"d22bd446dfd5a805e304227340c7810246974c6e20cf95a637d7c7171aaed7ec"} Feb 19 19:57:27 crc kubenswrapper[4813]: I0219 19:57:27.311724 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cb766c549-4vh2s" event={"ID":"849e45f9-9e3d-4128-8143-90e3e920c2b5","Type":"ContainerStarted","Data":"da37614dd4c7071d44f1c127869cc16a30f45fd1e0ed521a525d8ef439195615"} Feb 19 19:57:27 crc kubenswrapper[4813]: I0219 19:57:27.311737 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cb766c549-4vh2s" event={"ID":"849e45f9-9e3d-4128-8143-90e3e920c2b5","Type":"ContainerStarted","Data":"d77ec96ec898f26b7e900cf0e16c869f9c6d3b9f442ac1e79d967344b3ba86cc"} Feb 19 19:57:27 crc kubenswrapper[4813]: I0219 19:57:27.311830 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7cb766c549-4vh2s" Feb 19 19:57:27 crc kubenswrapper[4813]: I0219 19:57:27.365294 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7cb766c549-4vh2s" podStartSLOduration=2.365268515 podStartE2EDuration="2.365268515s" podCreationTimestamp="2026-02-19 19:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:57:27.361416095 +0000 UTC m=+5266.586856646" watchObservedRunningTime="2026-02-19 19:57:27.365268515 +0000 UTC m=+5266.590709066" Feb 19 19:57:28 crc kubenswrapper[4813]: I0219 19:57:28.321553 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" event={"ID":"1f1a98f7-4c83-4106-a6c8-09a2a4057544","Type":"ContainerStarted","Data":"fc43fbcce36e5d9f8aba6cd85d39d2a3ce0c090a3c19280d6a7b64db0ac351ed"} Feb 19 19:57:28 crc kubenswrapper[4813]: I0219 19:57:28.322827 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:28 crc kubenswrapper[4813]: I0219 19:57:28.348662 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" podStartSLOduration=3.34863741 podStartE2EDuration="3.34863741s" podCreationTimestamp="2026-02-19 19:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:57:28.338539617 +0000 UTC m=+5267.563980158" watchObservedRunningTime="2026-02-19 19:57:28.34863741 +0000 UTC m=+5267.574077951" Feb 19 19:57:35 crc kubenswrapper[4813]: I0219 19:57:35.472536 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:57:35 crc kubenswrapper[4813]: E0219 19:57:35.473302 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:57:35 crc kubenswrapper[4813]: I0219 19:57:35.797155 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:57:35 crc kubenswrapper[4813]: I0219 19:57:35.851228 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6585cd964f-lcwmb"] Feb 19 19:57:35 crc kubenswrapper[4813]: I0219 19:57:35.851520 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" podUID="7f1b41f7-0eab-4f57-902e-eff27bdde663" containerName="dnsmasq-dns" containerID="cri-o://74cbb559fec467b7e74c6299a319c27bb415eb48d85fc8f284dc776bb9f17fa5" gracePeriod=10 Feb 19 
19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.406376 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.543673 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-ovsdbserver-sb\") pod \"7f1b41f7-0eab-4f57-902e-eff27bdde663\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.543748 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-dns-svc\") pod \"7f1b41f7-0eab-4f57-902e-eff27bdde663\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.543776 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv22l\" (UniqueName: \"kubernetes.io/projected/7f1b41f7-0eab-4f57-902e-eff27bdde663-kube-api-access-mv22l\") pod \"7f1b41f7-0eab-4f57-902e-eff27bdde663\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.543845 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-ovsdbserver-nb\") pod \"7f1b41f7-0eab-4f57-902e-eff27bdde663\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.543937 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-config\") pod \"7f1b41f7-0eab-4f57-902e-eff27bdde663\" (UID: \"7f1b41f7-0eab-4f57-902e-eff27bdde663\") " Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.552253 
4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f1b41f7-0eab-4f57-902e-eff27bdde663-kube-api-access-mv22l" (OuterVolumeSpecName: "kube-api-access-mv22l") pod "7f1b41f7-0eab-4f57-902e-eff27bdde663" (UID: "7f1b41f7-0eab-4f57-902e-eff27bdde663"). InnerVolumeSpecName "kube-api-access-mv22l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.585053 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f1b41f7-0eab-4f57-902e-eff27bdde663" (UID: "7f1b41f7-0eab-4f57-902e-eff27bdde663"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.587545 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f1b41f7-0eab-4f57-902e-eff27bdde663" (UID: "7f1b41f7-0eab-4f57-902e-eff27bdde663"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.597171 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-config" (OuterVolumeSpecName: "config") pod "7f1b41f7-0eab-4f57-902e-eff27bdde663" (UID: "7f1b41f7-0eab-4f57-902e-eff27bdde663"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.608384 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f1b41f7-0eab-4f57-902e-eff27bdde663" (UID: "7f1b41f7-0eab-4f57-902e-eff27bdde663"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.646206 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.646247 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.646258 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv22l\" (UniqueName: \"kubernetes.io/projected/7f1b41f7-0eab-4f57-902e-eff27bdde663-kube-api-access-mv22l\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.646267 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.646276 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f1b41f7-0eab-4f57-902e-eff27bdde663-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.894592 4813 generic.go:334] "Generic (PLEG): container finished" podID="7f1b41f7-0eab-4f57-902e-eff27bdde663" 
containerID="74cbb559fec467b7e74c6299a319c27bb415eb48d85fc8f284dc776bb9f17fa5" exitCode=0 Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.894650 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.894651 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" event={"ID":"7f1b41f7-0eab-4f57-902e-eff27bdde663","Type":"ContainerDied","Data":"74cbb559fec467b7e74c6299a319c27bb415eb48d85fc8f284dc776bb9f17fa5"} Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.894752 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6585cd964f-lcwmb" event={"ID":"7f1b41f7-0eab-4f57-902e-eff27bdde663","Type":"ContainerDied","Data":"b4720f76ada52e57961aa4173106088c704a23b8989207b24a5a48373a31503d"} Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.894769 4813 scope.go:117] "RemoveContainer" containerID="74cbb559fec467b7e74c6299a319c27bb415eb48d85fc8f284dc776bb9f17fa5" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.913503 4813 scope.go:117] "RemoveContainer" containerID="6dd0a21a756d64f0bf24e46318c24c0ef6c522017d4b8afe0a54326c62606ad1" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.934494 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6585cd964f-lcwmb"] Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.934902 4813 scope.go:117] "RemoveContainer" containerID="74cbb559fec467b7e74c6299a319c27bb415eb48d85fc8f284dc776bb9f17fa5" Feb 19 19:57:36 crc kubenswrapper[4813]: E0219 19:57:36.935294 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74cbb559fec467b7e74c6299a319c27bb415eb48d85fc8f284dc776bb9f17fa5\": container with ID starting with 74cbb559fec467b7e74c6299a319c27bb415eb48d85fc8f284dc776bb9f17fa5 not found: ID does not exist" 
containerID="74cbb559fec467b7e74c6299a319c27bb415eb48d85fc8f284dc776bb9f17fa5" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.935344 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74cbb559fec467b7e74c6299a319c27bb415eb48d85fc8f284dc776bb9f17fa5"} err="failed to get container status \"74cbb559fec467b7e74c6299a319c27bb415eb48d85fc8f284dc776bb9f17fa5\": rpc error: code = NotFound desc = could not find container \"74cbb559fec467b7e74c6299a319c27bb415eb48d85fc8f284dc776bb9f17fa5\": container with ID starting with 74cbb559fec467b7e74c6299a319c27bb415eb48d85fc8f284dc776bb9f17fa5 not found: ID does not exist" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.935380 4813 scope.go:117] "RemoveContainer" containerID="6dd0a21a756d64f0bf24e46318c24c0ef6c522017d4b8afe0a54326c62606ad1" Feb 19 19:57:36 crc kubenswrapper[4813]: E0219 19:57:36.935694 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dd0a21a756d64f0bf24e46318c24c0ef6c522017d4b8afe0a54326c62606ad1\": container with ID starting with 6dd0a21a756d64f0bf24e46318c24c0ef6c522017d4b8afe0a54326c62606ad1 not found: ID does not exist" containerID="6dd0a21a756d64f0bf24e46318c24c0ef6c522017d4b8afe0a54326c62606ad1" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.935728 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dd0a21a756d64f0bf24e46318c24c0ef6c522017d4b8afe0a54326c62606ad1"} err="failed to get container status \"6dd0a21a756d64f0bf24e46318c24c0ef6c522017d4b8afe0a54326c62606ad1\": rpc error: code = NotFound desc = could not find container \"6dd0a21a756d64f0bf24e46318c24c0ef6c522017d4b8afe0a54326c62606ad1\": container with ID starting with 6dd0a21a756d64f0bf24e46318c24c0ef6c522017d4b8afe0a54326c62606ad1 not found: ID does not exist" Feb 19 19:57:36 crc kubenswrapper[4813]: I0219 19:57:36.944704 4813 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6585cd964f-lcwmb"] Feb 19 19:57:37 crc kubenswrapper[4813]: I0219 19:57:37.483557 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f1b41f7-0eab-4f57-902e-eff27bdde663" path="/var/lib/kubelet/pods/7f1b41f7-0eab-4f57-902e-eff27bdde663/volumes" Feb 19 19:57:48 crc kubenswrapper[4813]: I0219 19:57:48.472784 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:57:48 crc kubenswrapper[4813]: E0219 19:57:48.473859 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:57:52 crc kubenswrapper[4813]: I0219 19:57:52.370649 4813 scope.go:117] "RemoveContainer" containerID="3502f5519457799330cc0c94e894fc1fa203db192142e76e14264c01c1168c07" Feb 19 19:57:56 crc kubenswrapper[4813]: I0219 19:57:56.037371 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7cb766c549-4vh2s" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.472029 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:58:02 crc kubenswrapper[4813]: E0219 19:58:02.472732 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" 
podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.532975 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-qgx9f"] Feb 19 19:58:02 crc kubenswrapper[4813]: E0219 19:58:02.533395 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f1b41f7-0eab-4f57-902e-eff27bdde663" containerName="dnsmasq-dns" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.533418 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f1b41f7-0eab-4f57-902e-eff27bdde663" containerName="dnsmasq-dns" Feb 19 19:58:02 crc kubenswrapper[4813]: E0219 19:58:02.533450 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f1b41f7-0eab-4f57-902e-eff27bdde663" containerName="init" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.533459 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f1b41f7-0eab-4f57-902e-eff27bdde663" containerName="init" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.533650 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f1b41f7-0eab-4f57-902e-eff27bdde663" containerName="dnsmasq-dns" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.534322 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qgx9f" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.586187 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qgx9f"] Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.655339 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-758f-account-create-update-v6zkq"] Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.656285 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-758f-account-create-update-v6zkq" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.659767 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.668237 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-758f-account-create-update-v6zkq"] Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.692995 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35510066-9f5c-49f6-b361-6ddd69e8b62a-operator-scripts\") pod \"glance-db-create-qgx9f\" (UID: \"35510066-9f5c-49f6-b361-6ddd69e8b62a\") " pod="openstack/glance-db-create-qgx9f" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.693230 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtshv\" (UniqueName: \"kubernetes.io/projected/35510066-9f5c-49f6-b361-6ddd69e8b62a-kube-api-access-qtshv\") pod \"glance-db-create-qgx9f\" (UID: \"35510066-9f5c-49f6-b361-6ddd69e8b62a\") " pod="openstack/glance-db-create-qgx9f" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.794943 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b91f05-3f40-4031-86ac-fb99aa3cc841-operator-scripts\") pod \"glance-758f-account-create-update-v6zkq\" (UID: \"87b91f05-3f40-4031-86ac-fb99aa3cc841\") " pod="openstack/glance-758f-account-create-update-v6zkq" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.795013 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtshv\" (UniqueName: \"kubernetes.io/projected/35510066-9f5c-49f6-b361-6ddd69e8b62a-kube-api-access-qtshv\") pod \"glance-db-create-qgx9f\" (UID: 
\"35510066-9f5c-49f6-b361-6ddd69e8b62a\") " pod="openstack/glance-db-create-qgx9f" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.795061 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35510066-9f5c-49f6-b361-6ddd69e8b62a-operator-scripts\") pod \"glance-db-create-qgx9f\" (UID: \"35510066-9f5c-49f6-b361-6ddd69e8b62a\") " pod="openstack/glance-db-create-qgx9f" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.795176 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfzsd\" (UniqueName: \"kubernetes.io/projected/87b91f05-3f40-4031-86ac-fb99aa3cc841-kube-api-access-rfzsd\") pod \"glance-758f-account-create-update-v6zkq\" (UID: \"87b91f05-3f40-4031-86ac-fb99aa3cc841\") " pod="openstack/glance-758f-account-create-update-v6zkq" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.795897 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35510066-9f5c-49f6-b361-6ddd69e8b62a-operator-scripts\") pod \"glance-db-create-qgx9f\" (UID: \"35510066-9f5c-49f6-b361-6ddd69e8b62a\") " pod="openstack/glance-db-create-qgx9f" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.815928 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtshv\" (UniqueName: \"kubernetes.io/projected/35510066-9f5c-49f6-b361-6ddd69e8b62a-kube-api-access-qtshv\") pod \"glance-db-create-qgx9f\" (UID: \"35510066-9f5c-49f6-b361-6ddd69e8b62a\") " pod="openstack/glance-db-create-qgx9f" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.867230 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-qgx9f" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.896643 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfzsd\" (UniqueName: \"kubernetes.io/projected/87b91f05-3f40-4031-86ac-fb99aa3cc841-kube-api-access-rfzsd\") pod \"glance-758f-account-create-update-v6zkq\" (UID: \"87b91f05-3f40-4031-86ac-fb99aa3cc841\") " pod="openstack/glance-758f-account-create-update-v6zkq" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.896691 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b91f05-3f40-4031-86ac-fb99aa3cc841-operator-scripts\") pod \"glance-758f-account-create-update-v6zkq\" (UID: \"87b91f05-3f40-4031-86ac-fb99aa3cc841\") " pod="openstack/glance-758f-account-create-update-v6zkq" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.897585 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b91f05-3f40-4031-86ac-fb99aa3cc841-operator-scripts\") pod \"glance-758f-account-create-update-v6zkq\" (UID: \"87b91f05-3f40-4031-86ac-fb99aa3cc841\") " pod="openstack/glance-758f-account-create-update-v6zkq" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.915596 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfzsd\" (UniqueName: \"kubernetes.io/projected/87b91f05-3f40-4031-86ac-fb99aa3cc841-kube-api-access-rfzsd\") pod \"glance-758f-account-create-update-v6zkq\" (UID: \"87b91f05-3f40-4031-86ac-fb99aa3cc841\") " pod="openstack/glance-758f-account-create-update-v6zkq" Feb 19 19:58:02 crc kubenswrapper[4813]: I0219 19:58:02.970443 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-758f-account-create-update-v6zkq" Feb 19 19:58:03 crc kubenswrapper[4813]: I0219 19:58:03.225319 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-758f-account-create-update-v6zkq"] Feb 19 19:58:03 crc kubenswrapper[4813]: I0219 19:58:03.314779 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qgx9f"] Feb 19 19:58:03 crc kubenswrapper[4813]: W0219 19:58:03.324107 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35510066_9f5c_49f6_b361_6ddd69e8b62a.slice/crio-89cdab6f5012fa39b0ebbcee3f66177341ca9585c92ceda2e45f8b266d2b7a4e WatchSource:0}: Error finding container 89cdab6f5012fa39b0ebbcee3f66177341ca9585c92ceda2e45f8b266d2b7a4e: Status 404 returned error can't find the container with id 89cdab6f5012fa39b0ebbcee3f66177341ca9585c92ceda2e45f8b266d2b7a4e Feb 19 19:58:04 crc kubenswrapper[4813]: I0219 19:58:04.100322 4813 generic.go:334] "Generic (PLEG): container finished" podID="87b91f05-3f40-4031-86ac-fb99aa3cc841" containerID="14df61ba8457621e0e837cde6563626760750e0cf84cb28171113ee5e0c544bb" exitCode=0 Feb 19 19:58:04 crc kubenswrapper[4813]: I0219 19:58:04.100430 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-758f-account-create-update-v6zkq" event={"ID":"87b91f05-3f40-4031-86ac-fb99aa3cc841","Type":"ContainerDied","Data":"14df61ba8457621e0e837cde6563626760750e0cf84cb28171113ee5e0c544bb"} Feb 19 19:58:04 crc kubenswrapper[4813]: I0219 19:58:04.100470 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-758f-account-create-update-v6zkq" event={"ID":"87b91f05-3f40-4031-86ac-fb99aa3cc841","Type":"ContainerStarted","Data":"9adf31845126d07777475d5ffd1dc9296856008b555734cc27d167706ca74282"} Feb 19 19:58:04 crc kubenswrapper[4813]: I0219 19:58:04.104276 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="35510066-9f5c-49f6-b361-6ddd69e8b62a" containerID="d8418ddd7540177ceeb51bc8843377bc3bea865a18e0a5de264002e3cde5af24" exitCode=0 Feb 19 19:58:04 crc kubenswrapper[4813]: I0219 19:58:04.104323 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qgx9f" event={"ID":"35510066-9f5c-49f6-b361-6ddd69e8b62a","Type":"ContainerDied","Data":"d8418ddd7540177ceeb51bc8843377bc3bea865a18e0a5de264002e3cde5af24"} Feb 19 19:58:04 crc kubenswrapper[4813]: I0219 19:58:04.104351 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qgx9f" event={"ID":"35510066-9f5c-49f6-b361-6ddd69e8b62a","Type":"ContainerStarted","Data":"89cdab6f5012fa39b0ebbcee3f66177341ca9585c92ceda2e45f8b266d2b7a4e"} Feb 19 19:58:05 crc kubenswrapper[4813]: I0219 19:58:05.453758 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-758f-account-create-update-v6zkq" Feb 19 19:58:05 crc kubenswrapper[4813]: I0219 19:58:05.527805 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-qgx9f" Feb 19 19:58:05 crc kubenswrapper[4813]: I0219 19:58:05.544890 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfzsd\" (UniqueName: \"kubernetes.io/projected/87b91f05-3f40-4031-86ac-fb99aa3cc841-kube-api-access-rfzsd\") pod \"87b91f05-3f40-4031-86ac-fb99aa3cc841\" (UID: \"87b91f05-3f40-4031-86ac-fb99aa3cc841\") " Feb 19 19:58:05 crc kubenswrapper[4813]: I0219 19:58:05.545055 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b91f05-3f40-4031-86ac-fb99aa3cc841-operator-scripts\") pod \"87b91f05-3f40-4031-86ac-fb99aa3cc841\" (UID: \"87b91f05-3f40-4031-86ac-fb99aa3cc841\") " Feb 19 19:58:05 crc kubenswrapper[4813]: I0219 19:58:05.545478 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87b91f05-3f40-4031-86ac-fb99aa3cc841-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87b91f05-3f40-4031-86ac-fb99aa3cc841" (UID: "87b91f05-3f40-4031-86ac-fb99aa3cc841"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:58:05 crc kubenswrapper[4813]: I0219 19:58:05.550774 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b91f05-3f40-4031-86ac-fb99aa3cc841-kube-api-access-rfzsd" (OuterVolumeSpecName: "kube-api-access-rfzsd") pod "87b91f05-3f40-4031-86ac-fb99aa3cc841" (UID: "87b91f05-3f40-4031-86ac-fb99aa3cc841"). InnerVolumeSpecName "kube-api-access-rfzsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:58:05 crc kubenswrapper[4813]: I0219 19:58:05.646242 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35510066-9f5c-49f6-b361-6ddd69e8b62a-operator-scripts\") pod \"35510066-9f5c-49f6-b361-6ddd69e8b62a\" (UID: \"35510066-9f5c-49f6-b361-6ddd69e8b62a\") " Feb 19 19:58:05 crc kubenswrapper[4813]: I0219 19:58:05.646302 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtshv\" (UniqueName: \"kubernetes.io/projected/35510066-9f5c-49f6-b361-6ddd69e8b62a-kube-api-access-qtshv\") pod \"35510066-9f5c-49f6-b361-6ddd69e8b62a\" (UID: \"35510066-9f5c-49f6-b361-6ddd69e8b62a\") " Feb 19 19:58:05 crc kubenswrapper[4813]: I0219 19:58:05.646717 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35510066-9f5c-49f6-b361-6ddd69e8b62a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35510066-9f5c-49f6-b361-6ddd69e8b62a" (UID: "35510066-9f5c-49f6-b361-6ddd69e8b62a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:58:05 crc kubenswrapper[4813]: I0219 19:58:05.646808 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfzsd\" (UniqueName: \"kubernetes.io/projected/87b91f05-3f40-4031-86ac-fb99aa3cc841-kube-api-access-rfzsd\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:05 crc kubenswrapper[4813]: I0219 19:58:05.646825 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b91f05-3f40-4031-86ac-fb99aa3cc841-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:05 crc kubenswrapper[4813]: I0219 19:58:05.646836 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35510066-9f5c-49f6-b361-6ddd69e8b62a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:05 crc kubenswrapper[4813]: I0219 19:58:05.649563 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35510066-9f5c-49f6-b361-6ddd69e8b62a-kube-api-access-qtshv" (OuterVolumeSpecName: "kube-api-access-qtshv") pod "35510066-9f5c-49f6-b361-6ddd69e8b62a" (UID: "35510066-9f5c-49f6-b361-6ddd69e8b62a"). InnerVolumeSpecName "kube-api-access-qtshv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:58:05 crc kubenswrapper[4813]: I0219 19:58:05.748318 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtshv\" (UniqueName: \"kubernetes.io/projected/35510066-9f5c-49f6-b361-6ddd69e8b62a-kube-api-access-qtshv\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:06 crc kubenswrapper[4813]: I0219 19:58:06.120891 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-qgx9f" Feb 19 19:58:06 crc kubenswrapper[4813]: I0219 19:58:06.121051 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qgx9f" event={"ID":"35510066-9f5c-49f6-b361-6ddd69e8b62a","Type":"ContainerDied","Data":"89cdab6f5012fa39b0ebbcee3f66177341ca9585c92ceda2e45f8b266d2b7a4e"} Feb 19 19:58:06 crc kubenswrapper[4813]: I0219 19:58:06.121100 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89cdab6f5012fa39b0ebbcee3f66177341ca9585c92ceda2e45f8b266d2b7a4e" Feb 19 19:58:06 crc kubenswrapper[4813]: I0219 19:58:06.122933 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-758f-account-create-update-v6zkq" event={"ID":"87b91f05-3f40-4031-86ac-fb99aa3cc841","Type":"ContainerDied","Data":"9adf31845126d07777475d5ffd1dc9296856008b555734cc27d167706ca74282"} Feb 19 19:58:06 crc kubenswrapper[4813]: I0219 19:58:06.122984 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9adf31845126d07777475d5ffd1dc9296856008b555734cc27d167706ca74282" Feb 19 19:58:06 crc kubenswrapper[4813]: I0219 19:58:06.123048 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-758f-account-create-update-v6zkq" Feb 19 19:58:07 crc kubenswrapper[4813]: I0219 19:58:07.855341 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mwg5n"] Feb 19 19:58:07 crc kubenswrapper[4813]: E0219 19:58:07.856134 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35510066-9f5c-49f6-b361-6ddd69e8b62a" containerName="mariadb-database-create" Feb 19 19:58:07 crc kubenswrapper[4813]: I0219 19:58:07.856154 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="35510066-9f5c-49f6-b361-6ddd69e8b62a" containerName="mariadb-database-create" Feb 19 19:58:07 crc kubenswrapper[4813]: E0219 19:58:07.856196 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b91f05-3f40-4031-86ac-fb99aa3cc841" containerName="mariadb-account-create-update" Feb 19 19:58:07 crc kubenswrapper[4813]: I0219 19:58:07.856206 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b91f05-3f40-4031-86ac-fb99aa3cc841" containerName="mariadb-account-create-update" Feb 19 19:58:07 crc kubenswrapper[4813]: I0219 19:58:07.856393 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="35510066-9f5c-49f6-b361-6ddd69e8b62a" containerName="mariadb-database-create" Feb 19 19:58:07 crc kubenswrapper[4813]: I0219 19:58:07.856426 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b91f05-3f40-4031-86ac-fb99aa3cc841" containerName="mariadb-account-create-update" Feb 19 19:58:07 crc kubenswrapper[4813]: I0219 19:58:07.857082 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:07 crc kubenswrapper[4813]: I0219 19:58:07.858657 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 19 19:58:07 crc kubenswrapper[4813]: I0219 19:58:07.858748 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tcjbp" Feb 19 19:58:07 crc kubenswrapper[4813]: I0219 19:58:07.876123 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mwg5n"] Feb 19 19:58:07 crc kubenswrapper[4813]: I0219 19:58:07.986024 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-989jt\" (UniqueName: \"kubernetes.io/projected/cf454eb7-811a-46da-b647-e9a292c11f70-kube-api-access-989jt\") pod \"glance-db-sync-mwg5n\" (UID: \"cf454eb7-811a-46da-b647-e9a292c11f70\") " pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:07 crc kubenswrapper[4813]: I0219 19:58:07.986069 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-combined-ca-bundle\") pod \"glance-db-sync-mwg5n\" (UID: \"cf454eb7-811a-46da-b647-e9a292c11f70\") " pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:07 crc kubenswrapper[4813]: I0219 19:58:07.986379 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-config-data\") pod \"glance-db-sync-mwg5n\" (UID: \"cf454eb7-811a-46da-b647-e9a292c11f70\") " pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:07 crc kubenswrapper[4813]: I0219 19:58:07.986501 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-db-sync-config-data\") pod \"glance-db-sync-mwg5n\" (UID: \"cf454eb7-811a-46da-b647-e9a292c11f70\") " pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:08 crc kubenswrapper[4813]: I0219 19:58:08.087793 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-989jt\" (UniqueName: \"kubernetes.io/projected/cf454eb7-811a-46da-b647-e9a292c11f70-kube-api-access-989jt\") pod \"glance-db-sync-mwg5n\" (UID: \"cf454eb7-811a-46da-b647-e9a292c11f70\") " pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:08 crc kubenswrapper[4813]: I0219 19:58:08.088109 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-combined-ca-bundle\") pod \"glance-db-sync-mwg5n\" (UID: \"cf454eb7-811a-46da-b647-e9a292c11f70\") " pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:08 crc kubenswrapper[4813]: I0219 19:58:08.088256 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-config-data\") pod \"glance-db-sync-mwg5n\" (UID: \"cf454eb7-811a-46da-b647-e9a292c11f70\") " pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:08 crc kubenswrapper[4813]: I0219 19:58:08.088354 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-db-sync-config-data\") pod \"glance-db-sync-mwg5n\" (UID: \"cf454eb7-811a-46da-b647-e9a292c11f70\") " pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:08 crc kubenswrapper[4813]: I0219 19:58:08.093595 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-combined-ca-bundle\") pod \"glance-db-sync-mwg5n\" (UID: 
\"cf454eb7-811a-46da-b647-e9a292c11f70\") " pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:08 crc kubenswrapper[4813]: I0219 19:58:08.093639 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-config-data\") pod \"glance-db-sync-mwg5n\" (UID: \"cf454eb7-811a-46da-b647-e9a292c11f70\") " pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:08 crc kubenswrapper[4813]: I0219 19:58:08.104479 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-db-sync-config-data\") pod \"glance-db-sync-mwg5n\" (UID: \"cf454eb7-811a-46da-b647-e9a292c11f70\") " pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:08 crc kubenswrapper[4813]: I0219 19:58:08.115456 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-989jt\" (UniqueName: \"kubernetes.io/projected/cf454eb7-811a-46da-b647-e9a292c11f70-kube-api-access-989jt\") pod \"glance-db-sync-mwg5n\" (UID: \"cf454eb7-811a-46da-b647-e9a292c11f70\") " pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:08 crc kubenswrapper[4813]: I0219 19:58:08.186549 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:08 crc kubenswrapper[4813]: I0219 19:58:08.735207 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mwg5n"] Feb 19 19:58:09 crc kubenswrapper[4813]: I0219 19:58:09.158718 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mwg5n" event={"ID":"cf454eb7-811a-46da-b647-e9a292c11f70","Type":"ContainerStarted","Data":"c601ef718fef78652afe385d5b9c0a044a399ce71c909c80ca301ac34d91b3bc"} Feb 19 19:58:10 crc kubenswrapper[4813]: I0219 19:58:10.170709 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mwg5n" event={"ID":"cf454eb7-811a-46da-b647-e9a292c11f70","Type":"ContainerStarted","Data":"655c9a79eb09c901e13e250a0e1070d2a8ec06741ed6260ff23f0159ac3f44d5"} Feb 19 19:58:10 crc kubenswrapper[4813]: I0219 19:58:10.198581 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mwg5n" podStartSLOduration=3.198557789 podStartE2EDuration="3.198557789s" podCreationTimestamp="2026-02-19 19:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:58:10.193027158 +0000 UTC m=+5309.418467749" watchObservedRunningTime="2026-02-19 19:58:10.198557789 +0000 UTC m=+5309.423998340" Feb 19 19:58:13 crc kubenswrapper[4813]: I0219 19:58:13.201566 4813 generic.go:334] "Generic (PLEG): container finished" podID="cf454eb7-811a-46da-b647-e9a292c11f70" containerID="655c9a79eb09c901e13e250a0e1070d2a8ec06741ed6260ff23f0159ac3f44d5" exitCode=0 Feb 19 19:58:13 crc kubenswrapper[4813]: I0219 19:58:13.201654 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mwg5n" event={"ID":"cf454eb7-811a-46da-b647-e9a292c11f70","Type":"ContainerDied","Data":"655c9a79eb09c901e13e250a0e1070d2a8ec06741ed6260ff23f0159ac3f44d5"} Feb 19 19:58:14 crc kubenswrapper[4813]: 
I0219 19:58:14.571798 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:14 crc kubenswrapper[4813]: I0219 19:58:14.704162 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-db-sync-config-data\") pod \"cf454eb7-811a-46da-b647-e9a292c11f70\" (UID: \"cf454eb7-811a-46da-b647-e9a292c11f70\") " Feb 19 19:58:14 crc kubenswrapper[4813]: I0219 19:58:14.704357 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-combined-ca-bundle\") pod \"cf454eb7-811a-46da-b647-e9a292c11f70\" (UID: \"cf454eb7-811a-46da-b647-e9a292c11f70\") " Feb 19 19:58:14 crc kubenswrapper[4813]: I0219 19:58:14.704563 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-989jt\" (UniqueName: \"kubernetes.io/projected/cf454eb7-811a-46da-b647-e9a292c11f70-kube-api-access-989jt\") pod \"cf454eb7-811a-46da-b647-e9a292c11f70\" (UID: \"cf454eb7-811a-46da-b647-e9a292c11f70\") " Feb 19 19:58:14 crc kubenswrapper[4813]: I0219 19:58:14.704661 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-config-data\") pod \"cf454eb7-811a-46da-b647-e9a292c11f70\" (UID: \"cf454eb7-811a-46da-b647-e9a292c11f70\") " Feb 19 19:58:14 crc kubenswrapper[4813]: I0219 19:58:14.710474 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cf454eb7-811a-46da-b647-e9a292c11f70" (UID: "cf454eb7-811a-46da-b647-e9a292c11f70"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:14 crc kubenswrapper[4813]: I0219 19:58:14.710502 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf454eb7-811a-46da-b647-e9a292c11f70-kube-api-access-989jt" (OuterVolumeSpecName: "kube-api-access-989jt") pod "cf454eb7-811a-46da-b647-e9a292c11f70" (UID: "cf454eb7-811a-46da-b647-e9a292c11f70"). InnerVolumeSpecName "kube-api-access-989jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:58:14 crc kubenswrapper[4813]: I0219 19:58:14.725666 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf454eb7-811a-46da-b647-e9a292c11f70" (UID: "cf454eb7-811a-46da-b647-e9a292c11f70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:14 crc kubenswrapper[4813]: I0219 19:58:14.748765 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-config-data" (OuterVolumeSpecName: "config-data") pod "cf454eb7-811a-46da-b647-e9a292c11f70" (UID: "cf454eb7-811a-46da-b647-e9a292c11f70"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:14 crc kubenswrapper[4813]: I0219 19:58:14.806398 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:14 crc kubenswrapper[4813]: I0219 19:58:14.806435 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:14 crc kubenswrapper[4813]: I0219 19:58:14.806445 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-989jt\" (UniqueName: \"kubernetes.io/projected/cf454eb7-811a-46da-b647-e9a292c11f70-kube-api-access-989jt\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:14 crc kubenswrapper[4813]: I0219 19:58:14.806454 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf454eb7-811a-46da-b647-e9a292c11f70-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.218145 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mwg5n" event={"ID":"cf454eb7-811a-46da-b647-e9a292c11f70","Type":"ContainerDied","Data":"c601ef718fef78652afe385d5b9c0a044a399ce71c909c80ca301ac34d91b3bc"} Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.218194 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c601ef718fef78652afe385d5b9c0a044a399ce71c909c80ca301ac34d91b3bc" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.218603 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mwg5n" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.473462 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:58:15 crc kubenswrapper[4813]: E0219 19:58:15.473693 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.599217 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c445499f7-5vdmf"] Feb 19 19:58:15 crc kubenswrapper[4813]: E0219 19:58:15.599531 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf454eb7-811a-46da-b647-e9a292c11f70" containerName="glance-db-sync" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.599543 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf454eb7-811a-46da-b647-e9a292c11f70" containerName="glance-db-sync" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.599697 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf454eb7-811a-46da-b647-e9a292c11f70" containerName="glance-db-sync" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.600510 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.618526 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.619880 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.623808 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.625223 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.625463 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tcjbp" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.625659 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.631943 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c445499f7-5vdmf"] Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.650341 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.718999 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.720724 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.720798 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-ovsdbserver-sb\") pod \"dnsmasq-dns-5c445499f7-5vdmf\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.720839 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/216e6931-2f1e-44d4-bece-f5516f986b3a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.720875 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-ovsdbserver-nb\") pod \"dnsmasq-dns-5c445499f7-5vdmf\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.720900 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-config-data\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.720928 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.720990 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/216e6931-2f1e-44d4-bece-f5516f986b3a-ceph\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.721014 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhzmh\" (UniqueName: \"kubernetes.io/projected/216e6931-2f1e-44d4-bece-f5516f986b3a-kube-api-access-jhzmh\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.721040 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-dns-svc\") pod \"dnsmasq-dns-5c445499f7-5vdmf\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.721061 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxmw6\" (UniqueName: \"kubernetes.io/projected/e256e6bb-57e4-4319-988b-80a9df8fe072-kube-api-access-rxmw6\") pod \"dnsmasq-dns-5c445499f7-5vdmf\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.721116 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/216e6931-2f1e-44d4-bece-f5516f986b3a-logs\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.721165 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-scripts\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.721195 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-config\") pod \"dnsmasq-dns-5c445499f7-5vdmf\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.724495 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.727864 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.822786 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-dns-svc\") pod \"dnsmasq-dns-5c445499f7-5vdmf\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.822839 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxmw6\" (UniqueName: \"kubernetes.io/projected/e256e6bb-57e4-4319-988b-80a9df8fe072-kube-api-access-rxmw6\") pod 
\"dnsmasq-dns-5c445499f7-5vdmf\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.822898 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.822937 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216e6931-2f1e-44d4-bece-f5516f986b3a-logs\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.822982 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.823028 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-scripts\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.823057 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-config\") pod \"dnsmasq-dns-5c445499f7-5vdmf\" (UID: 
\"e256e6bb-57e4-4319-988b-80a9df8fe072\") " pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.823087 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pk6c\" (UniqueName: \"kubernetes.io/projected/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-kube-api-access-4pk6c\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.823114 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-ovsdbserver-sb\") pod \"dnsmasq-dns-5c445499f7-5vdmf\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.823136 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/216e6931-2f1e-44d4-bece-f5516f986b3a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.823159 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.823187 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-ovsdbserver-nb\") pod \"dnsmasq-dns-5c445499f7-5vdmf\" (UID: 
\"e256e6bb-57e4-4319-988b-80a9df8fe072\") " pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.823211 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-config-data\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.823241 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.823264 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-logs\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.823300 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.823326 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/216e6931-2f1e-44d4-bece-f5516f986b3a-ceph\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " 
pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.823351 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhzmh\" (UniqueName: \"kubernetes.io/projected/216e6931-2f1e-44d4-bece-f5516f986b3a-kube-api-access-jhzmh\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.823371 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-ceph\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.824225 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-dns-svc\") pod \"dnsmasq-dns-5c445499f7-5vdmf\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.824234 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-ovsdbserver-sb\") pod \"dnsmasq-dns-5c445499f7-5vdmf\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.824575 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/216e6931-2f1e-44d4-bece-f5516f986b3a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc 
kubenswrapper[4813]: I0219 19:58:15.824853 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216e6931-2f1e-44d4-bece-f5516f986b3a-logs\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.825847 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-ovsdbserver-nb\") pod \"dnsmasq-dns-5c445499f7-5vdmf\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.827440 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-config\") pod \"dnsmasq-dns-5c445499f7-5vdmf\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.829659 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-scripts\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.829921 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/216e6931-2f1e-44d4-bece-f5516f986b3a-ceph\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.830540 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-config-data\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.843103 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxmw6\" (UniqueName: \"kubernetes.io/projected/e256e6bb-57e4-4319-988b-80a9df8fe072-kube-api-access-rxmw6\") pod \"dnsmasq-dns-5c445499f7-5vdmf\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.843530 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.845201 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhzmh\" (UniqueName: \"kubernetes.io/projected/216e6931-2f1e-44d4-bece-f5516f986b3a-kube-api-access-jhzmh\") pod \"glance-default-external-api-0\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.923896 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.924435 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pk6c\" (UniqueName: \"kubernetes.io/projected/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-kube-api-access-4pk6c\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.924489 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.924525 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-logs\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.924610 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.924634 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-ceph\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc 
kubenswrapper[4813]: I0219 19:58:15.924701 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.924766 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.925048 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-logs\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.925190 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.928730 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.928809 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.929401 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.929770 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-ceph\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.945361 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pk6c\" (UniqueName: \"kubernetes.io/projected/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-kube-api-access-4pk6c\") pod \"glance-default-internal-api-0\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:15 crc kubenswrapper[4813]: I0219 19:58:15.951402 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:58:16 crc kubenswrapper[4813]: I0219 19:58:16.042505 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:58:16 crc kubenswrapper[4813]: I0219 19:58:16.395048 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c445499f7-5vdmf"] Feb 19 19:58:16 crc kubenswrapper[4813]: I0219 19:58:16.520443 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:58:16 crc kubenswrapper[4813]: I0219 19:58:16.621825 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:58:16 crc kubenswrapper[4813]: W0219 19:58:16.664337 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod216e6931_2f1e_44d4_bece_f5516f986b3a.slice/crio-1ef8b170e5380f430e9f616710fca1bf0b464b135cbd443f2d786c2a4016c36f WatchSource:0}: Error finding container 1ef8b170e5380f430e9f616710fca1bf0b464b135cbd443f2d786c2a4016c36f: Status 404 returned error can't find the container with id 1ef8b170e5380f430e9f616710fca1bf0b464b135cbd443f2d786c2a4016c36f Feb 19 19:58:17 crc kubenswrapper[4813]: I0219 19:58:17.246572 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216e6931-2f1e-44d4-bece-f5516f986b3a","Type":"ContainerStarted","Data":"bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c"} Feb 19 19:58:17 crc kubenswrapper[4813]: I0219 19:58:17.252294 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216e6931-2f1e-44d4-bece-f5516f986b3a","Type":"ContainerStarted","Data":"1ef8b170e5380f430e9f616710fca1bf0b464b135cbd443f2d786c2a4016c36f"} Feb 19 19:58:17 crc kubenswrapper[4813]: I0219 19:58:17.252399 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" 
event={"ID":"e256e6bb-57e4-4319-988b-80a9df8fe072","Type":"ContainerDied","Data":"2a6918899f3dc9959745b442aed8ebc54824c400aec76706d1c5d50c22cffbd1"} Feb 19 19:58:17 crc kubenswrapper[4813]: I0219 19:58:17.249870 4813 generic.go:334] "Generic (PLEG): container finished" podID="e256e6bb-57e4-4319-988b-80a9df8fe072" containerID="2a6918899f3dc9959745b442aed8ebc54824c400aec76706d1c5d50c22cffbd1" exitCode=0 Feb 19 19:58:17 crc kubenswrapper[4813]: I0219 19:58:17.252617 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" event={"ID":"e256e6bb-57e4-4319-988b-80a9df8fe072","Type":"ContainerStarted","Data":"93be82c4dfcd3824fd075a99083d90ce732191193c25c7b6dc1f264990ade579"} Feb 19 19:58:17 crc kubenswrapper[4813]: I0219 19:58:17.267870 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:58:17 crc kubenswrapper[4813]: W0219 19:58:17.287919 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdfe2042_1289_4639_a3ee_3b6abbc5d7ad.slice/crio-be0bf96a091a0150a3c0f37d0a85c4080b1f14d84a7913afc4b72cd78b39678b WatchSource:0}: Error finding container be0bf96a091a0150a3c0f37d0a85c4080b1f14d84a7913afc4b72cd78b39678b: Status 404 returned error can't find the container with id be0bf96a091a0150a3c0f37d0a85c4080b1f14d84a7913afc4b72cd78b39678b Feb 19 19:58:18 crc kubenswrapper[4813]: I0219 19:58:18.151201 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:58:18 crc kubenswrapper[4813]: I0219 19:58:18.263088 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" event={"ID":"e256e6bb-57e4-4319-988b-80a9df8fe072","Type":"ContainerStarted","Data":"c155729b6d39703ccb405abd2303d8295a7818228e90d03688ad2d158e266375"} Feb 19 19:58:18 crc kubenswrapper[4813]: I0219 19:58:18.263243 4813 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:18 crc kubenswrapper[4813]: I0219 19:58:18.264767 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad","Type":"ContainerStarted","Data":"f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557"} Feb 19 19:58:18 crc kubenswrapper[4813]: I0219 19:58:18.264806 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad","Type":"ContainerStarted","Data":"be0bf96a091a0150a3c0f37d0a85c4080b1f14d84a7913afc4b72cd78b39678b"} Feb 19 19:58:18 crc kubenswrapper[4813]: I0219 19:58:18.267643 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216e6931-2f1e-44d4-bece-f5516f986b3a","Type":"ContainerStarted","Data":"0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321"} Feb 19 19:58:18 crc kubenswrapper[4813]: I0219 19:58:18.267768 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="216e6931-2f1e-44d4-bece-f5516f986b3a" containerName="glance-log" containerID="cri-o://bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c" gracePeriod=30 Feb 19 19:58:18 crc kubenswrapper[4813]: I0219 19:58:18.267821 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="216e6931-2f1e-44d4-bece-f5516f986b3a" containerName="glance-httpd" containerID="cri-o://0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321" gracePeriod=30 Feb 19 19:58:18 crc kubenswrapper[4813]: I0219 19:58:18.283648 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" podStartSLOduration=3.283625953 
podStartE2EDuration="3.283625953s" podCreationTimestamp="2026-02-19 19:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:58:18.280122625 +0000 UTC m=+5317.505563166" watchObservedRunningTime="2026-02-19 19:58:18.283625953 +0000 UTC m=+5317.509066494" Feb 19 19:58:18 crc kubenswrapper[4813]: I0219 19:58:18.309772 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.309750162 podStartE2EDuration="3.309750162s" podCreationTimestamp="2026-02-19 19:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:58:18.300739783 +0000 UTC m=+5317.526180324" watchObservedRunningTime="2026-02-19 19:58:18.309750162 +0000 UTC m=+5317.535190703" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.041743 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.195638 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-scripts\") pod \"216e6931-2f1e-44d4-bece-f5516f986b3a\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.196621 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhzmh\" (UniqueName: \"kubernetes.io/projected/216e6931-2f1e-44d4-bece-f5516f986b3a-kube-api-access-jhzmh\") pod \"216e6931-2f1e-44d4-bece-f5516f986b3a\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.196696 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216e6931-2f1e-44d4-bece-f5516f986b3a-logs\") pod \"216e6931-2f1e-44d4-bece-f5516f986b3a\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.196747 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-config-data\") pod \"216e6931-2f1e-44d4-bece-f5516f986b3a\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.197318 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/216e6931-2f1e-44d4-bece-f5516f986b3a-ceph\") pod \"216e6931-2f1e-44d4-bece-f5516f986b3a\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.196919 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/216e6931-2f1e-44d4-bece-f5516f986b3a-logs" 
(OuterVolumeSpecName: "logs") pod "216e6931-2f1e-44d4-bece-f5516f986b3a" (UID: "216e6931-2f1e-44d4-bece-f5516f986b3a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.197413 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/216e6931-2f1e-44d4-bece-f5516f986b3a-httpd-run\") pod \"216e6931-2f1e-44d4-bece-f5516f986b3a\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.197932 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-combined-ca-bundle\") pod \"216e6931-2f1e-44d4-bece-f5516f986b3a\" (UID: \"216e6931-2f1e-44d4-bece-f5516f986b3a\") " Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.198206 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/216e6931-2f1e-44d4-bece-f5516f986b3a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "216e6931-2f1e-44d4-bece-f5516f986b3a" (UID: "216e6931-2f1e-44d4-bece-f5516f986b3a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.198689 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216e6931-2f1e-44d4-bece-f5516f986b3a-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.198713 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/216e6931-2f1e-44d4-bece-f5516f986b3a-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.202224 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-scripts" (OuterVolumeSpecName: "scripts") pod "216e6931-2f1e-44d4-bece-f5516f986b3a" (UID: "216e6931-2f1e-44d4-bece-f5516f986b3a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.202234 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216e6931-2f1e-44d4-bece-f5516f986b3a-kube-api-access-jhzmh" (OuterVolumeSpecName: "kube-api-access-jhzmh") pod "216e6931-2f1e-44d4-bece-f5516f986b3a" (UID: "216e6931-2f1e-44d4-bece-f5516f986b3a"). InnerVolumeSpecName "kube-api-access-jhzmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.202634 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216e6931-2f1e-44d4-bece-f5516f986b3a-ceph" (OuterVolumeSpecName: "ceph") pod "216e6931-2f1e-44d4-bece-f5516f986b3a" (UID: "216e6931-2f1e-44d4-bece-f5516f986b3a"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.222273 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "216e6931-2f1e-44d4-bece-f5516f986b3a" (UID: "216e6931-2f1e-44d4-bece-f5516f986b3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.236747 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-config-data" (OuterVolumeSpecName: "config-data") pod "216e6931-2f1e-44d4-bece-f5516f986b3a" (UID: "216e6931-2f1e-44d4-bece-f5516f986b3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.277718 4813 generic.go:334] "Generic (PLEG): container finished" podID="216e6931-2f1e-44d4-bece-f5516f986b3a" containerID="0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321" exitCode=0 Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.277757 4813 generic.go:334] "Generic (PLEG): container finished" podID="216e6931-2f1e-44d4-bece-f5516f986b3a" containerID="bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c" exitCode=143 Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.277766 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216e6931-2f1e-44d4-bece-f5516f986b3a","Type":"ContainerDied","Data":"0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321"} Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.277810 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.277830 4813 scope.go:117] "RemoveContainer" containerID="0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.277818 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216e6931-2f1e-44d4-bece-f5516f986b3a","Type":"ContainerDied","Data":"bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c"} Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.277947 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"216e6931-2f1e-44d4-bece-f5516f986b3a","Type":"ContainerDied","Data":"1ef8b170e5380f430e9f616710fca1bf0b464b135cbd443f2d786c2a4016c36f"} Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.281088 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad","Type":"ContainerStarted","Data":"073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7"} Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.281203 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" containerName="glance-log" containerID="cri-o://f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557" gracePeriod=30 Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.281235 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" containerName="glance-httpd" containerID="cri-o://073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7" gracePeriod=30 Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.300431 4813 
reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/216e6931-2f1e-44d4-bece-f5516f986b3a-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.300673 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.300683 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.300692 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhzmh\" (UniqueName: \"kubernetes.io/projected/216e6931-2f1e-44d4-bece-f5516f986b3a-kube-api-access-jhzmh\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.300701 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216e6931-2f1e-44d4-bece-f5516f986b3a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.306706 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.306692978 podStartE2EDuration="4.306692978s" podCreationTimestamp="2026-02-19 19:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:58:19.30515963 +0000 UTC m=+5318.530600181" watchObservedRunningTime="2026-02-19 19:58:19.306692978 +0000 UTC m=+5318.532133519" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.318237 4813 scope.go:117] "RemoveContainer" containerID="bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c" Feb 
19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.334141 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.338337 4813 scope.go:117] "RemoveContainer" containerID="0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321" Feb 19 19:58:19 crc kubenswrapper[4813]: E0219 19:58:19.338873 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321\": container with ID starting with 0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321 not found: ID does not exist" containerID="0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.338919 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321"} err="failed to get container status \"0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321\": rpc error: code = NotFound desc = could not find container \"0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321\": container with ID starting with 0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321 not found: ID does not exist" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.338969 4813 scope.go:117] "RemoveContainer" containerID="bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c" Feb 19 19:58:19 crc kubenswrapper[4813]: E0219 19:58:19.339285 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c\": container with ID starting with bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c not found: ID does not exist" 
containerID="bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.339336 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c"} err="failed to get container status \"bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c\": rpc error: code = NotFound desc = could not find container \"bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c\": container with ID starting with bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c not found: ID does not exist" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.339361 4813 scope.go:117] "RemoveContainer" containerID="0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.339611 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321"} err="failed to get container status \"0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321\": rpc error: code = NotFound desc = could not find container \"0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321\": container with ID starting with 0aaf7819f7e9b68fc5f6493216db9f9d7e5922893824c1c0df3c2409c88db321 not found: ID does not exist" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.339641 4813 scope.go:117] "RemoveContainer" containerID="bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.339809 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c"} err="failed to get container status \"bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c\": rpc error: code = NotFound desc = could 
not find container \"bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c\": container with ID starting with bad39ad0f827ca23ba8086b1854a82e0d6d0765fd87b15d8aa331fafd90d696c not found: ID does not exist" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.349311 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.358330 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:58:19 crc kubenswrapper[4813]: E0219 19:58:19.358685 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216e6931-2f1e-44d4-bece-f5516f986b3a" containerName="glance-log" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.358701 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="216e6931-2f1e-44d4-bece-f5516f986b3a" containerName="glance-log" Feb 19 19:58:19 crc kubenswrapper[4813]: E0219 19:58:19.358722 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216e6931-2f1e-44d4-bece-f5516f986b3a" containerName="glance-httpd" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.358728 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="216e6931-2f1e-44d4-bece-f5516f986b3a" containerName="glance-httpd" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.358879 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="216e6931-2f1e-44d4-bece-f5516f986b3a" containerName="glance-httpd" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.358901 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="216e6931-2f1e-44d4-bece-f5516f986b3a" containerName="glance-log" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.361693 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.374644 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.383215 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.484325 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216e6931-2f1e-44d4-bece-f5516f986b3a" path="/var/lib/kubelet/pods/216e6931-2f1e-44d4-bece-f5516f986b3a/volumes" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.505823 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4344547a-ae09-4f86-8196-ac453357cd15-logs\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.506190 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-scripts\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.506316 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx96n\" (UniqueName: \"kubernetes.io/projected/4344547a-ae09-4f86-8196-ac453357cd15-kube-api-access-bx96n\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.506492 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4344547a-ae09-4f86-8196-ac453357cd15-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.506601 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4344547a-ae09-4f86-8196-ac453357cd15-ceph\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.506703 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.506932 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-config-data\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.608906 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-config-data\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.609017 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4344547a-ae09-4f86-8196-ac453357cd15-logs\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.609052 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-scripts\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.609082 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx96n\" (UniqueName: \"kubernetes.io/projected/4344547a-ae09-4f86-8196-ac453357cd15-kube-api-access-bx96n\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.609148 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4344547a-ae09-4f86-8196-ac453357cd15-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.609214 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4344547a-ae09-4f86-8196-ac453357cd15-ceph\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.609271 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.609853 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4344547a-ae09-4f86-8196-ac453357cd15-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.609932 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4344547a-ae09-4f86-8196-ac453357cd15-logs\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.613589 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4344547a-ae09-4f86-8196-ac453357cd15-ceph\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.613610 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.615197 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.615545 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-config-data\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.626073 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx96n\" (UniqueName: \"kubernetes.io/projected/4344547a-ae09-4f86-8196-ac453357cd15-kube-api-access-bx96n\") pod \"glance-default-external-api-0\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") " pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.735640 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.780407 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.914511 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-logs\") pod \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.914857 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-scripts\") pod \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.914897 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-config-data\") pod \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.914929 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-httpd-run\") pod \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.914981 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pk6c\" (UniqueName: \"kubernetes.io/projected/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-kube-api-access-4pk6c\") pod \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.915126 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-ceph\") pod \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.915152 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-combined-ca-bundle\") pod \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\" (UID: \"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad\") " Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.915617 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-logs" (OuterVolumeSpecName: "logs") pod "cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" (UID: "cdfe2042-1289-4639-a3ee-3b6abbc5d7ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.916043 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" (UID: "cdfe2042-1289-4639-a3ee-3b6abbc5d7ad"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.920691 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-ceph" (OuterVolumeSpecName: "ceph") pod "cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" (UID: "cdfe2042-1289-4639-a3ee-3b6abbc5d7ad"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.923223 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-kube-api-access-4pk6c" (OuterVolumeSpecName: "kube-api-access-4pk6c") pod "cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" (UID: "cdfe2042-1289-4639-a3ee-3b6abbc5d7ad"). InnerVolumeSpecName "kube-api-access-4pk6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.923969 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-scripts" (OuterVolumeSpecName: "scripts") pod "cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" (UID: "cdfe2042-1289-4639-a3ee-3b6abbc5d7ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.954255 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" (UID: "cdfe2042-1289-4639-a3ee-3b6abbc5d7ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:19 crc kubenswrapper[4813]: I0219 19:58:19.969879 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-config-data" (OuterVolumeSpecName: "config-data") pod "cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" (UID: "cdfe2042-1289-4639-a3ee-3b6abbc5d7ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.019269 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.019306 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.019321 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.019332 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.019345 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pk6c\" (UniqueName: \"kubernetes.io/projected/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-kube-api-access-4pk6c\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.019356 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.019366 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.284753 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Feb 19 19:58:20 crc kubenswrapper[4813]: W0219 19:58:20.286185 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4344547a_ae09_4f86_8196_ac453357cd15.slice/crio-9d987d4bacda3f0dd79a752f021f71a883b75e5b61dde5758be67b9323215f19 WatchSource:0}: Error finding container 9d987d4bacda3f0dd79a752f021f71a883b75e5b61dde5758be67b9323215f19: Status 404 returned error can't find the container with id 9d987d4bacda3f0dd79a752f021f71a883b75e5b61dde5758be67b9323215f19 Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.292775 4813 generic.go:334] "Generic (PLEG): container finished" podID="cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" containerID="073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7" exitCode=0 Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.292803 4813 generic.go:334] "Generic (PLEG): container finished" podID="cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" containerID="f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557" exitCode=143 Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.292846 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad","Type":"ContainerDied","Data":"073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7"} Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.292872 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad","Type":"ContainerDied","Data":"f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557"} Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.292881 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"cdfe2042-1289-4639-a3ee-3b6abbc5d7ad","Type":"ContainerDied","Data":"be0bf96a091a0150a3c0f37d0a85c4080b1f14d84a7913afc4b72cd78b39678b"} Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.292897 4813 scope.go:117] "RemoveContainer" containerID="073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.293029 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.380897 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.387239 4813 scope.go:117] "RemoveContainer" containerID="f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.401178 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.418491 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:58:20 crc kubenswrapper[4813]: E0219 19:58:20.418914 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" containerName="glance-httpd" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.418932 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" containerName="glance-httpd" Feb 19 19:58:20 crc kubenswrapper[4813]: E0219 19:58:20.418944 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" containerName="glance-log" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.419029 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" containerName="glance-log" Feb 19 19:58:20 crc 
kubenswrapper[4813]: I0219 19:58:20.419178 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" containerName="glance-httpd" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.419198 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" containerName="glance-log" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.424093 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.428571 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.434202 4813 scope.go:117] "RemoveContainer" containerID="073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.434343 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:58:20 crc kubenswrapper[4813]: E0219 19:58:20.436348 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7\": container with ID starting with 073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7 not found: ID does not exist" containerID="073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.436406 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7"} err="failed to get container status \"073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7\": rpc error: code = NotFound desc = could not find container 
\"073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7\": container with ID starting with 073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7 not found: ID does not exist" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.436438 4813 scope.go:117] "RemoveContainer" containerID="f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557" Feb 19 19:58:20 crc kubenswrapper[4813]: E0219 19:58:20.436971 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557\": container with ID starting with f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557 not found: ID does not exist" containerID="f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.436994 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557"} err="failed to get container status \"f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557\": rpc error: code = NotFound desc = could not find container \"f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557\": container with ID starting with f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557 not found: ID does not exist" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.437011 4813 scope.go:117] "RemoveContainer" containerID="073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.437657 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7"} err="failed to get container status \"073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7\": rpc error: code = NotFound desc = could not find 
container \"073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7\": container with ID starting with 073e3c1bbeb50683381fd7c20cc3f16ebfc08a84e21a4a1926f3a200376e5bc7 not found: ID does not exist" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.437676 4813 scope.go:117] "RemoveContainer" containerID="f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.439609 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557"} err="failed to get container status \"f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557\": rpc error: code = NotFound desc = could not find container \"f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557\": container with ID starting with f8c5e4feb0008e3fa5555b37eb06bd42aa49d12dad4ba03944f3777a79a8c557 not found: ID does not exist" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.527316 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.527669 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpjg6\" (UniqueName: \"kubernetes.io/projected/0dd6c088-3815-44be-863c-d72d71a2cfa5-kube-api-access-jpjg6\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.527696 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/0dd6c088-3815-44be-863c-d72d71a2cfa5-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.527739 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd6c088-3815-44be-863c-d72d71a2cfa5-logs\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.527758 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.527889 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.528075 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dd6c088-3815-44be-863c-d72d71a2cfa5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.630206 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpjg6\" (UniqueName: 
\"kubernetes.io/projected/0dd6c088-3815-44be-863c-d72d71a2cfa5-kube-api-access-jpjg6\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.630271 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0dd6c088-3815-44be-863c-d72d71a2cfa5-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.630305 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd6c088-3815-44be-863c-d72d71a2cfa5-logs\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.630338 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.630396 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.630502 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dd6c088-3815-44be-863c-d72d71a2cfa5-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.630588 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.631529 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dd6c088-3815-44be-863c-d72d71a2cfa5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.633404 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd6c088-3815-44be-863c-d72d71a2cfa5-logs\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.636911 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0dd6c088-3815-44be-863c-d72d71a2cfa5-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.637824 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 
19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.637850 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.647357 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.647738 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpjg6\" (UniqueName: \"kubernetes.io/projected/0dd6c088-3815-44be-863c-d72d71a2cfa5-kube-api-access-jpjg6\") pod \"glance-default-internal-api-0\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:58:20 crc kubenswrapper[4813]: I0219 19:58:20.745937 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:58:21 crc kubenswrapper[4813]: I0219 19:58:21.285681 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:58:21 crc kubenswrapper[4813]: I0219 19:58:21.310636 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dd6c088-3815-44be-863c-d72d71a2cfa5","Type":"ContainerStarted","Data":"6e5c5cfb95b3d8b3ad65fdc221a462c3f7b3c5f700087a02075616c71e10809f"} Feb 19 19:58:21 crc kubenswrapper[4813]: I0219 19:58:21.314410 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4344547a-ae09-4f86-8196-ac453357cd15","Type":"ContainerStarted","Data":"3e5ffc7cde1221dbcd57349e0adeaa0ed53928dddd6b35b1cdc6df2c70188ca1"} Feb 19 19:58:21 crc kubenswrapper[4813]: I0219 19:58:21.314438 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4344547a-ae09-4f86-8196-ac453357cd15","Type":"ContainerStarted","Data":"df33cc9c2df9419e6f2e7cb6f3232c991442063e273a982bf39590502378c5ed"} Feb 19 19:58:21 crc kubenswrapper[4813]: I0219 19:58:21.314448 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4344547a-ae09-4f86-8196-ac453357cd15","Type":"ContainerStarted","Data":"9d987d4bacda3f0dd79a752f021f71a883b75e5b61dde5758be67b9323215f19"} Feb 19 19:58:21 crc kubenswrapper[4813]: I0219 19:58:21.333644 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.333619491 podStartE2EDuration="2.333619491s" podCreationTimestamp="2026-02-19 19:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:58:21.331388231 +0000 UTC m=+5320.556828772" 
watchObservedRunningTime="2026-02-19 19:58:21.333619491 +0000 UTC m=+5320.559060042" Feb 19 19:58:21 crc kubenswrapper[4813]: I0219 19:58:21.510861 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdfe2042-1289-4639-a3ee-3b6abbc5d7ad" path="/var/lib/kubelet/pods/cdfe2042-1289-4639-a3ee-3b6abbc5d7ad/volumes" Feb 19 19:58:22 crc kubenswrapper[4813]: I0219 19:58:22.324199 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dd6c088-3815-44be-863c-d72d71a2cfa5","Type":"ContainerStarted","Data":"c3531ceb1e4ac87632a3dcd2f54be716aa24b8e2a285d6f679b54e3f2b99bdfe"} Feb 19 19:58:22 crc kubenswrapper[4813]: I0219 19:58:22.325215 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dd6c088-3815-44be-863c-d72d71a2cfa5","Type":"ContainerStarted","Data":"ef77f4d11fa940a3768d863b7f292ecade1db28e65768f0b337515fc1810981c"} Feb 19 19:58:22 crc kubenswrapper[4813]: I0219 19:58:22.345641 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.345621432 podStartE2EDuration="2.345621432s" podCreationTimestamp="2026-02-19 19:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:58:22.343458956 +0000 UTC m=+5321.568899497" watchObservedRunningTime="2026-02-19 19:58:22.345621432 +0000 UTC m=+5321.571061973" Feb 19 19:58:25 crc kubenswrapper[4813]: I0219 19:58:25.925219 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:25 crc kubenswrapper[4813]: I0219 19:58:25.993890 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7854cb56d9-jcfdt"] Feb 19 19:58:25 crc kubenswrapper[4813]: I0219 19:58:25.996407 4813 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" podUID="1f1a98f7-4c83-4106-a6c8-09a2a4057544" containerName="dnsmasq-dns" containerID="cri-o://fc43fbcce36e5d9f8aba6cd85d39d2a3ce0c090a3c19280d6a7b64db0ac351ed" gracePeriod=10 Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.359430 4813 generic.go:334] "Generic (PLEG): container finished" podID="1f1a98f7-4c83-4106-a6c8-09a2a4057544" containerID="fc43fbcce36e5d9f8aba6cd85d39d2a3ce0c090a3c19280d6a7b64db0ac351ed" exitCode=0 Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.359709 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" event={"ID":"1f1a98f7-4c83-4106-a6c8-09a2a4057544","Type":"ContainerDied","Data":"fc43fbcce36e5d9f8aba6cd85d39d2a3ce0c090a3c19280d6a7b64db0ac351ed"} Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.451305 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.541174 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-dns-svc\") pod \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.541269 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-ovsdbserver-sb\") pod \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.541328 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-config\") pod \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\" (UID: 
\"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.541431 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqcqj\" (UniqueName: \"kubernetes.io/projected/1f1a98f7-4c83-4106-a6c8-09a2a4057544-kube-api-access-hqcqj\") pod \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.541457 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-ovsdbserver-nb\") pod \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\" (UID: \"1f1a98f7-4c83-4106-a6c8-09a2a4057544\") " Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.546620 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f1a98f7-4c83-4106-a6c8-09a2a4057544-kube-api-access-hqcqj" (OuterVolumeSpecName: "kube-api-access-hqcqj") pod "1f1a98f7-4c83-4106-a6c8-09a2a4057544" (UID: "1f1a98f7-4c83-4106-a6c8-09a2a4057544"). InnerVolumeSpecName "kube-api-access-hqcqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.586313 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1f1a98f7-4c83-4106-a6c8-09a2a4057544" (UID: "1f1a98f7-4c83-4106-a6c8-09a2a4057544"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.586481 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1f1a98f7-4c83-4106-a6c8-09a2a4057544" (UID: "1f1a98f7-4c83-4106-a6c8-09a2a4057544"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.587681 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1f1a98f7-4c83-4106-a6c8-09a2a4057544" (UID: "1f1a98f7-4c83-4106-a6c8-09a2a4057544"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.593829 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-config" (OuterVolumeSpecName: "config") pod "1f1a98f7-4c83-4106-a6c8-09a2a4057544" (UID: "1f1a98f7-4c83-4106-a6c8-09a2a4057544"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.643919 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.643990 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqcqj\" (UniqueName: \"kubernetes.io/projected/1f1a98f7-4c83-4106-a6c8-09a2a4057544-kube-api-access-hqcqj\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.644009 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.644022 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:26 crc kubenswrapper[4813]: I0219 19:58:26.644035 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f1a98f7-4c83-4106-a6c8-09a2a4057544-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:27 crc kubenswrapper[4813]: I0219 19:58:27.374457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" event={"ID":"1f1a98f7-4c83-4106-a6c8-09a2a4057544","Type":"ContainerDied","Data":"17d19983310728df727f965f23fd6c9086805d365a91f4245e8ddde0d01e6ca2"} Feb 19 19:58:27 crc kubenswrapper[4813]: I0219 19:58:27.374533 4813 scope.go:117] "RemoveContainer" containerID="fc43fbcce36e5d9f8aba6cd85d39d2a3ce0c090a3c19280d6a7b64db0ac351ed" Feb 19 19:58:27 crc kubenswrapper[4813]: I0219 19:58:27.374730 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7854cb56d9-jcfdt" Feb 19 19:58:27 crc kubenswrapper[4813]: I0219 19:58:27.399761 4813 scope.go:117] "RemoveContainer" containerID="289416b42dcd2d9197e96d12bc7d6320319df2a0ac7f67c4134190a853cb045f" Feb 19 19:58:27 crc kubenswrapper[4813]: I0219 19:58:27.413189 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7854cb56d9-jcfdt"] Feb 19 19:58:27 crc kubenswrapper[4813]: I0219 19:58:27.421907 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7854cb56d9-jcfdt"] Feb 19 19:58:27 crc kubenswrapper[4813]: I0219 19:58:27.484669 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f1a98f7-4c83-4106-a6c8-09a2a4057544" path="/var/lib/kubelet/pods/1f1a98f7-4c83-4106-a6c8-09a2a4057544/volumes" Feb 19 19:58:29 crc kubenswrapper[4813]: I0219 19:58:29.781013 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 19:58:29 crc kubenswrapper[4813]: I0219 19:58:29.781062 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 19:58:29 crc kubenswrapper[4813]: I0219 19:58:29.813150 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 19:58:29 crc kubenswrapper[4813]: I0219 19:58:29.823665 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 19:58:30 crc kubenswrapper[4813]: I0219 19:58:30.405912 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 19:58:30 crc kubenswrapper[4813]: I0219 19:58:30.406331 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 19:58:30 crc kubenswrapper[4813]: I0219 19:58:30.471137 4813 
scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:58:30 crc kubenswrapper[4813]: E0219 19:58:30.471407 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:58:30 crc kubenswrapper[4813]: I0219 19:58:30.746689 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 19:58:30 crc kubenswrapper[4813]: I0219 19:58:30.746734 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 19:58:30 crc kubenswrapper[4813]: I0219 19:58:30.775164 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 19:58:30 crc kubenswrapper[4813]: I0219 19:58:30.790201 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 19:58:31 crc kubenswrapper[4813]: I0219 19:58:31.414437 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 19:58:31 crc kubenswrapper[4813]: I0219 19:58:31.414474 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 19:58:32 crc kubenswrapper[4813]: I0219 19:58:32.424315 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 19:58:32 crc kubenswrapper[4813]: I0219 19:58:32.424770 4813 prober_manager.go:312] "Failed to trigger a manual 
run" probe="Readiness" Feb 19 19:58:32 crc kubenswrapper[4813]: I0219 19:58:32.727461 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 19:58:33 crc kubenswrapper[4813]: I0219 19:58:33.358273 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 19:58:33 crc kubenswrapper[4813]: I0219 19:58:33.362079 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.831356 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-qv4mk"] Feb 19 19:58:40 crc kubenswrapper[4813]: E0219 19:58:40.832302 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1a98f7-4c83-4106-a6c8-09a2a4057544" containerName="init" Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.832319 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1a98f7-4c83-4106-a6c8-09a2a4057544" containerName="init" Feb 19 19:58:40 crc kubenswrapper[4813]: E0219 19:58:40.832336 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1a98f7-4c83-4106-a6c8-09a2a4057544" containerName="dnsmasq-dns" Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.832342 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1a98f7-4c83-4106-a6c8-09a2a4057544" containerName="dnsmasq-dns" Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.832505 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f1a98f7-4c83-4106-a6c8-09a2a4057544" containerName="dnsmasq-dns" Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.833103 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-qv4mk" Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.840650 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1468-account-create-update-6jnz2"] Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.841663 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1468-account-create-update-6jnz2" Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.848754 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qv4mk"] Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.852577 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.857488 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1468-account-create-update-6jnz2"] Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.897446 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7rcc\" (UniqueName: \"kubernetes.io/projected/1d06c681-ae92-4c23-a05f-f08441382c05-kube-api-access-k7rcc\") pod \"placement-1468-account-create-update-6jnz2\" (UID: \"1d06c681-ae92-4c23-a05f-f08441382c05\") " pod="openstack/placement-1468-account-create-update-6jnz2" Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.897763 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d06c681-ae92-4c23-a05f-f08441382c05-operator-scripts\") pod \"placement-1468-account-create-update-6jnz2\" (UID: \"1d06c681-ae92-4c23-a05f-f08441382c05\") " pod="openstack/placement-1468-account-create-update-6jnz2" Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.897811 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hd2x4\" (UniqueName: \"kubernetes.io/projected/530cfe42-e9cd-45ad-9c70-eee31f9522c7-kube-api-access-hd2x4\") pod \"placement-db-create-qv4mk\" (UID: \"530cfe42-e9cd-45ad-9c70-eee31f9522c7\") " pod="openstack/placement-db-create-qv4mk" Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.897834 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/530cfe42-e9cd-45ad-9c70-eee31f9522c7-operator-scripts\") pod \"placement-db-create-qv4mk\" (UID: \"530cfe42-e9cd-45ad-9c70-eee31f9522c7\") " pod="openstack/placement-db-create-qv4mk" Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.999780 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7rcc\" (UniqueName: \"kubernetes.io/projected/1d06c681-ae92-4c23-a05f-f08441382c05-kube-api-access-k7rcc\") pod \"placement-1468-account-create-update-6jnz2\" (UID: \"1d06c681-ae92-4c23-a05f-f08441382c05\") " pod="openstack/placement-1468-account-create-update-6jnz2" Feb 19 19:58:40 crc kubenswrapper[4813]: I0219 19:58:40.999853 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d06c681-ae92-4c23-a05f-f08441382c05-operator-scripts\") pod \"placement-1468-account-create-update-6jnz2\" (UID: \"1d06c681-ae92-4c23-a05f-f08441382c05\") " pod="openstack/placement-1468-account-create-update-6jnz2" Feb 19 19:58:41 crc kubenswrapper[4813]: I0219 19:58:40.999887 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd2x4\" (UniqueName: \"kubernetes.io/projected/530cfe42-e9cd-45ad-9c70-eee31f9522c7-kube-api-access-hd2x4\") pod \"placement-db-create-qv4mk\" (UID: \"530cfe42-e9cd-45ad-9c70-eee31f9522c7\") " pod="openstack/placement-db-create-qv4mk" Feb 19 19:58:41 crc kubenswrapper[4813]: I0219 19:58:40.999912 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/530cfe42-e9cd-45ad-9c70-eee31f9522c7-operator-scripts\") pod \"placement-db-create-qv4mk\" (UID: \"530cfe42-e9cd-45ad-9c70-eee31f9522c7\") " pod="openstack/placement-db-create-qv4mk" Feb 19 19:58:41 crc kubenswrapper[4813]: I0219 19:58:41.000826 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/530cfe42-e9cd-45ad-9c70-eee31f9522c7-operator-scripts\") pod \"placement-db-create-qv4mk\" (UID: \"530cfe42-e9cd-45ad-9c70-eee31f9522c7\") " pod="openstack/placement-db-create-qv4mk" Feb 19 19:58:41 crc kubenswrapper[4813]: I0219 19:58:41.000854 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d06c681-ae92-4c23-a05f-f08441382c05-operator-scripts\") pod \"placement-1468-account-create-update-6jnz2\" (UID: \"1d06c681-ae92-4c23-a05f-f08441382c05\") " pod="openstack/placement-1468-account-create-update-6jnz2" Feb 19 19:58:41 crc kubenswrapper[4813]: I0219 19:58:41.019133 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7rcc\" (UniqueName: \"kubernetes.io/projected/1d06c681-ae92-4c23-a05f-f08441382c05-kube-api-access-k7rcc\") pod \"placement-1468-account-create-update-6jnz2\" (UID: \"1d06c681-ae92-4c23-a05f-f08441382c05\") " pod="openstack/placement-1468-account-create-update-6jnz2" Feb 19 19:58:41 crc kubenswrapper[4813]: I0219 19:58:41.021793 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd2x4\" (UniqueName: \"kubernetes.io/projected/530cfe42-e9cd-45ad-9c70-eee31f9522c7-kube-api-access-hd2x4\") pod \"placement-db-create-qv4mk\" (UID: \"530cfe42-e9cd-45ad-9c70-eee31f9522c7\") " pod="openstack/placement-db-create-qv4mk" Feb 19 19:58:41 crc kubenswrapper[4813]: I0219 19:58:41.164261 4813 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qv4mk" Feb 19 19:58:41 crc kubenswrapper[4813]: I0219 19:58:41.172912 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1468-account-create-update-6jnz2" Feb 19 19:58:41 crc kubenswrapper[4813]: W0219 19:58:41.629212 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod530cfe42_e9cd_45ad_9c70_eee31f9522c7.slice/crio-a7b80ad19c334a884923b7c8ff04b9b6b03ae5140d640330054aed797cb07a9b WatchSource:0}: Error finding container a7b80ad19c334a884923b7c8ff04b9b6b03ae5140d640330054aed797cb07a9b: Status 404 returned error can't find the container with id a7b80ad19c334a884923b7c8ff04b9b6b03ae5140d640330054aed797cb07a9b Feb 19 19:58:41 crc kubenswrapper[4813]: I0219 19:58:41.629552 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qv4mk"] Feb 19 19:58:41 crc kubenswrapper[4813]: I0219 19:58:41.673005 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1468-account-create-update-6jnz2"] Feb 19 19:58:42 crc kubenswrapper[4813]: I0219 19:58:42.541776 4813 generic.go:334] "Generic (PLEG): container finished" podID="530cfe42-e9cd-45ad-9c70-eee31f9522c7" containerID="80d2716957bd3a3e5266f935a2e9bdb5c6d31d6a011d772f8cb5a34601edfa31" exitCode=0 Feb 19 19:58:42 crc kubenswrapper[4813]: I0219 19:58:42.541834 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qv4mk" event={"ID":"530cfe42-e9cd-45ad-9c70-eee31f9522c7","Type":"ContainerDied","Data":"80d2716957bd3a3e5266f935a2e9bdb5c6d31d6a011d772f8cb5a34601edfa31"} Feb 19 19:58:42 crc kubenswrapper[4813]: I0219 19:58:42.542095 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qv4mk" 
event={"ID":"530cfe42-e9cd-45ad-9c70-eee31f9522c7","Type":"ContainerStarted","Data":"a7b80ad19c334a884923b7c8ff04b9b6b03ae5140d640330054aed797cb07a9b"} Feb 19 19:58:42 crc kubenswrapper[4813]: I0219 19:58:42.544918 4813 generic.go:334] "Generic (PLEG): container finished" podID="1d06c681-ae92-4c23-a05f-f08441382c05" containerID="238915b1b1ddd39617184ea0c77d41fe50c46d78a64dadb1ec22d93b7c373657" exitCode=0 Feb 19 19:58:42 crc kubenswrapper[4813]: I0219 19:58:42.544970 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1468-account-create-update-6jnz2" event={"ID":"1d06c681-ae92-4c23-a05f-f08441382c05","Type":"ContainerDied","Data":"238915b1b1ddd39617184ea0c77d41fe50c46d78a64dadb1ec22d93b7c373657"} Feb 19 19:58:42 crc kubenswrapper[4813]: I0219 19:58:42.544993 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1468-account-create-update-6jnz2" event={"ID":"1d06c681-ae92-4c23-a05f-f08441382c05","Type":"ContainerStarted","Data":"e188ceb43008f4b8ffd4275f417e97da5c26a70d73c026ff8fd6103a0f4615d0"} Feb 19 19:58:43 crc kubenswrapper[4813]: I0219 19:58:43.472032 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:58:43 crc kubenswrapper[4813]: E0219 19:58:43.472740 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:58:43 crc kubenswrapper[4813]: I0219 19:58:43.992863 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1468-account-create-update-6jnz2" Feb 19 19:58:43 crc kubenswrapper[4813]: I0219 19:58:43.998767 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qv4mk" Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.076689 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd2x4\" (UniqueName: \"kubernetes.io/projected/530cfe42-e9cd-45ad-9c70-eee31f9522c7-kube-api-access-hd2x4\") pod \"530cfe42-e9cd-45ad-9c70-eee31f9522c7\" (UID: \"530cfe42-e9cd-45ad-9c70-eee31f9522c7\") " Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.077410 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7rcc\" (UniqueName: \"kubernetes.io/projected/1d06c681-ae92-4c23-a05f-f08441382c05-kube-api-access-k7rcc\") pod \"1d06c681-ae92-4c23-a05f-f08441382c05\" (UID: \"1d06c681-ae92-4c23-a05f-f08441382c05\") " Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.077610 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/530cfe42-e9cd-45ad-9c70-eee31f9522c7-operator-scripts\") pod \"530cfe42-e9cd-45ad-9c70-eee31f9522c7\" (UID: \"530cfe42-e9cd-45ad-9c70-eee31f9522c7\") " Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.077826 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d06c681-ae92-4c23-a05f-f08441382c05-operator-scripts\") pod \"1d06c681-ae92-4c23-a05f-f08441382c05\" (UID: \"1d06c681-ae92-4c23-a05f-f08441382c05\") " Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.078777 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d06c681-ae92-4c23-a05f-f08441382c05-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"1d06c681-ae92-4c23-a05f-f08441382c05" (UID: "1d06c681-ae92-4c23-a05f-f08441382c05"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.078785 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/530cfe42-e9cd-45ad-9c70-eee31f9522c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "530cfe42-e9cd-45ad-9c70-eee31f9522c7" (UID: "530cfe42-e9cd-45ad-9c70-eee31f9522c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.082982 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/530cfe42-e9cd-45ad-9c70-eee31f9522c7-kube-api-access-hd2x4" (OuterVolumeSpecName: "kube-api-access-hd2x4") pod "530cfe42-e9cd-45ad-9c70-eee31f9522c7" (UID: "530cfe42-e9cd-45ad-9c70-eee31f9522c7"). InnerVolumeSpecName "kube-api-access-hd2x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.083038 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d06c681-ae92-4c23-a05f-f08441382c05-kube-api-access-k7rcc" (OuterVolumeSpecName: "kube-api-access-k7rcc") pod "1d06c681-ae92-4c23-a05f-f08441382c05" (UID: "1d06c681-ae92-4c23-a05f-f08441382c05"). InnerVolumeSpecName "kube-api-access-k7rcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.181180 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d06c681-ae92-4c23-a05f-f08441382c05-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.181219 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd2x4\" (UniqueName: \"kubernetes.io/projected/530cfe42-e9cd-45ad-9c70-eee31f9522c7-kube-api-access-hd2x4\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.181231 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7rcc\" (UniqueName: \"kubernetes.io/projected/1d06c681-ae92-4c23-a05f-f08441382c05-kube-api-access-k7rcc\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.181243 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/530cfe42-e9cd-45ad-9c70-eee31f9522c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.564373 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1468-account-create-update-6jnz2" event={"ID":"1d06c681-ae92-4c23-a05f-f08441382c05","Type":"ContainerDied","Data":"e188ceb43008f4b8ffd4275f417e97da5c26a70d73c026ff8fd6103a0f4615d0"} Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.564422 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e188ceb43008f4b8ffd4275f417e97da5c26a70d73c026ff8fd6103a0f4615d0" Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.564424 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1468-account-create-update-6jnz2" Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.566575 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qv4mk" event={"ID":"530cfe42-e9cd-45ad-9c70-eee31f9522c7","Type":"ContainerDied","Data":"a7b80ad19c334a884923b7c8ff04b9b6b03ae5140d640330054aed797cb07a9b"} Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.566602 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7b80ad19c334a884923b7c8ff04b9b6b03ae5140d640330054aed797cb07a9b" Feb 19 19:58:44 crc kubenswrapper[4813]: I0219 19:58:44.566651 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qv4mk" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.057149 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b5bb7-mhv85"] Feb 19 19:58:46 crc kubenswrapper[4813]: E0219 19:58:46.057747 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d06c681-ae92-4c23-a05f-f08441382c05" containerName="mariadb-account-create-update" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.057766 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d06c681-ae92-4c23-a05f-f08441382c05" containerName="mariadb-account-create-update" Feb 19 19:58:46 crc kubenswrapper[4813]: E0219 19:58:46.057808 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530cfe42-e9cd-45ad-9c70-eee31f9522c7" containerName="mariadb-database-create" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.057813 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="530cfe42-e9cd-45ad-9c70-eee31f9522c7" containerName="mariadb-database-create" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.057980 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="530cfe42-e9cd-45ad-9c70-eee31f9522c7" 
containerName="mariadb-database-create" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.057997 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d06c681-ae92-4c23-a05f-f08441382c05" containerName="mariadb-account-create-update" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.058839 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.064712 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b5bb7-mhv85"] Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.111091 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-config\") pod \"dnsmasq-dns-688b5bb7-mhv85\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.111161 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-dns-svc\") pod \"dnsmasq-dns-688b5bb7-mhv85\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.111220 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-ovsdbserver-nb\") pod \"dnsmasq-dns-688b5bb7-mhv85\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.111248 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-ovsdbserver-sb\") pod \"dnsmasq-dns-688b5bb7-mhv85\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.111296 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v55b5\" (UniqueName: \"kubernetes.io/projected/055c5861-4fba-4801-b657-8524cd6d8320-kube-api-access-v55b5\") pod \"dnsmasq-dns-688b5bb7-mhv85\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.113131 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-9sskl"] Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.114551 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.118200 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.118513 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.118579 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-68fmt" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.128564 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9sskl"] Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.213317 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-ovsdbserver-nb\") pod \"dnsmasq-dns-688b5bb7-mhv85\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " 
pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.213364 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-combined-ca-bundle\") pod \"placement-db-sync-9sskl\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.213395 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-scripts\") pod \"placement-db-sync-9sskl\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.213425 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-ovsdbserver-sb\") pod \"dnsmasq-dns-688b5bb7-mhv85\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.213603 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-logs\") pod \"placement-db-sync-9sskl\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.213724 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v55b5\" (UniqueName: \"kubernetes.io/projected/055c5861-4fba-4801-b657-8524cd6d8320-kube-api-access-v55b5\") pod \"dnsmasq-dns-688b5bb7-mhv85\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 
19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.213774 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-config\") pod \"dnsmasq-dns-688b5bb7-mhv85\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.213834 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sdsl\" (UniqueName: \"kubernetes.io/projected/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-kube-api-access-7sdsl\") pod \"placement-db-sync-9sskl\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.213899 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-dns-svc\") pod \"dnsmasq-dns-688b5bb7-mhv85\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.214097 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-config-data\") pod \"placement-db-sync-9sskl\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.214393 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-ovsdbserver-nb\") pod \"dnsmasq-dns-688b5bb7-mhv85\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.214570 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-ovsdbserver-sb\") pod \"dnsmasq-dns-688b5bb7-mhv85\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.214760 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-config\") pod \"dnsmasq-dns-688b5bb7-mhv85\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.214785 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-dns-svc\") pod \"dnsmasq-dns-688b5bb7-mhv85\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.234515 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v55b5\" (UniqueName: \"kubernetes.io/projected/055c5861-4fba-4801-b657-8524cd6d8320-kube-api-access-v55b5\") pod \"dnsmasq-dns-688b5bb7-mhv85\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.316933 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sdsl\" (UniqueName: \"kubernetes.io/projected/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-kube-api-access-7sdsl\") pod \"placement-db-sync-9sskl\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.317040 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-config-data\") pod \"placement-db-sync-9sskl\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.317094 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-combined-ca-bundle\") pod \"placement-db-sync-9sskl\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.317120 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-scripts\") pod \"placement-db-sync-9sskl\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.317171 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-logs\") pod \"placement-db-sync-9sskl\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.317659 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-logs\") pod \"placement-db-sync-9sskl\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.321062 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-combined-ca-bundle\") pod \"placement-db-sync-9sskl\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " 
pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.321753 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-config-data\") pod \"placement-db-sync-9sskl\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.322048 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-scripts\") pod \"placement-db-sync-9sskl\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.335841 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sdsl\" (UniqueName: \"kubernetes.io/projected/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-kube-api-access-7sdsl\") pod \"placement-db-sync-9sskl\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.381346 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.443276 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.839379 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b5bb7-mhv85"] Feb 19 19:58:46 crc kubenswrapper[4813]: I0219 19:58:46.918086 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9sskl"] Feb 19 19:58:46 crc kubenswrapper[4813]: W0219 19:58:46.924740 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda528fe9c_0064_4ed6_87dd_770fa3c0ee52.slice/crio-244681cf2c5e1a781877555008d35034d91dff34bd6a3669545d7626174d56cc WatchSource:0}: Error finding container 244681cf2c5e1a781877555008d35034d91dff34bd6a3669545d7626174d56cc: Status 404 returned error can't find the container with id 244681cf2c5e1a781877555008d35034d91dff34bd6a3669545d7626174d56cc Feb 19 19:58:47 crc kubenswrapper[4813]: I0219 19:58:47.594775 4813 generic.go:334] "Generic (PLEG): container finished" podID="055c5861-4fba-4801-b657-8524cd6d8320" containerID="d9097947797c0b9c44ff50559a41888fc421fe602e2ace039c18dd5fb8902ff1" exitCode=0 Feb 19 19:58:47 crc kubenswrapper[4813]: I0219 19:58:47.594850 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b5bb7-mhv85" event={"ID":"055c5861-4fba-4801-b657-8524cd6d8320","Type":"ContainerDied","Data":"d9097947797c0b9c44ff50559a41888fc421fe602e2ace039c18dd5fb8902ff1"} Feb 19 19:58:47 crc kubenswrapper[4813]: I0219 19:58:47.594877 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b5bb7-mhv85" event={"ID":"055c5861-4fba-4801-b657-8524cd6d8320","Type":"ContainerStarted","Data":"2e6172fcb0e84979f4b278685aae69f85e8bc563bf24bc584f111f32287b358b"} Feb 19 19:58:47 crc kubenswrapper[4813]: I0219 19:58:47.596306 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9sskl" 
event={"ID":"a528fe9c-0064-4ed6-87dd-770fa3c0ee52","Type":"ContainerStarted","Data":"53dc4e6d06be044a40965932dcb917c881872778b82e93a3321443417b9286ef"} Feb 19 19:58:47 crc kubenswrapper[4813]: I0219 19:58:47.596344 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9sskl" event={"ID":"a528fe9c-0064-4ed6-87dd-770fa3c0ee52","Type":"ContainerStarted","Data":"244681cf2c5e1a781877555008d35034d91dff34bd6a3669545d7626174d56cc"} Feb 19 19:58:47 crc kubenswrapper[4813]: I0219 19:58:47.643767 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-9sskl" podStartSLOduration=1.643744765 podStartE2EDuration="1.643744765s" podCreationTimestamp="2026-02-19 19:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:58:47.639378429 +0000 UTC m=+5346.864818970" watchObservedRunningTime="2026-02-19 19:58:47.643744765 +0000 UTC m=+5346.869185306" Feb 19 19:58:48 crc kubenswrapper[4813]: I0219 19:58:48.606470 4813 generic.go:334] "Generic (PLEG): container finished" podID="a528fe9c-0064-4ed6-87dd-770fa3c0ee52" containerID="53dc4e6d06be044a40965932dcb917c881872778b82e93a3321443417b9286ef" exitCode=0 Feb 19 19:58:48 crc kubenswrapper[4813]: I0219 19:58:48.606569 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9sskl" event={"ID":"a528fe9c-0064-4ed6-87dd-770fa3c0ee52","Type":"ContainerDied","Data":"53dc4e6d06be044a40965932dcb917c881872778b82e93a3321443417b9286ef"} Feb 19 19:58:48 crc kubenswrapper[4813]: I0219 19:58:48.609471 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b5bb7-mhv85" event={"ID":"055c5861-4fba-4801-b657-8524cd6d8320","Type":"ContainerStarted","Data":"0b792002658d31160dc7d39e7d85c65a6eaf29378e87b8a6a61f587b84aa5808"} Feb 19 19:58:48 crc kubenswrapper[4813]: I0219 19:58:48.609744 4813 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:49 crc kubenswrapper[4813]: I0219 19:58:49.931988 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:49 crc kubenswrapper[4813]: I0219 19:58:49.950609 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b5bb7-mhv85" podStartSLOduration=3.950588729 podStartE2EDuration="3.950588729s" podCreationTimestamp="2026-02-19 19:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:58:48.644304192 +0000 UTC m=+5347.869744723" watchObservedRunningTime="2026-02-19 19:58:49.950588729 +0000 UTC m=+5349.176029270" Feb 19 19:58:49 crc kubenswrapper[4813]: I0219 19:58:49.989065 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-config-data\") pod \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " Feb 19 19:58:49 crc kubenswrapper[4813]: I0219 19:58:49.989113 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-scripts\") pod \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " Feb 19 19:58:49 crc kubenswrapper[4813]: I0219 19:58:49.989130 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-combined-ca-bundle\") pod \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " Feb 19 19:58:49 crc kubenswrapper[4813]: I0219 19:58:49.989209 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7sdsl\" (UniqueName: \"kubernetes.io/projected/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-kube-api-access-7sdsl\") pod \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " Feb 19 19:58:49 crc kubenswrapper[4813]: I0219 19:58:49.989273 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-logs\") pod \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\" (UID: \"a528fe9c-0064-4ed6-87dd-770fa3c0ee52\") " Feb 19 19:58:49 crc kubenswrapper[4813]: I0219 19:58:49.989800 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-logs" (OuterVolumeSpecName: "logs") pod "a528fe9c-0064-4ed6-87dd-770fa3c0ee52" (UID: "a528fe9c-0064-4ed6-87dd-770fa3c0ee52"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:58:49 crc kubenswrapper[4813]: I0219 19:58:49.994305 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-scripts" (OuterVolumeSpecName: "scripts") pod "a528fe9c-0064-4ed6-87dd-770fa3c0ee52" (UID: "a528fe9c-0064-4ed6-87dd-770fa3c0ee52"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:49 crc kubenswrapper[4813]: I0219 19:58:49.995513 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-kube-api-access-7sdsl" (OuterVolumeSpecName: "kube-api-access-7sdsl") pod "a528fe9c-0064-4ed6-87dd-770fa3c0ee52" (UID: "a528fe9c-0064-4ed6-87dd-770fa3c0ee52"). InnerVolumeSpecName "kube-api-access-7sdsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:58:50 crc kubenswrapper[4813]: I0219 19:58:50.012025 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a528fe9c-0064-4ed6-87dd-770fa3c0ee52" (UID: "a528fe9c-0064-4ed6-87dd-770fa3c0ee52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:50 crc kubenswrapper[4813]: I0219 19:58:50.012899 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-config-data" (OuterVolumeSpecName: "config-data") pod "a528fe9c-0064-4ed6-87dd-770fa3c0ee52" (UID: "a528fe9c-0064-4ed6-87dd-770fa3c0ee52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:50 crc kubenswrapper[4813]: I0219 19:58:50.091274 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:50 crc kubenswrapper[4813]: I0219 19:58:50.091312 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:50 crc kubenswrapper[4813]: I0219 19:58:50.091324 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:50 crc kubenswrapper[4813]: I0219 19:58:50.091335 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:50 crc 
kubenswrapper[4813]: I0219 19:58:50.091347 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sdsl\" (UniqueName: \"kubernetes.io/projected/a528fe9c-0064-4ed6-87dd-770fa3c0ee52-kube-api-access-7sdsl\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:50 crc kubenswrapper[4813]: I0219 19:58:50.670808 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9sskl" event={"ID":"a528fe9c-0064-4ed6-87dd-770fa3c0ee52","Type":"ContainerDied","Data":"244681cf2c5e1a781877555008d35034d91dff34bd6a3669545d7626174d56cc"} Feb 19 19:58:50 crc kubenswrapper[4813]: I0219 19:58:50.670879 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="244681cf2c5e1a781877555008d35034d91dff34bd6a3669545d7626174d56cc" Feb 19 19:58:50 crc kubenswrapper[4813]: I0219 19:58:50.671004 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9sskl" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.097444 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-577797bdf8-4jfrw"] Feb 19 19:58:51 crc kubenswrapper[4813]: E0219 19:58:51.098093 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a528fe9c-0064-4ed6-87dd-770fa3c0ee52" containerName="placement-db-sync" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.098108 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a528fe9c-0064-4ed6-87dd-770fa3c0ee52" containerName="placement-db-sync" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.098270 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a528fe9c-0064-4ed6-87dd-770fa3c0ee52" containerName="placement-db-sync" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.099171 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.103543 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-68fmt" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.104523 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.110804 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.121265 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-577797bdf8-4jfrw"] Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.210512 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4-scripts\") pod \"placement-577797bdf8-4jfrw\" (UID: \"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4\") " pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.210586 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4-config-data\") pod \"placement-577797bdf8-4jfrw\" (UID: \"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4\") " pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.210616 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4-logs\") pod \"placement-577797bdf8-4jfrw\" (UID: \"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4\") " pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.210688 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n6hd\" (UniqueName: \"kubernetes.io/projected/dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4-kube-api-access-7n6hd\") pod \"placement-577797bdf8-4jfrw\" (UID: \"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4\") " pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.210721 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4-combined-ca-bundle\") pod \"placement-577797bdf8-4jfrw\" (UID: \"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4\") " pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.311961 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4-combined-ca-bundle\") pod \"placement-577797bdf8-4jfrw\" (UID: \"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4\") " pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.312130 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4-scripts\") pod \"placement-577797bdf8-4jfrw\" (UID: \"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4\") " pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.312200 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4-config-data\") pod \"placement-577797bdf8-4jfrw\" (UID: \"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4\") " pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.312229 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4-logs\") pod \"placement-577797bdf8-4jfrw\" (UID: \"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4\") " pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.312277 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n6hd\" (UniqueName: \"kubernetes.io/projected/dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4-kube-api-access-7n6hd\") pod \"placement-577797bdf8-4jfrw\" (UID: \"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4\") " pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.312756 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4-logs\") pod \"placement-577797bdf8-4jfrw\" (UID: \"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4\") " pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.317316 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4-config-data\") pod \"placement-577797bdf8-4jfrw\" (UID: \"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4\") " pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.317361 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4-scripts\") pod \"placement-577797bdf8-4jfrw\" (UID: \"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4\") " pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.322542 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4-combined-ca-bundle\") pod 
\"placement-577797bdf8-4jfrw\" (UID: \"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4\") " pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.328647 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n6hd\" (UniqueName: \"kubernetes.io/projected/dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4-kube-api-access-7n6hd\") pod \"placement-577797bdf8-4jfrw\" (UID: \"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4\") " pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.417734 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:51 crc kubenswrapper[4813]: I0219 19:58:51.903524 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-577797bdf8-4jfrw"] Feb 19 19:58:52 crc kubenswrapper[4813]: I0219 19:58:52.692819 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-577797bdf8-4jfrw" event={"ID":"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4","Type":"ContainerStarted","Data":"89b670bbe82bc26025445706423dbd3ddee47641b89cdf5b80be7edfce26d169"} Feb 19 19:58:52 crc kubenswrapper[4813]: I0219 19:58:52.693285 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-577797bdf8-4jfrw" event={"ID":"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4","Type":"ContainerStarted","Data":"168333dbad23a41bf60fc5227d517ed8dd83ba786367e1255c2828438ba58b7d"} Feb 19 19:58:52 crc kubenswrapper[4813]: I0219 19:58:52.693329 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-577797bdf8-4jfrw" event={"ID":"dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4","Type":"ContainerStarted","Data":"430cd03137746252978d2bf170cc374792214150adc160b1954d1a34c7226a61"} Feb 19 19:58:52 crc kubenswrapper[4813]: I0219 19:58:52.693367 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:52 crc 
kubenswrapper[4813]: I0219 19:58:52.693525 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-577797bdf8-4jfrw" Feb 19 19:58:56 crc kubenswrapper[4813]: I0219 19:58:56.383809 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 19:58:56 crc kubenswrapper[4813]: I0219 19:58:56.407169 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-577797bdf8-4jfrw" podStartSLOduration=5.4071116759999995 podStartE2EDuration="5.407111676s" podCreationTimestamp="2026-02-19 19:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:58:52.717315162 +0000 UTC m=+5351.942755723" watchObservedRunningTime="2026-02-19 19:58:56.407111676 +0000 UTC m=+5355.632552257" Feb 19 19:58:56 crc kubenswrapper[4813]: I0219 19:58:56.486789 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c445499f7-5vdmf"] Feb 19 19:58:56 crc kubenswrapper[4813]: I0219 19:58:56.487032 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" podUID="e256e6bb-57e4-4319-988b-80a9df8fe072" containerName="dnsmasq-dns" containerID="cri-o://c155729b6d39703ccb405abd2303d8295a7818228e90d03688ad2d158e266375" gracePeriod=10 Feb 19 19:58:56 crc kubenswrapper[4813]: I0219 19:58:56.725212 4813 generic.go:334] "Generic (PLEG): container finished" podID="e256e6bb-57e4-4319-988b-80a9df8fe072" containerID="c155729b6d39703ccb405abd2303d8295a7818228e90d03688ad2d158e266375" exitCode=0 Feb 19 19:58:56 crc kubenswrapper[4813]: I0219 19:58:56.725280 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" 
event={"ID":"e256e6bb-57e4-4319-988b-80a9df8fe072","Type":"ContainerDied","Data":"c155729b6d39703ccb405abd2303d8295a7818228e90d03688ad2d158e266375"} Feb 19 19:58:56 crc kubenswrapper[4813]: I0219 19:58:56.944529 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.008913 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-ovsdbserver-sb\") pod \"e256e6bb-57e4-4319-988b-80a9df8fe072\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.009083 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-ovsdbserver-nb\") pod \"e256e6bb-57e4-4319-988b-80a9df8fe072\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.009168 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-config\") pod \"e256e6bb-57e4-4319-988b-80a9df8fe072\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.009224 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-dns-svc\") pod \"e256e6bb-57e4-4319-988b-80a9df8fe072\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.009300 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxmw6\" (UniqueName: \"kubernetes.io/projected/e256e6bb-57e4-4319-988b-80a9df8fe072-kube-api-access-rxmw6\") 
pod \"e256e6bb-57e4-4319-988b-80a9df8fe072\" (UID: \"e256e6bb-57e4-4319-988b-80a9df8fe072\") " Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.014690 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e256e6bb-57e4-4319-988b-80a9df8fe072-kube-api-access-rxmw6" (OuterVolumeSpecName: "kube-api-access-rxmw6") pod "e256e6bb-57e4-4319-988b-80a9df8fe072" (UID: "e256e6bb-57e4-4319-988b-80a9df8fe072"). InnerVolumeSpecName "kube-api-access-rxmw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.050699 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e256e6bb-57e4-4319-988b-80a9df8fe072" (UID: "e256e6bb-57e4-4319-988b-80a9df8fe072"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.050720 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e256e6bb-57e4-4319-988b-80a9df8fe072" (UID: "e256e6bb-57e4-4319-988b-80a9df8fe072"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.056257 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-config" (OuterVolumeSpecName: "config") pod "e256e6bb-57e4-4319-988b-80a9df8fe072" (UID: "e256e6bb-57e4-4319-988b-80a9df8fe072"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.056891 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e256e6bb-57e4-4319-988b-80a9df8fe072" (UID: "e256e6bb-57e4-4319-988b-80a9df8fe072"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.111661 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.111700 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.111710 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.111722 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxmw6\" (UniqueName: \"kubernetes.io/projected/e256e6bb-57e4-4319-988b-80a9df8fe072-kube-api-access-rxmw6\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.111730 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e256e6bb-57e4-4319-988b-80a9df8fe072-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.471677 4813 scope.go:117] "RemoveContainer" 
containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:58:57 crc kubenswrapper[4813]: E0219 19:58:57.472011 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.737553 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" event={"ID":"e256e6bb-57e4-4319-988b-80a9df8fe072","Type":"ContainerDied","Data":"93be82c4dfcd3824fd075a99083d90ce732191193c25c7b6dc1f264990ade579"} Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.737635 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c445499f7-5vdmf" Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.738240 4813 scope.go:117] "RemoveContainer" containerID="c155729b6d39703ccb405abd2303d8295a7818228e90d03688ad2d158e266375" Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.765428 4813 scope.go:117] "RemoveContainer" containerID="2a6918899f3dc9959745b442aed8ebc54824c400aec76706d1c5d50c22cffbd1" Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.771426 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c445499f7-5vdmf"] Feb 19 19:58:57 crc kubenswrapper[4813]: I0219 19:58:57.783233 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c445499f7-5vdmf"] Feb 19 19:58:59 crc kubenswrapper[4813]: I0219 19:58:59.483284 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e256e6bb-57e4-4319-988b-80a9df8fe072" path="/var/lib/kubelet/pods/e256e6bb-57e4-4319-988b-80a9df8fe072/volumes" Feb 19 19:59:08 crc kubenswrapper[4813]: I0219 19:59:08.471608 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 19:59:08 crc kubenswrapper[4813]: I0219 19:59:08.837023 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"c5ad4b652219a5a2c769519cd60ad08e92b1309893f1bb32935c2a23f7bc230a"} Feb 19 19:59:09 crc kubenswrapper[4813]: I0219 19:59:09.724191 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dlj8s"] Feb 19 19:59:09 crc kubenswrapper[4813]: E0219 19:59:09.724868 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e256e6bb-57e4-4319-988b-80a9df8fe072" containerName="init" Feb 19 19:59:09 crc kubenswrapper[4813]: I0219 19:59:09.724884 4813 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e256e6bb-57e4-4319-988b-80a9df8fe072" containerName="init" Feb 19 19:59:09 crc kubenswrapper[4813]: E0219 19:59:09.724906 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e256e6bb-57e4-4319-988b-80a9df8fe072" containerName="dnsmasq-dns" Feb 19 19:59:09 crc kubenswrapper[4813]: I0219 19:59:09.724915 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e256e6bb-57e4-4319-988b-80a9df8fe072" containerName="dnsmasq-dns" Feb 19 19:59:09 crc kubenswrapper[4813]: I0219 19:59:09.725129 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e256e6bb-57e4-4319-988b-80a9df8fe072" containerName="dnsmasq-dns" Feb 19 19:59:09 crc kubenswrapper[4813]: I0219 19:59:09.726613 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlj8s" Feb 19 19:59:09 crc kubenswrapper[4813]: I0219 19:59:09.736693 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlj8s"] Feb 19 19:59:09 crc kubenswrapper[4813]: I0219 19:59:09.913167 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d45f5\" (UniqueName: \"kubernetes.io/projected/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-kube-api-access-d45f5\") pod \"community-operators-dlj8s\" (UID: \"b59ac4fc-f79a-4386-88d1-c315fa6da8c7\") " pod="openshift-marketplace/community-operators-dlj8s" Feb 19 19:59:09 crc kubenswrapper[4813]: I0219 19:59:09.913324 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-catalog-content\") pod \"community-operators-dlj8s\" (UID: \"b59ac4fc-f79a-4386-88d1-c315fa6da8c7\") " pod="openshift-marketplace/community-operators-dlj8s" Feb 19 19:59:09 crc kubenswrapper[4813]: I0219 19:59:09.913451 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-utilities\") pod \"community-operators-dlj8s\" (UID: \"b59ac4fc-f79a-4386-88d1-c315fa6da8c7\") " pod="openshift-marketplace/community-operators-dlj8s"
Feb 19 19:59:10 crc kubenswrapper[4813]: I0219 19:59:10.014575 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-utilities\") pod \"community-operators-dlj8s\" (UID: \"b59ac4fc-f79a-4386-88d1-c315fa6da8c7\") " pod="openshift-marketplace/community-operators-dlj8s"
Feb 19 19:59:10 crc kubenswrapper[4813]: I0219 19:59:10.014632 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d45f5\" (UniqueName: \"kubernetes.io/projected/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-kube-api-access-d45f5\") pod \"community-operators-dlj8s\" (UID: \"b59ac4fc-f79a-4386-88d1-c315fa6da8c7\") " pod="openshift-marketplace/community-operators-dlj8s"
Feb 19 19:59:10 crc kubenswrapper[4813]: I0219 19:59:10.014701 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-catalog-content\") pod \"community-operators-dlj8s\" (UID: \"b59ac4fc-f79a-4386-88d1-c315fa6da8c7\") " pod="openshift-marketplace/community-operators-dlj8s"
Feb 19 19:59:10 crc kubenswrapper[4813]: I0219 19:59:10.015066 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-utilities\") pod \"community-operators-dlj8s\" (UID: \"b59ac4fc-f79a-4386-88d1-c315fa6da8c7\") " pod="openshift-marketplace/community-operators-dlj8s"
Feb 19 19:59:10 crc kubenswrapper[4813]: I0219 19:59:10.015090 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-catalog-content\") pod \"community-operators-dlj8s\" (UID: \"b59ac4fc-f79a-4386-88d1-c315fa6da8c7\") " pod="openshift-marketplace/community-operators-dlj8s"
Feb 19 19:59:10 crc kubenswrapper[4813]: I0219 19:59:10.042528 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d45f5\" (UniqueName: \"kubernetes.io/projected/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-kube-api-access-d45f5\") pod \"community-operators-dlj8s\" (UID: \"b59ac4fc-f79a-4386-88d1-c315fa6da8c7\") " pod="openshift-marketplace/community-operators-dlj8s"
Feb 19 19:59:10 crc kubenswrapper[4813]: I0219 19:59:10.049233 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlj8s"
Feb 19 19:59:10 crc kubenswrapper[4813]: I0219 19:59:10.532592 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlj8s"]
Feb 19 19:59:10 crc kubenswrapper[4813]: I0219 19:59:10.856051 4813 generic.go:334] "Generic (PLEG): container finished" podID="b59ac4fc-f79a-4386-88d1-c315fa6da8c7" containerID="1a282165a99e1e3b7a80a787261bca175606cf08bf792f54aa8f7ae918c52df3" exitCode=0
Feb 19 19:59:10 crc kubenswrapper[4813]: I0219 19:59:10.856118 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlj8s" event={"ID":"b59ac4fc-f79a-4386-88d1-c315fa6da8c7","Type":"ContainerDied","Data":"1a282165a99e1e3b7a80a787261bca175606cf08bf792f54aa8f7ae918c52df3"}
Feb 19 19:59:10 crc kubenswrapper[4813]: I0219 19:59:10.856445 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlj8s" event={"ID":"b59ac4fc-f79a-4386-88d1-c315fa6da8c7","Type":"ContainerStarted","Data":"88a5cd967ccaf9f3bc9ce8a389f95497bcc700c31ee3f863259bb5424b9c84f2"}
Feb 19 19:59:11 crc kubenswrapper[4813]: I0219 19:59:11.870502 4813 generic.go:334] "Generic (PLEG): container finished" podID="b59ac4fc-f79a-4386-88d1-c315fa6da8c7" containerID="3162f842f56769bf5a8dc1a2f6a80bf65492f77310ec16c1099a49774e3a3b6c" exitCode=0
Feb 19 19:59:11 crc kubenswrapper[4813]: I0219 19:59:11.870587 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlj8s" event={"ID":"b59ac4fc-f79a-4386-88d1-c315fa6da8c7","Type":"ContainerDied","Data":"3162f842f56769bf5a8dc1a2f6a80bf65492f77310ec16c1099a49774e3a3b6c"}
Feb 19 19:59:12 crc kubenswrapper[4813]: I0219 19:59:12.897362 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlj8s" event={"ID":"b59ac4fc-f79a-4386-88d1-c315fa6da8c7","Type":"ContainerStarted","Data":"32935bfca80fce241d8a96b0c517f4422cdbaad3a7b0a65a62fd4350379fef3e"}
Feb 19 19:59:12 crc kubenswrapper[4813]: I0219 19:59:12.924222 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dlj8s" podStartSLOduration=2.323120092 podStartE2EDuration="3.92419963s" podCreationTimestamp="2026-02-19 19:59:09 +0000 UTC" firstStartedPulling="2026-02-19 19:59:10.858789265 +0000 UTC m=+5370.084229836" lastFinishedPulling="2026-02-19 19:59:12.459868833 +0000 UTC m=+5371.685309374" observedRunningTime="2026-02-19 19:59:12.923526889 +0000 UTC m=+5372.148967430" watchObservedRunningTime="2026-02-19 19:59:12.92419963 +0000 UTC m=+5372.149640171"
Feb 19 19:59:20 crc kubenswrapper[4813]: I0219 19:59:20.050093 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dlj8s"
Feb 19 19:59:20 crc kubenswrapper[4813]: I0219 19:59:20.051176 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dlj8s"
Feb 19 19:59:20 crc kubenswrapper[4813]: I0219 19:59:20.099095 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dlj8s"
Feb 19 19:59:21 crc kubenswrapper[4813]: I0219 19:59:21.027345 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dlj8s"
Feb 19 19:59:21 crc kubenswrapper[4813]: I0219 19:59:21.088434 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlj8s"]
Feb 19 19:59:22 crc kubenswrapper[4813]: I0219 19:59:22.475627 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-577797bdf8-4jfrw"
Feb 19 19:59:22 crc kubenswrapper[4813]: I0219 19:59:22.487023 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-577797bdf8-4jfrw"
Feb 19 19:59:22 crc kubenswrapper[4813]: I0219 19:59:22.992560 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dlj8s" podUID="b59ac4fc-f79a-4386-88d1-c315fa6da8c7" containerName="registry-server" containerID="cri-o://32935bfca80fce241d8a96b0c517f4422cdbaad3a7b0a65a62fd4350379fef3e" gracePeriod=2
Feb 19 19:59:23 crc kubenswrapper[4813]: I0219 19:59:23.617001 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlj8s"
Feb 19 19:59:23 crc kubenswrapper[4813]: I0219 19:59:23.781549 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-catalog-content\") pod \"b59ac4fc-f79a-4386-88d1-c315fa6da8c7\" (UID: \"b59ac4fc-f79a-4386-88d1-c315fa6da8c7\") "
Feb 19 19:59:23 crc kubenswrapper[4813]: I0219 19:59:23.781652 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-utilities\") pod \"b59ac4fc-f79a-4386-88d1-c315fa6da8c7\" (UID: \"b59ac4fc-f79a-4386-88d1-c315fa6da8c7\") "
Feb 19 19:59:23 crc kubenswrapper[4813]: I0219 19:59:23.781750 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d45f5\" (UniqueName: \"kubernetes.io/projected/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-kube-api-access-d45f5\") pod \"b59ac4fc-f79a-4386-88d1-c315fa6da8c7\" (UID: \"b59ac4fc-f79a-4386-88d1-c315fa6da8c7\") "
Feb 19 19:59:23 crc kubenswrapper[4813]: I0219 19:59:23.782803 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-utilities" (OuterVolumeSpecName: "utilities") pod "b59ac4fc-f79a-4386-88d1-c315fa6da8c7" (UID: "b59ac4fc-f79a-4386-88d1-c315fa6da8c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:59:23 crc kubenswrapper[4813]: I0219 19:59:23.788281 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-kube-api-access-d45f5" (OuterVolumeSpecName: "kube-api-access-d45f5") pod "b59ac4fc-f79a-4386-88d1-c315fa6da8c7" (UID: "b59ac4fc-f79a-4386-88d1-c315fa6da8c7"). InnerVolumeSpecName "kube-api-access-d45f5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:59:23 crc kubenswrapper[4813]: I0219 19:59:23.843619 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b59ac4fc-f79a-4386-88d1-c315fa6da8c7" (UID: "b59ac4fc-f79a-4386-88d1-c315fa6da8c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:59:23 crc kubenswrapper[4813]: I0219 19:59:23.883266 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 19:59:23 crc kubenswrapper[4813]: I0219 19:59:23.883298 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 19:59:23 crc kubenswrapper[4813]: I0219 19:59:23.883309 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d45f5\" (UniqueName: \"kubernetes.io/projected/b59ac4fc-f79a-4386-88d1-c315fa6da8c7-kube-api-access-d45f5\") on node \"crc\" DevicePath \"\""
Feb 19 19:59:24 crc kubenswrapper[4813]: I0219 19:59:24.001387 4813 generic.go:334] "Generic (PLEG): container finished" podID="b59ac4fc-f79a-4386-88d1-c315fa6da8c7" containerID="32935bfca80fce241d8a96b0c517f4422cdbaad3a7b0a65a62fd4350379fef3e" exitCode=0
Feb 19 19:59:24 crc kubenswrapper[4813]: I0219 19:59:24.001429 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlj8s" event={"ID":"b59ac4fc-f79a-4386-88d1-c315fa6da8c7","Type":"ContainerDied","Data":"32935bfca80fce241d8a96b0c517f4422cdbaad3a7b0a65a62fd4350379fef3e"}
Feb 19 19:59:24 crc kubenswrapper[4813]: I0219 19:59:24.001455 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlj8s" event={"ID":"b59ac4fc-f79a-4386-88d1-c315fa6da8c7","Type":"ContainerDied","Data":"88a5cd967ccaf9f3bc9ce8a389f95497bcc700c31ee3f863259bb5424b9c84f2"}
Feb 19 19:59:24 crc kubenswrapper[4813]: I0219 19:59:24.001472 4813 scope.go:117] "RemoveContainer" containerID="32935bfca80fce241d8a96b0c517f4422cdbaad3a7b0a65a62fd4350379fef3e"
Feb 19 19:59:24 crc kubenswrapper[4813]: I0219 19:59:24.001584 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlj8s"
Feb 19 19:59:24 crc kubenswrapper[4813]: I0219 19:59:24.037067 4813 scope.go:117] "RemoveContainer" containerID="3162f842f56769bf5a8dc1a2f6a80bf65492f77310ec16c1099a49774e3a3b6c"
Feb 19 19:59:24 crc kubenswrapper[4813]: I0219 19:59:24.044412 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlj8s"]
Feb 19 19:59:24 crc kubenswrapper[4813]: I0219 19:59:24.058348 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dlj8s"]
Feb 19 19:59:24 crc kubenswrapper[4813]: I0219 19:59:24.059007 4813 scope.go:117] "RemoveContainer" containerID="1a282165a99e1e3b7a80a787261bca175606cf08bf792f54aa8f7ae918c52df3"
Feb 19 19:59:24 crc kubenswrapper[4813]: I0219 19:59:24.101532 4813 scope.go:117] "RemoveContainer" containerID="32935bfca80fce241d8a96b0c517f4422cdbaad3a7b0a65a62fd4350379fef3e"
Feb 19 19:59:24 crc kubenswrapper[4813]: E0219 19:59:24.101967 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32935bfca80fce241d8a96b0c517f4422cdbaad3a7b0a65a62fd4350379fef3e\": container with ID starting with 32935bfca80fce241d8a96b0c517f4422cdbaad3a7b0a65a62fd4350379fef3e not found: ID does not exist" containerID="32935bfca80fce241d8a96b0c517f4422cdbaad3a7b0a65a62fd4350379fef3e"
Feb 19 19:59:24 crc kubenswrapper[4813]: I0219 19:59:24.102008 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32935bfca80fce241d8a96b0c517f4422cdbaad3a7b0a65a62fd4350379fef3e"} err="failed to get container status \"32935bfca80fce241d8a96b0c517f4422cdbaad3a7b0a65a62fd4350379fef3e\": rpc error: code = NotFound desc = could not find container \"32935bfca80fce241d8a96b0c517f4422cdbaad3a7b0a65a62fd4350379fef3e\": container with ID starting with 32935bfca80fce241d8a96b0c517f4422cdbaad3a7b0a65a62fd4350379fef3e not found: ID does not exist"
Feb 19 19:59:24 crc kubenswrapper[4813]: I0219 19:59:24.102032 4813 scope.go:117] "RemoveContainer" containerID="3162f842f56769bf5a8dc1a2f6a80bf65492f77310ec16c1099a49774e3a3b6c"
Feb 19 19:59:24 crc kubenswrapper[4813]: E0219 19:59:24.104330 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3162f842f56769bf5a8dc1a2f6a80bf65492f77310ec16c1099a49774e3a3b6c\": container with ID starting with 3162f842f56769bf5a8dc1a2f6a80bf65492f77310ec16c1099a49774e3a3b6c not found: ID does not exist" containerID="3162f842f56769bf5a8dc1a2f6a80bf65492f77310ec16c1099a49774e3a3b6c"
Feb 19 19:59:24 crc kubenswrapper[4813]: I0219 19:59:24.104361 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3162f842f56769bf5a8dc1a2f6a80bf65492f77310ec16c1099a49774e3a3b6c"} err="failed to get container status \"3162f842f56769bf5a8dc1a2f6a80bf65492f77310ec16c1099a49774e3a3b6c\": rpc error: code = NotFound desc = could not find container \"3162f842f56769bf5a8dc1a2f6a80bf65492f77310ec16c1099a49774e3a3b6c\": container with ID starting with 3162f842f56769bf5a8dc1a2f6a80bf65492f77310ec16c1099a49774e3a3b6c not found: ID does not exist"
Feb 19 19:59:24 crc kubenswrapper[4813]: I0219 19:59:24.104384 4813 scope.go:117] "RemoveContainer" containerID="1a282165a99e1e3b7a80a787261bca175606cf08bf792f54aa8f7ae918c52df3"
Feb 19 19:59:24 crc kubenswrapper[4813]: E0219 19:59:24.104757 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a282165a99e1e3b7a80a787261bca175606cf08bf792f54aa8f7ae918c52df3\": container with ID starting with 1a282165a99e1e3b7a80a787261bca175606cf08bf792f54aa8f7ae918c52df3 not found: ID does not exist" containerID="1a282165a99e1e3b7a80a787261bca175606cf08bf792f54aa8f7ae918c52df3"
Feb 19 19:59:24 crc kubenswrapper[4813]: I0219 19:59:24.104818 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a282165a99e1e3b7a80a787261bca175606cf08bf792f54aa8f7ae918c52df3"} err="failed to get container status \"1a282165a99e1e3b7a80a787261bca175606cf08bf792f54aa8f7ae918c52df3\": rpc error: code = NotFound desc = could not find container \"1a282165a99e1e3b7a80a787261bca175606cf08bf792f54aa8f7ae918c52df3\": container with ID starting with 1a282165a99e1e3b7a80a787261bca175606cf08bf792f54aa8f7ae918c52df3 not found: ID does not exist"
Feb 19 19:59:25 crc kubenswrapper[4813]: I0219 19:59:25.488294 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b59ac4fc-f79a-4386-88d1-c315fa6da8c7" path="/var/lib/kubelet/pods/b59ac4fc-f79a-4386-88d1-c315fa6da8c7/volumes"
Feb 19 19:59:40 crc kubenswrapper[4813]: E0219 19:59:40.426485 4813 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.69:50128->38.102.83.69:38045: write tcp 38.102.83.69:50128->38.102.83.69:38045: write: broken pipe
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.315070 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-fm8jf"]
Feb 19 19:59:43 crc kubenswrapper[4813]: E0219 19:59:43.315900 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b59ac4fc-f79a-4386-88d1-c315fa6da8c7" containerName="extract-utilities"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.315914 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b59ac4fc-f79a-4386-88d1-c315fa6da8c7" containerName="extract-utilities"
Feb 19 19:59:43 crc kubenswrapper[4813]: E0219 19:59:43.315924 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b59ac4fc-f79a-4386-88d1-c315fa6da8c7" containerName="registry-server"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.315930 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b59ac4fc-f79a-4386-88d1-c315fa6da8c7" containerName="registry-server"
Feb 19 19:59:43 crc kubenswrapper[4813]: E0219 19:59:43.315939 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b59ac4fc-f79a-4386-88d1-c315fa6da8c7" containerName="extract-content"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.315946 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b59ac4fc-f79a-4386-88d1-c315fa6da8c7" containerName="extract-content"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.316112 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b59ac4fc-f79a-4386-88d1-c315fa6da8c7" containerName="registry-server"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.316761 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fm8jf"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.326485 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fm8jf"]
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.408595 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73ed1a04-38ca-473b-8d0d-6412fc7fc147-operator-scripts\") pod \"nova-api-db-create-fm8jf\" (UID: \"73ed1a04-38ca-473b-8d0d-6412fc7fc147\") " pod="openstack/nova-api-db-create-fm8jf"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.408698 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq5bd\" (UniqueName: \"kubernetes.io/projected/73ed1a04-38ca-473b-8d0d-6412fc7fc147-kube-api-access-sq5bd\") pod \"nova-api-db-create-fm8jf\" (UID: \"73ed1a04-38ca-473b-8d0d-6412fc7fc147\") " pod="openstack/nova-api-db-create-fm8jf"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.411526 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xg69l"]
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.412671 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xg69l"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.427649 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xg69l"]
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.510371 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h89pp\" (UniqueName: \"kubernetes.io/projected/5fd7e145-bf0d-469b-880d-4d1086f799e2-kube-api-access-h89pp\") pod \"nova-cell0-db-create-xg69l\" (UID: \"5fd7e145-bf0d-469b-880d-4d1086f799e2\") " pod="openstack/nova-cell0-db-create-xg69l"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.510441 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73ed1a04-38ca-473b-8d0d-6412fc7fc147-operator-scripts\") pod \"nova-api-db-create-fm8jf\" (UID: \"73ed1a04-38ca-473b-8d0d-6412fc7fc147\") " pod="openstack/nova-api-db-create-fm8jf"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.510477 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fd7e145-bf0d-469b-880d-4d1086f799e2-operator-scripts\") pod \"nova-cell0-db-create-xg69l\" (UID: \"5fd7e145-bf0d-469b-880d-4d1086f799e2\") " pod="openstack/nova-cell0-db-create-xg69l"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.510592 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq5bd\" (UniqueName: \"kubernetes.io/projected/73ed1a04-38ca-473b-8d0d-6412fc7fc147-kube-api-access-sq5bd\") pod \"nova-api-db-create-fm8jf\" (UID: \"73ed1a04-38ca-473b-8d0d-6412fc7fc147\") " pod="openstack/nova-api-db-create-fm8jf"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.511841 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73ed1a04-38ca-473b-8d0d-6412fc7fc147-operator-scripts\") pod \"nova-api-db-create-fm8jf\" (UID: \"73ed1a04-38ca-473b-8d0d-6412fc7fc147\") " pod="openstack/nova-api-db-create-fm8jf"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.519693 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-zzr5k"]
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.521065 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zzr5k"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.537144 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-88c2-account-create-update-687tm"]
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.538560 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-88c2-account-create-update-687tm"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.541240 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.552479 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zzr5k"]
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.560870 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq5bd\" (UniqueName: \"kubernetes.io/projected/73ed1a04-38ca-473b-8d0d-6412fc7fc147-kube-api-access-sq5bd\") pod \"nova-api-db-create-fm8jf\" (UID: \"73ed1a04-38ca-473b-8d0d-6412fc7fc147\") " pod="openstack/nova-api-db-create-fm8jf"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.578178 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-88c2-account-create-update-687tm"]
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.611986 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h89pp\" (UniqueName: \"kubernetes.io/projected/5fd7e145-bf0d-469b-880d-4d1086f799e2-kube-api-access-h89pp\") pod \"nova-cell0-db-create-xg69l\" (UID: \"5fd7e145-bf0d-469b-880d-4d1086f799e2\") " pod="openstack/nova-cell0-db-create-xg69l"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.612071 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fd7e145-bf0d-469b-880d-4d1086f799e2-operator-scripts\") pod \"nova-cell0-db-create-xg69l\" (UID: \"5fd7e145-bf0d-469b-880d-4d1086f799e2\") " pod="openstack/nova-cell0-db-create-xg69l"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.612933 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fd7e145-bf0d-469b-880d-4d1086f799e2-operator-scripts\") pod \"nova-cell0-db-create-xg69l\" (UID: \"5fd7e145-bf0d-469b-880d-4d1086f799e2\") " pod="openstack/nova-cell0-db-create-xg69l"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.613399 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8bhl\" (UniqueName: \"kubernetes.io/projected/36727543-174a-48ce-b5cc-0abd04f85e4c-kube-api-access-l8bhl\") pod \"nova-cell1-db-create-zzr5k\" (UID: \"36727543-174a-48ce-b5cc-0abd04f85e4c\") " pod="openstack/nova-cell1-db-create-zzr5k"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.613494 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36727543-174a-48ce-b5cc-0abd04f85e4c-operator-scripts\") pod \"nova-cell1-db-create-zzr5k\" (UID: \"36727543-174a-48ce-b5cc-0abd04f85e4c\") " pod="openstack/nova-cell1-db-create-zzr5k"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.628128 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h89pp\" (UniqueName: \"kubernetes.io/projected/5fd7e145-bf0d-469b-880d-4d1086f799e2-kube-api-access-h89pp\") pod \"nova-cell0-db-create-xg69l\" (UID: \"5fd7e145-bf0d-469b-880d-4d1086f799e2\") " pod="openstack/nova-cell0-db-create-xg69l"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.633884 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fm8jf"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.714737 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4s8m\" (UniqueName: \"kubernetes.io/projected/712b0a56-84d1-4ab3-a0e0-5237a748b43d-kube-api-access-n4s8m\") pod \"nova-api-88c2-account-create-update-687tm\" (UID: \"712b0a56-84d1-4ab3-a0e0-5237a748b43d\") " pod="openstack/nova-api-88c2-account-create-update-687tm"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.715021 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bhl\" (UniqueName: \"kubernetes.io/projected/36727543-174a-48ce-b5cc-0abd04f85e4c-kube-api-access-l8bhl\") pod \"nova-cell1-db-create-zzr5k\" (UID: \"36727543-174a-48ce-b5cc-0abd04f85e4c\") " pod="openstack/nova-cell1-db-create-zzr5k"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.715116 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36727543-174a-48ce-b5cc-0abd04f85e4c-operator-scripts\") pod \"nova-cell1-db-create-zzr5k\" (UID: \"36727543-174a-48ce-b5cc-0abd04f85e4c\") " pod="openstack/nova-cell1-db-create-zzr5k"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.715271 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/712b0a56-84d1-4ab3-a0e0-5237a748b43d-operator-scripts\") pod \"nova-api-88c2-account-create-update-687tm\" (UID: \"712b0a56-84d1-4ab3-a0e0-5237a748b43d\") " pod="openstack/nova-api-88c2-account-create-update-687tm"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.715869 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36727543-174a-48ce-b5cc-0abd04f85e4c-operator-scripts\") pod \"nova-cell1-db-create-zzr5k\" (UID: \"36727543-174a-48ce-b5cc-0abd04f85e4c\") " pod="openstack/nova-cell1-db-create-zzr5k"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.727537 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-10bf-account-create-update-zk5c2"]
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.728754 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-10bf-account-create-update-zk5c2"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.731057 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.736085 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-10bf-account-create-update-zk5c2"]
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.741621 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8bhl\" (UniqueName: \"kubernetes.io/projected/36727543-174a-48ce-b5cc-0abd04f85e4c-kube-api-access-l8bhl\") pod \"nova-cell1-db-create-zzr5k\" (UID: \"36727543-174a-48ce-b5cc-0abd04f85e4c\") " pod="openstack/nova-cell1-db-create-zzr5k"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.745725 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xg69l"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.821763 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl7j6\" (UniqueName: \"kubernetes.io/projected/0949f55f-f63e-4af5-804f-57500e6d83f2-kube-api-access-sl7j6\") pod \"nova-cell0-10bf-account-create-update-zk5c2\" (UID: \"0949f55f-f63e-4af5-804f-57500e6d83f2\") " pod="openstack/nova-cell0-10bf-account-create-update-zk5c2"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.821862 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/712b0a56-84d1-4ab3-a0e0-5237a748b43d-operator-scripts\") pod \"nova-api-88c2-account-create-update-687tm\" (UID: \"712b0a56-84d1-4ab3-a0e0-5237a748b43d\") " pod="openstack/nova-api-88c2-account-create-update-687tm"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.821894 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4s8m\" (UniqueName: \"kubernetes.io/projected/712b0a56-84d1-4ab3-a0e0-5237a748b43d-kube-api-access-n4s8m\") pod \"nova-api-88c2-account-create-update-687tm\" (UID: \"712b0a56-84d1-4ab3-a0e0-5237a748b43d\") " pod="openstack/nova-api-88c2-account-create-update-687tm"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.823033 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0949f55f-f63e-4af5-804f-57500e6d83f2-operator-scripts\") pod \"nova-cell0-10bf-account-create-update-zk5c2\" (UID: \"0949f55f-f63e-4af5-804f-57500e6d83f2\") " pod="openstack/nova-cell0-10bf-account-create-update-zk5c2"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.823487 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/712b0a56-84d1-4ab3-a0e0-5237a748b43d-operator-scripts\") pod \"nova-api-88c2-account-create-update-687tm\" (UID: \"712b0a56-84d1-4ab3-a0e0-5237a748b43d\") " pod="openstack/nova-api-88c2-account-create-update-687tm"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.841569 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zzr5k"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.841803 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4s8m\" (UniqueName: \"kubernetes.io/projected/712b0a56-84d1-4ab3-a0e0-5237a748b43d-kube-api-access-n4s8m\") pod \"nova-api-88c2-account-create-update-687tm\" (UID: \"712b0a56-84d1-4ab3-a0e0-5237a748b43d\") " pod="openstack/nova-api-88c2-account-create-update-687tm"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.895457 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-88c2-account-create-update-687tm"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.926685 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl7j6\" (UniqueName: \"kubernetes.io/projected/0949f55f-f63e-4af5-804f-57500e6d83f2-kube-api-access-sl7j6\") pod \"nova-cell0-10bf-account-create-update-zk5c2\" (UID: \"0949f55f-f63e-4af5-804f-57500e6d83f2\") " pod="openstack/nova-cell0-10bf-account-create-update-zk5c2"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.926766 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0949f55f-f63e-4af5-804f-57500e6d83f2-operator-scripts\") pod \"nova-cell0-10bf-account-create-update-zk5c2\" (UID: \"0949f55f-f63e-4af5-804f-57500e6d83f2\") " pod="openstack/nova-cell0-10bf-account-create-update-zk5c2"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.927526 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0949f55f-f63e-4af5-804f-57500e6d83f2-operator-scripts\") pod \"nova-cell0-10bf-account-create-update-zk5c2\" (UID: \"0949f55f-f63e-4af5-804f-57500e6d83f2\") " pod="openstack/nova-cell0-10bf-account-create-update-zk5c2"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.929977 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-22c4-account-create-update-9tzrv"]
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.931125 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-22c4-account-create-update-9tzrv"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.933322 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.940786 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-22c4-account-create-update-9tzrv"]
Feb 19 19:59:43 crc kubenswrapper[4813]: I0219 19:59:43.958042 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl7j6\" (UniqueName: \"kubernetes.io/projected/0949f55f-f63e-4af5-804f-57500e6d83f2-kube-api-access-sl7j6\") pod \"nova-cell0-10bf-account-create-update-zk5c2\" (UID: \"0949f55f-f63e-4af5-804f-57500e6d83f2\") " pod="openstack/nova-cell0-10bf-account-create-update-zk5c2"
Feb 19 19:59:44 crc kubenswrapper[4813]: I0219 19:59:44.028706 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/034ceb6c-2c10-4059-8f04-007215596cf8-operator-scripts\") pod \"nova-cell1-22c4-account-create-update-9tzrv\" (UID: \"034ceb6c-2c10-4059-8f04-007215596cf8\") " pod="openstack/nova-cell1-22c4-account-create-update-9tzrv"
Feb 19 19:59:44 crc kubenswrapper[4813]: I0219 19:59:44.028778 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fzhs\" (UniqueName: \"kubernetes.io/projected/034ceb6c-2c10-4059-8f04-007215596cf8-kube-api-access-8fzhs\") pod \"nova-cell1-22c4-account-create-update-9tzrv\" (UID: \"034ceb6c-2c10-4059-8f04-007215596cf8\") " pod="openstack/nova-cell1-22c4-account-create-update-9tzrv"
Feb 19 19:59:44 crc kubenswrapper[4813]: I0219 19:59:44.123884 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-10bf-account-create-update-zk5c2"
Feb 19 19:59:44 crc kubenswrapper[4813]: I0219 19:59:44.130313 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/034ceb6c-2c10-4059-8f04-007215596cf8-operator-scripts\") pod \"nova-cell1-22c4-account-create-update-9tzrv\" (UID: \"034ceb6c-2c10-4059-8f04-007215596cf8\") " pod="openstack/nova-cell1-22c4-account-create-update-9tzrv"
Feb 19 19:59:44 crc kubenswrapper[4813]: I0219 19:59:44.130418 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fzhs\" (UniqueName: \"kubernetes.io/projected/034ceb6c-2c10-4059-8f04-007215596cf8-kube-api-access-8fzhs\") pod \"nova-cell1-22c4-account-create-update-9tzrv\" (UID: \"034ceb6c-2c10-4059-8f04-007215596cf8\") " pod="openstack/nova-cell1-22c4-account-create-update-9tzrv"
Feb 19 19:59:44 crc kubenswrapper[4813]: I0219 19:59:44.131626 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/034ceb6c-2c10-4059-8f04-007215596cf8-operator-scripts\") pod \"nova-cell1-22c4-account-create-update-9tzrv\" (UID: \"034ceb6c-2c10-4059-8f04-007215596cf8\") " pod="openstack/nova-cell1-22c4-account-create-update-9tzrv"
Feb 19 19:59:44 crc kubenswrapper[4813]: I0219 19:59:44.158065 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fm8jf"]
Feb 19 19:59:44 crc kubenswrapper[4813]: I0219 19:59:44.160109 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fzhs\" (UniqueName: \"kubernetes.io/projected/034ceb6c-2c10-4059-8f04-007215596cf8-kube-api-access-8fzhs\") pod \"nova-cell1-22c4-account-create-update-9tzrv\" (UID: \"034ceb6c-2c10-4059-8f04-007215596cf8\") " pod="openstack/nova-cell1-22c4-account-create-update-9tzrv"
Feb 19 19:59:44 crc kubenswrapper[4813]: I0219 19:59:44.221069 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fm8jf" event={"ID":"73ed1a04-38ca-473b-8d0d-6412fc7fc147","Type":"ContainerStarted","Data":"92eee8eafede5c5ebb13dce156f7c55db61aef8e754d6607dff026f7278f65de"}
Feb 19 19:59:44 crc kubenswrapper[4813]: I0219 19:59:44.254391 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-22c4-account-create-update-9tzrv"
Feb 19 19:59:44 crc kubenswrapper[4813]: I0219 19:59:44.321466 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xg69l"]
Feb 19 19:59:44 crc kubenswrapper[4813]: W0219 19:59:44.330746 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fd7e145_bf0d_469b_880d_4d1086f799e2.slice/crio-72326e089204151d98dee718ab7446bc6d425f397c00c2e44249158bf0681477 WatchSource:0}: Error finding container 72326e089204151d98dee718ab7446bc6d425f397c00c2e44249158bf0681477: Status 404 returned error can't find the container with id 72326e089204151d98dee718ab7446bc6d425f397c00c2e44249158bf0681477
Feb 19 19:59:44 crc kubenswrapper[4813]: I0219 19:59:44.404432 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zzr5k"]
Feb 19 19:59:44 crc kubenswrapper[4813]: W0219 19:59:44.431856 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36727543_174a_48ce_b5cc_0abd04f85e4c.slice/crio-d7e0233d5ebd7d8f8556fc59cefed153fd8650b423e57cb1b504fe27b40340e0 WatchSource:0}: Error finding container d7e0233d5ebd7d8f8556fc59cefed153fd8650b423e57cb1b504fe27b40340e0: Status 404 returned error can't find the container with id d7e0233d5ebd7d8f8556fc59cefed153fd8650b423e57cb1b504fe27b40340e0
Feb 19 19:59:44 crc kubenswrapper[4813]: I0219 19:59:44.509738 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-88c2-account-create-update-687tm"]
Feb 19 19:59:44 crc kubenswrapper[4813]: W0219 19:59:44.517803 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod712b0a56_84d1_4ab3_a0e0_5237a748b43d.slice/crio-ac1dc5a4447fdafc0fbb26ddfecc7b72cf0f4dd71bbe418e3e0be7cf5e2ed47e WatchSource:0}: Error finding container ac1dc5a4447fdafc0fbb26ddfecc7b72cf0f4dd71bbe418e3e0be7cf5e2ed47e: Status 404 returned error can't find the container with id ac1dc5a4447fdafc0fbb26ddfecc7b72cf0f4dd71bbe418e3e0be7cf5e2ed47e
Feb 19 19:59:44 crc kubenswrapper[4813]: I0219 19:59:44.643854 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-10bf-account-create-update-zk5c2"]
Feb 19 19:59:44 crc kubenswrapper[4813]: W0219 19:59:44.650573 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0949f55f_f63e_4af5_804f_57500e6d83f2.slice/crio-4e4275dda32b84dca9885faa03ef60979888fc43b44b4477ff21d2f7e30f1462 WatchSource:0}: Error finding container 4e4275dda32b84dca9885faa03ef60979888fc43b44b4477ff21d2f7e30f1462: Status 404 returned error can't find the container with id 4e4275dda32b84dca9885faa03ef60979888fc43b44b4477ff21d2f7e30f1462
Feb 19 19:59:44 crc kubenswrapper[4813]: I0219 19:59:44.778988 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openstack/nova-cell1-22c4-account-create-update-9tzrv"] Feb 19 19:59:44 crc kubenswrapper[4813]: W0219 19:59:44.803024 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod034ceb6c_2c10_4059_8f04_007215596cf8.slice/crio-556b3caddbd90768b3dafbb64577f126423366245733ff1f79aaff90bb05d4f0 WatchSource:0}: Error finding container 556b3caddbd90768b3dafbb64577f126423366245733ff1f79aaff90bb05d4f0: Status 404 returned error can't find the container with id 556b3caddbd90768b3dafbb64577f126423366245733ff1f79aaff90bb05d4f0 Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.229248 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-22c4-account-create-update-9tzrv" event={"ID":"034ceb6c-2c10-4059-8f04-007215596cf8","Type":"ContainerStarted","Data":"32144c334c41bfbc7c55454ea453f9e12f4f8822dcd1bd5c3d8861983d64a5a4"} Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.229298 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-22c4-account-create-update-9tzrv" event={"ID":"034ceb6c-2c10-4059-8f04-007215596cf8","Type":"ContainerStarted","Data":"556b3caddbd90768b3dafbb64577f126423366245733ff1f79aaff90bb05d4f0"} Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.230794 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-10bf-account-create-update-zk5c2" event={"ID":"0949f55f-f63e-4af5-804f-57500e6d83f2","Type":"ContainerStarted","Data":"a116ef55d81e12fc8e54d567d1c226cb474cbf281bd724af9e2631fc072955f4"} Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.230824 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-10bf-account-create-update-zk5c2" event={"ID":"0949f55f-f63e-4af5-804f-57500e6d83f2","Type":"ContainerStarted","Data":"4e4275dda32b84dca9885faa03ef60979888fc43b44b4477ff21d2f7e30f1462"} Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.232279 4813 
generic.go:334] "Generic (PLEG): container finished" podID="5fd7e145-bf0d-469b-880d-4d1086f799e2" containerID="e061fbe17f425006faf09d648d10e5d3f36e0178e3f5ff42c0530d615b9ca178" exitCode=0 Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.232333 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xg69l" event={"ID":"5fd7e145-bf0d-469b-880d-4d1086f799e2","Type":"ContainerDied","Data":"e061fbe17f425006faf09d648d10e5d3f36e0178e3f5ff42c0530d615b9ca178"} Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.232350 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xg69l" event={"ID":"5fd7e145-bf0d-469b-880d-4d1086f799e2","Type":"ContainerStarted","Data":"72326e089204151d98dee718ab7446bc6d425f397c00c2e44249158bf0681477"} Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.234518 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-88c2-account-create-update-687tm" event={"ID":"712b0a56-84d1-4ab3-a0e0-5237a748b43d","Type":"ContainerStarted","Data":"25a0a85a23ae31c5a5c8f17018717a02e57010f24eb6d499a312e67fc6b0c7f6"} Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.234548 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-88c2-account-create-update-687tm" event={"ID":"712b0a56-84d1-4ab3-a0e0-5237a748b43d","Type":"ContainerStarted","Data":"ac1dc5a4447fdafc0fbb26ddfecc7b72cf0f4dd71bbe418e3e0be7cf5e2ed47e"} Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.236213 4813 generic.go:334] "Generic (PLEG): container finished" podID="36727543-174a-48ce-b5cc-0abd04f85e4c" containerID="c87bf07b391dc431d1211a9ebf4b34646f101df752289bc24e6149a04543dc9e" exitCode=0 Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.236258 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zzr5k" 
event={"ID":"36727543-174a-48ce-b5cc-0abd04f85e4c","Type":"ContainerDied","Data":"c87bf07b391dc431d1211a9ebf4b34646f101df752289bc24e6149a04543dc9e"} Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.236280 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zzr5k" event={"ID":"36727543-174a-48ce-b5cc-0abd04f85e4c","Type":"ContainerStarted","Data":"d7e0233d5ebd7d8f8556fc59cefed153fd8650b423e57cb1b504fe27b40340e0"} Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.237805 4813 generic.go:334] "Generic (PLEG): container finished" podID="73ed1a04-38ca-473b-8d0d-6412fc7fc147" containerID="76471d8a318e103238b4b0bb913015c7a431100e43ff477f3721f5a22275ab58" exitCode=0 Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.237850 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fm8jf" event={"ID":"73ed1a04-38ca-473b-8d0d-6412fc7fc147","Type":"ContainerDied","Data":"76471d8a318e103238b4b0bb913015c7a431100e43ff477f3721f5a22275ab58"} Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.254352 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-22c4-account-create-update-9tzrv" podStartSLOduration=2.2543297239999998 podStartE2EDuration="2.254329724s" podCreationTimestamp="2026-02-19 19:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:59:45.245369128 +0000 UTC m=+5404.470809699" watchObservedRunningTime="2026-02-19 19:59:45.254329724 +0000 UTC m=+5404.479770265" Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.296466 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-10bf-account-create-update-zk5c2" podStartSLOduration=2.296448157 podStartE2EDuration="2.296448157s" podCreationTimestamp="2026-02-19 19:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:59:45.291407412 +0000 UTC m=+5404.516847963" watchObservedRunningTime="2026-02-19 19:59:45.296448157 +0000 UTC m=+5404.521888708" Feb 19 19:59:45 crc kubenswrapper[4813]: I0219 19:59:45.331137 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-88c2-account-create-update-687tm" podStartSLOduration=2.33111347 podStartE2EDuration="2.33111347s" podCreationTimestamp="2026-02-19 19:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:59:45.316056894 +0000 UTC m=+5404.541497445" watchObservedRunningTime="2026-02-19 19:59:45.33111347 +0000 UTC m=+5404.556554011" Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.248247 4813 generic.go:334] "Generic (PLEG): container finished" podID="712b0a56-84d1-4ab3-a0e0-5237a748b43d" containerID="25a0a85a23ae31c5a5c8f17018717a02e57010f24eb6d499a312e67fc6b0c7f6" exitCode=0 Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.248362 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-88c2-account-create-update-687tm" event={"ID":"712b0a56-84d1-4ab3-a0e0-5237a748b43d","Type":"ContainerDied","Data":"25a0a85a23ae31c5a5c8f17018717a02e57010f24eb6d499a312e67fc6b0c7f6"} Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.250134 4813 generic.go:334] "Generic (PLEG): container finished" podID="034ceb6c-2c10-4059-8f04-007215596cf8" containerID="32144c334c41bfbc7c55454ea453f9e12f4f8822dcd1bd5c3d8861983d64a5a4" exitCode=0 Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.250202 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-22c4-account-create-update-9tzrv" event={"ID":"034ceb6c-2c10-4059-8f04-007215596cf8","Type":"ContainerDied","Data":"32144c334c41bfbc7c55454ea453f9e12f4f8822dcd1bd5c3d8861983d64a5a4"} Feb 19 19:59:46 crc 
kubenswrapper[4813]: I0219 19:59:46.251771 4813 generic.go:334] "Generic (PLEG): container finished" podID="0949f55f-f63e-4af5-804f-57500e6d83f2" containerID="a116ef55d81e12fc8e54d567d1c226cb474cbf281bd724af9e2631fc072955f4" exitCode=0 Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.251899 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-10bf-account-create-update-zk5c2" event={"ID":"0949f55f-f63e-4af5-804f-57500e6d83f2","Type":"ContainerDied","Data":"a116ef55d81e12fc8e54d567d1c226cb474cbf281bd724af9e2631fc072955f4"} Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.827933 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zzr5k" Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.834728 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fm8jf" Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.842013 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xg69l" Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.993500 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36727543-174a-48ce-b5cc-0abd04f85e4c-operator-scripts\") pod \"36727543-174a-48ce-b5cc-0abd04f85e4c\" (UID: \"36727543-174a-48ce-b5cc-0abd04f85e4c\") " Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.993582 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73ed1a04-38ca-473b-8d0d-6412fc7fc147-operator-scripts\") pod \"73ed1a04-38ca-473b-8d0d-6412fc7fc147\" (UID: \"73ed1a04-38ca-473b-8d0d-6412fc7fc147\") " Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.993634 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fd7e145-bf0d-469b-880d-4d1086f799e2-operator-scripts\") pod \"5fd7e145-bf0d-469b-880d-4d1086f799e2\" (UID: \"5fd7e145-bf0d-469b-880d-4d1086f799e2\") " Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.993721 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h89pp\" (UniqueName: \"kubernetes.io/projected/5fd7e145-bf0d-469b-880d-4d1086f799e2-kube-api-access-h89pp\") pod \"5fd7e145-bf0d-469b-880d-4d1086f799e2\" (UID: \"5fd7e145-bf0d-469b-880d-4d1086f799e2\") " Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.993864 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq5bd\" (UniqueName: \"kubernetes.io/projected/73ed1a04-38ca-473b-8d0d-6412fc7fc147-kube-api-access-sq5bd\") pod \"73ed1a04-38ca-473b-8d0d-6412fc7fc147\" (UID: \"73ed1a04-38ca-473b-8d0d-6412fc7fc147\") " Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.993892 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-l8bhl\" (UniqueName: \"kubernetes.io/projected/36727543-174a-48ce-b5cc-0abd04f85e4c-kube-api-access-l8bhl\") pod \"36727543-174a-48ce-b5cc-0abd04f85e4c\" (UID: \"36727543-174a-48ce-b5cc-0abd04f85e4c\") " Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.994886 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73ed1a04-38ca-473b-8d0d-6412fc7fc147-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73ed1a04-38ca-473b-8d0d-6412fc7fc147" (UID: "73ed1a04-38ca-473b-8d0d-6412fc7fc147"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.995819 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd7e145-bf0d-469b-880d-4d1086f799e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5fd7e145-bf0d-469b-880d-4d1086f799e2" (UID: "5fd7e145-bf0d-469b-880d-4d1086f799e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:59:46 crc kubenswrapper[4813]: I0219 19:59:46.996052 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36727543-174a-48ce-b5cc-0abd04f85e4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36727543-174a-48ce-b5cc-0abd04f85e4c" (UID: "36727543-174a-48ce-b5cc-0abd04f85e4c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.001982 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd7e145-bf0d-469b-880d-4d1086f799e2-kube-api-access-h89pp" (OuterVolumeSpecName: "kube-api-access-h89pp") pod "5fd7e145-bf0d-469b-880d-4d1086f799e2" (UID: "5fd7e145-bf0d-469b-880d-4d1086f799e2"). 
InnerVolumeSpecName "kube-api-access-h89pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.002488 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36727543-174a-48ce-b5cc-0abd04f85e4c-kube-api-access-l8bhl" (OuterVolumeSpecName: "kube-api-access-l8bhl") pod "36727543-174a-48ce-b5cc-0abd04f85e4c" (UID: "36727543-174a-48ce-b5cc-0abd04f85e4c"). InnerVolumeSpecName "kube-api-access-l8bhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.008283 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ed1a04-38ca-473b-8d0d-6412fc7fc147-kube-api-access-sq5bd" (OuterVolumeSpecName: "kube-api-access-sq5bd") pod "73ed1a04-38ca-473b-8d0d-6412fc7fc147" (UID: "73ed1a04-38ca-473b-8d0d-6412fc7fc147"). InnerVolumeSpecName "kube-api-access-sq5bd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.096117 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36727543-174a-48ce-b5cc-0abd04f85e4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.096152 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73ed1a04-38ca-473b-8d0d-6412fc7fc147-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.096160 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fd7e145-bf0d-469b-880d-4d1086f799e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.096168 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h89pp\" 
(UniqueName: \"kubernetes.io/projected/5fd7e145-bf0d-469b-880d-4d1086f799e2-kube-api-access-h89pp\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.096178 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq5bd\" (UniqueName: \"kubernetes.io/projected/73ed1a04-38ca-473b-8d0d-6412fc7fc147-kube-api-access-sq5bd\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.096186 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8bhl\" (UniqueName: \"kubernetes.io/projected/36727543-174a-48ce-b5cc-0abd04f85e4c-kube-api-access-l8bhl\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.259313 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zzr5k" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.259313 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zzr5k" event={"ID":"36727543-174a-48ce-b5cc-0abd04f85e4c","Type":"ContainerDied","Data":"d7e0233d5ebd7d8f8556fc59cefed153fd8650b423e57cb1b504fe27b40340e0"} Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.259492 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7e0233d5ebd7d8f8556fc59cefed153fd8650b423e57cb1b504fe27b40340e0" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.260741 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fm8jf" event={"ID":"73ed1a04-38ca-473b-8d0d-6412fc7fc147","Type":"ContainerDied","Data":"92eee8eafede5c5ebb13dce156f7c55db61aef8e754d6607dff026f7278f65de"} Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.260788 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92eee8eafede5c5ebb13dce156f7c55db61aef8e754d6607dff026f7278f65de" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 
19:59:47.260788 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fm8jf" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.261967 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xg69l" event={"ID":"5fd7e145-bf0d-469b-880d-4d1086f799e2","Type":"ContainerDied","Data":"72326e089204151d98dee718ab7446bc6d425f397c00c2e44249158bf0681477"} Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.262001 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72326e089204151d98dee718ab7446bc6d425f397c00c2e44249158bf0681477" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.262113 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xg69l" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.551879 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-88c2-account-create-update-687tm" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.619800 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-10bf-account-create-update-zk5c2" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.652120 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-22c4-account-create-update-9tzrv" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.703498 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0949f55f-f63e-4af5-804f-57500e6d83f2-operator-scripts\") pod \"0949f55f-f63e-4af5-804f-57500e6d83f2\" (UID: \"0949f55f-f63e-4af5-804f-57500e6d83f2\") " Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.703577 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/712b0a56-84d1-4ab3-a0e0-5237a748b43d-operator-scripts\") pod \"712b0a56-84d1-4ab3-a0e0-5237a748b43d\" (UID: \"712b0a56-84d1-4ab3-a0e0-5237a748b43d\") " Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.703671 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4s8m\" (UniqueName: \"kubernetes.io/projected/712b0a56-84d1-4ab3-a0e0-5237a748b43d-kube-api-access-n4s8m\") pod \"712b0a56-84d1-4ab3-a0e0-5237a748b43d\" (UID: \"712b0a56-84d1-4ab3-a0e0-5237a748b43d\") " Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.703747 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl7j6\" (UniqueName: \"kubernetes.io/projected/0949f55f-f63e-4af5-804f-57500e6d83f2-kube-api-access-sl7j6\") pod \"0949f55f-f63e-4af5-804f-57500e6d83f2\" (UID: \"0949f55f-f63e-4af5-804f-57500e6d83f2\") " Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.704221 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0949f55f-f63e-4af5-804f-57500e6d83f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0949f55f-f63e-4af5-804f-57500e6d83f2" (UID: "0949f55f-f63e-4af5-804f-57500e6d83f2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.704248 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/712b0a56-84d1-4ab3-a0e0-5237a748b43d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "712b0a56-84d1-4ab3-a0e0-5237a748b43d" (UID: "712b0a56-84d1-4ab3-a0e0-5237a748b43d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.706704 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/712b0a56-84d1-4ab3-a0e0-5237a748b43d-kube-api-access-n4s8m" (OuterVolumeSpecName: "kube-api-access-n4s8m") pod "712b0a56-84d1-4ab3-a0e0-5237a748b43d" (UID: "712b0a56-84d1-4ab3-a0e0-5237a748b43d"). InnerVolumeSpecName "kube-api-access-n4s8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.707061 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0949f55f-f63e-4af5-804f-57500e6d83f2-kube-api-access-sl7j6" (OuterVolumeSpecName: "kube-api-access-sl7j6") pod "0949f55f-f63e-4af5-804f-57500e6d83f2" (UID: "0949f55f-f63e-4af5-804f-57500e6d83f2"). InnerVolumeSpecName "kube-api-access-sl7j6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.805146 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/034ceb6c-2c10-4059-8f04-007215596cf8-operator-scripts\") pod \"034ceb6c-2c10-4059-8f04-007215596cf8\" (UID: \"034ceb6c-2c10-4059-8f04-007215596cf8\") " Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.805643 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fzhs\" (UniqueName: \"kubernetes.io/projected/034ceb6c-2c10-4059-8f04-007215596cf8-kube-api-access-8fzhs\") pod \"034ceb6c-2c10-4059-8f04-007215596cf8\" (UID: \"034ceb6c-2c10-4059-8f04-007215596cf8\") " Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.806109 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/034ceb6c-2c10-4059-8f04-007215596cf8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "034ceb6c-2c10-4059-8f04-007215596cf8" (UID: "034ceb6c-2c10-4059-8f04-007215596cf8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.806203 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/034ceb6c-2c10-4059-8f04-007215596cf8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.806229 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4s8m\" (UniqueName: \"kubernetes.io/projected/712b0a56-84d1-4ab3-a0e0-5237a748b43d-kube-api-access-n4s8m\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.806243 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl7j6\" (UniqueName: \"kubernetes.io/projected/0949f55f-f63e-4af5-804f-57500e6d83f2-kube-api-access-sl7j6\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.806253 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0949f55f-f63e-4af5-804f-57500e6d83f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.806262 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/712b0a56-84d1-4ab3-a0e0-5237a748b43d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.809205 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/034ceb6c-2c10-4059-8f04-007215596cf8-kube-api-access-8fzhs" (OuterVolumeSpecName: "kube-api-access-8fzhs") pod "034ceb6c-2c10-4059-8f04-007215596cf8" (UID: "034ceb6c-2c10-4059-8f04-007215596cf8"). InnerVolumeSpecName "kube-api-access-8fzhs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:59:47 crc kubenswrapper[4813]: I0219 19:59:47.908404 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fzhs\" (UniqueName: \"kubernetes.io/projected/034ceb6c-2c10-4059-8f04-007215596cf8-kube-api-access-8fzhs\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.274407 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-10bf-account-create-update-zk5c2" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.274650 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-10bf-account-create-update-zk5c2" event={"ID":"0949f55f-f63e-4af5-804f-57500e6d83f2","Type":"ContainerDied","Data":"4e4275dda32b84dca9885faa03ef60979888fc43b44b4477ff21d2f7e30f1462"} Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.274704 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e4275dda32b84dca9885faa03ef60979888fc43b44b4477ff21d2f7e30f1462" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.276735 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-88c2-account-create-update-687tm" event={"ID":"712b0a56-84d1-4ab3-a0e0-5237a748b43d","Type":"ContainerDied","Data":"ac1dc5a4447fdafc0fbb26ddfecc7b72cf0f4dd71bbe418e3e0be7cf5e2ed47e"} Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.276769 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac1dc5a4447fdafc0fbb26ddfecc7b72cf0f4dd71bbe418e3e0be7cf5e2ed47e" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.276797 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-88c2-account-create-update-687tm" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.280613 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-22c4-account-create-update-9tzrv" event={"ID":"034ceb6c-2c10-4059-8f04-007215596cf8","Type":"ContainerDied","Data":"556b3caddbd90768b3dafbb64577f126423366245733ff1f79aaff90bb05d4f0"} Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.280661 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="556b3caddbd90768b3dafbb64577f126423366245733ff1f79aaff90bb05d4f0" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.280722 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-22c4-account-create-update-9tzrv" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.836393 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wh5fw"] Feb 19 19:59:48 crc kubenswrapper[4813]: E0219 19:59:48.836829 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd7e145-bf0d-469b-880d-4d1086f799e2" containerName="mariadb-database-create" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.836854 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd7e145-bf0d-469b-880d-4d1086f799e2" containerName="mariadb-database-create" Feb 19 19:59:48 crc kubenswrapper[4813]: E0219 19:59:48.836870 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0949f55f-f63e-4af5-804f-57500e6d83f2" containerName="mariadb-account-create-update" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.836878 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0949f55f-f63e-4af5-804f-57500e6d83f2" containerName="mariadb-account-create-update" Feb 19 19:59:48 crc kubenswrapper[4813]: E0219 19:59:48.836890 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="712b0a56-84d1-4ab3-a0e0-5237a748b43d" containerName="mariadb-account-create-update" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.836897 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="712b0a56-84d1-4ab3-a0e0-5237a748b43d" containerName="mariadb-account-create-update" Feb 19 19:59:48 crc kubenswrapper[4813]: E0219 19:59:48.836915 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="034ceb6c-2c10-4059-8f04-007215596cf8" containerName="mariadb-account-create-update" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.836922 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="034ceb6c-2c10-4059-8f04-007215596cf8" containerName="mariadb-account-create-update" Feb 19 19:59:48 crc kubenswrapper[4813]: E0219 19:59:48.836942 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ed1a04-38ca-473b-8d0d-6412fc7fc147" containerName="mariadb-database-create" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.837035 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ed1a04-38ca-473b-8d0d-6412fc7fc147" containerName="mariadb-database-create" Feb 19 19:59:48 crc kubenswrapper[4813]: E0219 19:59:48.837059 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36727543-174a-48ce-b5cc-0abd04f85e4c" containerName="mariadb-database-create" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.837066 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="36727543-174a-48ce-b5cc-0abd04f85e4c" containerName="mariadb-database-create" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.837299 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0949f55f-f63e-4af5-804f-57500e6d83f2" containerName="mariadb-account-create-update" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.837330 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd7e145-bf0d-469b-880d-4d1086f799e2" containerName="mariadb-database-create" Feb 19 19:59:48 crc 
kubenswrapper[4813]: I0219 19:59:48.837359 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="712b0a56-84d1-4ab3-a0e0-5237a748b43d" containerName="mariadb-account-create-update" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.837373 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ed1a04-38ca-473b-8d0d-6412fc7fc147" containerName="mariadb-database-create" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.837387 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="034ceb6c-2c10-4059-8f04-007215596cf8" containerName="mariadb-account-create-update" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.837398 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="36727543-174a-48ce-b5cc-0abd04f85e4c" containerName="mariadb-database-create" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.838168 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.841308 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.841729 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5jxw6" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.841892 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.865916 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wh5fw"] Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.928225 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dck9j\" (UniqueName: 
\"kubernetes.io/projected/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-kube-api-access-dck9j\") pod \"nova-cell0-conductor-db-sync-wh5fw\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.928344 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-scripts\") pod \"nova-cell0-conductor-db-sync-wh5fw\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.928442 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-config-data\") pod \"nova-cell0-conductor-db-sync-wh5fw\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:48 crc kubenswrapper[4813]: I0219 19:59:48.928516 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wh5fw\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:49 crc kubenswrapper[4813]: I0219 19:59:49.030072 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wh5fw\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:49 crc kubenswrapper[4813]: I0219 19:59:49.030164 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dck9j\" (UniqueName: \"kubernetes.io/projected/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-kube-api-access-dck9j\") pod \"nova-cell0-conductor-db-sync-wh5fw\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:49 crc kubenswrapper[4813]: I0219 19:59:49.030243 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-scripts\") pod \"nova-cell0-conductor-db-sync-wh5fw\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:49 crc kubenswrapper[4813]: I0219 19:59:49.030354 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-config-data\") pod \"nova-cell0-conductor-db-sync-wh5fw\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:49 crc kubenswrapper[4813]: I0219 19:59:49.034995 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-scripts\") pod \"nova-cell0-conductor-db-sync-wh5fw\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:49 crc kubenswrapper[4813]: I0219 19:59:49.036190 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-config-data\") pod \"nova-cell0-conductor-db-sync-wh5fw\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:49 crc kubenswrapper[4813]: I0219 19:59:49.037665 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wh5fw\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:49 crc kubenswrapper[4813]: I0219 19:59:49.047844 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dck9j\" (UniqueName: \"kubernetes.io/projected/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-kube-api-access-dck9j\") pod \"nova-cell0-conductor-db-sync-wh5fw\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:49 crc kubenswrapper[4813]: I0219 19:59:49.155907 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:49 crc kubenswrapper[4813]: I0219 19:59:49.599698 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wh5fw"] Feb 19 19:59:50 crc kubenswrapper[4813]: I0219 19:59:50.300436 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wh5fw" event={"ID":"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f","Type":"ContainerStarted","Data":"c1d959e0038654d7c2ced3e92eede4bf9e67b754f1b4b0f4b620bf613955e28f"} Feb 19 19:59:50 crc kubenswrapper[4813]: I0219 19:59:50.300872 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wh5fw" event={"ID":"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f","Type":"ContainerStarted","Data":"3ebff6f0b4d5d194be6eb7dcb52eb2f45bb25dac9fa7c6ffea1ca026404a9d07"} Feb 19 19:59:52 crc kubenswrapper[4813]: I0219 19:59:52.494634 4813 scope.go:117] "RemoveContainer" containerID="06e2c4b9404be97461c29ff744d68e485c3eee746844dbea733787dc2672202c" Feb 19 19:59:55 crc kubenswrapper[4813]: I0219 19:59:55.358787 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="12c13eaf-400f-4c9b-b8ae-b8c3ed93617f" containerID="c1d959e0038654d7c2ced3e92eede4bf9e67b754f1b4b0f4b620bf613955e28f" exitCode=0 Feb 19 19:59:55 crc kubenswrapper[4813]: I0219 19:59:55.358841 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wh5fw" event={"ID":"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f","Type":"ContainerDied","Data":"c1d959e0038654d7c2ced3e92eede4bf9e67b754f1b4b0f4b620bf613955e28f"} Feb 19 19:59:56 crc kubenswrapper[4813]: I0219 19:59:56.680169 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:56 crc kubenswrapper[4813]: I0219 19:59:56.766290 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dck9j\" (UniqueName: \"kubernetes.io/projected/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-kube-api-access-dck9j\") pod \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " Feb 19 19:59:56 crc kubenswrapper[4813]: I0219 19:59:56.766394 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-scripts\") pod \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " Feb 19 19:59:56 crc kubenswrapper[4813]: I0219 19:59:56.766548 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-config-data\") pod \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " Feb 19 19:59:56 crc kubenswrapper[4813]: I0219 19:59:56.766748 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-combined-ca-bundle\") pod 
\"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\" (UID: \"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f\") " Feb 19 19:59:56 crc kubenswrapper[4813]: I0219 19:59:56.772829 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-scripts" (OuterVolumeSpecName: "scripts") pod "12c13eaf-400f-4c9b-b8ae-b8c3ed93617f" (UID: "12c13eaf-400f-4c9b-b8ae-b8c3ed93617f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:59:56 crc kubenswrapper[4813]: I0219 19:59:56.775690 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-kube-api-access-dck9j" (OuterVolumeSpecName: "kube-api-access-dck9j") pod "12c13eaf-400f-4c9b-b8ae-b8c3ed93617f" (UID: "12c13eaf-400f-4c9b-b8ae-b8c3ed93617f"). InnerVolumeSpecName "kube-api-access-dck9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:59:56 crc kubenswrapper[4813]: I0219 19:59:56.798006 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-config-data" (OuterVolumeSpecName: "config-data") pod "12c13eaf-400f-4c9b-b8ae-b8c3ed93617f" (UID: "12c13eaf-400f-4c9b-b8ae-b8c3ed93617f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:59:56 crc kubenswrapper[4813]: I0219 19:59:56.798148 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12c13eaf-400f-4c9b-b8ae-b8c3ed93617f" (UID: "12c13eaf-400f-4c9b-b8ae-b8c3ed93617f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:59:56 crc kubenswrapper[4813]: I0219 19:59:56.869153 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dck9j\" (UniqueName: \"kubernetes.io/projected/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-kube-api-access-dck9j\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:56 crc kubenswrapper[4813]: I0219 19:59:56.869183 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:56 crc kubenswrapper[4813]: I0219 19:59:56.869193 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:56 crc kubenswrapper[4813]: I0219 19:59:56.869201 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.375753 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wh5fw" event={"ID":"12c13eaf-400f-4c9b-b8ae-b8c3ed93617f","Type":"ContainerDied","Data":"3ebff6f0b4d5d194be6eb7dcb52eb2f45bb25dac9fa7c6ffea1ca026404a9d07"} Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.375794 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ebff6f0b4d5d194be6eb7dcb52eb2f45bb25dac9fa7c6ffea1ca026404a9d07" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.375824 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wh5fw" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.460365 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 19:59:57 crc kubenswrapper[4813]: E0219 19:59:57.460699 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c13eaf-400f-4c9b-b8ae-b8c3ed93617f" containerName="nova-cell0-conductor-db-sync" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.460713 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c13eaf-400f-4c9b-b8ae-b8c3ed93617f" containerName="nova-cell0-conductor-db-sync" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.460875 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="12c13eaf-400f-4c9b-b8ae-b8c3ed93617f" containerName="nova-cell0-conductor-db-sync" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.461429 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.463375 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5jxw6" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.463478 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.484795 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.580245 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d84f94f-cccd-4607-a769-24bfe4404005-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2d84f94f-cccd-4607-a769-24bfe4404005\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:59:57 crc kubenswrapper[4813]: 
I0219 19:59:57.580307 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdwzz\" (UniqueName: \"kubernetes.io/projected/2d84f94f-cccd-4607-a769-24bfe4404005-kube-api-access-wdwzz\") pod \"nova-cell0-conductor-0\" (UID: \"2d84f94f-cccd-4607-a769-24bfe4404005\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.580405 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d84f94f-cccd-4607-a769-24bfe4404005-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2d84f94f-cccd-4607-a769-24bfe4404005\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.682338 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d84f94f-cccd-4607-a769-24bfe4404005-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2d84f94f-cccd-4607-a769-24bfe4404005\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.682449 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdwzz\" (UniqueName: \"kubernetes.io/projected/2d84f94f-cccd-4607-a769-24bfe4404005-kube-api-access-wdwzz\") pod \"nova-cell0-conductor-0\" (UID: \"2d84f94f-cccd-4607-a769-24bfe4404005\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.682667 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d84f94f-cccd-4607-a769-24bfe4404005-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2d84f94f-cccd-4607-a769-24bfe4404005\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.687383 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d84f94f-cccd-4607-a769-24bfe4404005-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2d84f94f-cccd-4607-a769-24bfe4404005\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.689384 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d84f94f-cccd-4607-a769-24bfe4404005-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2d84f94f-cccd-4607-a769-24bfe4404005\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.708686 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdwzz\" (UniqueName: \"kubernetes.io/projected/2d84f94f-cccd-4607-a769-24bfe4404005-kube-api-access-wdwzz\") pod \"nova-cell0-conductor-0\" (UID: \"2d84f94f-cccd-4607-a769-24bfe4404005\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:59:57 crc kubenswrapper[4813]: I0219 19:59:57.777798 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 19:59:58 crc kubenswrapper[4813]: I0219 19:59:58.249366 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 19:59:58 crc kubenswrapper[4813]: W0219 19:59:58.253633 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d84f94f_cccd_4607_a769_24bfe4404005.slice/crio-c43144d7ab008d2f55dc61de2c2a82fe4fd242ec8866df3f6b9a9ccce5571b29 WatchSource:0}: Error finding container c43144d7ab008d2f55dc61de2c2a82fe4fd242ec8866df3f6b9a9ccce5571b29: Status 404 returned error can't find the container with id c43144d7ab008d2f55dc61de2c2a82fe4fd242ec8866df3f6b9a9ccce5571b29 Feb 19 19:59:58 crc kubenswrapper[4813]: I0219 19:59:58.389188 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2d84f94f-cccd-4607-a769-24bfe4404005","Type":"ContainerStarted","Data":"c43144d7ab008d2f55dc61de2c2a82fe4fd242ec8866df3f6b9a9ccce5571b29"} Feb 19 19:59:59 crc kubenswrapper[4813]: I0219 19:59:59.401936 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2d84f94f-cccd-4607-a769-24bfe4404005","Type":"ContainerStarted","Data":"822bba2cca4c4c1c10967652651427d926b2359d6c72b40183a6aa5a1efd92d0"} Feb 19 19:59:59 crc kubenswrapper[4813]: I0219 19:59:59.402238 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 19:59:59 crc kubenswrapper[4813]: I0219 19:59:59.427309 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.427291401 podStartE2EDuration="2.427291401s" podCreationTimestamp="2026-02-19 19:59:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
19:59:59.423559605 +0000 UTC m=+5418.649000186" watchObservedRunningTime="2026-02-19 19:59:59.427291401 +0000 UTC m=+5418.652731942" Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.140408 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz"] Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.142528 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.146290 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.146507 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.155502 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz"] Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.260034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49f6c823-7d5e-4b6b-802f-30f2161fff59-config-volume\") pod \"collect-profiles-29525520-wb9nz\" (UID: \"49f6c823-7d5e-4b6b-802f-30f2161fff59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.260083 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kskzs\" (UniqueName: \"kubernetes.io/projected/49f6c823-7d5e-4b6b-802f-30f2161fff59-kube-api-access-kskzs\") pod \"collect-profiles-29525520-wb9nz\" (UID: \"49f6c823-7d5e-4b6b-802f-30f2161fff59\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.260186 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49f6c823-7d5e-4b6b-802f-30f2161fff59-secret-volume\") pod \"collect-profiles-29525520-wb9nz\" (UID: \"49f6c823-7d5e-4b6b-802f-30f2161fff59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.361343 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49f6c823-7d5e-4b6b-802f-30f2161fff59-config-volume\") pod \"collect-profiles-29525520-wb9nz\" (UID: \"49f6c823-7d5e-4b6b-802f-30f2161fff59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.361598 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kskzs\" (UniqueName: \"kubernetes.io/projected/49f6c823-7d5e-4b6b-802f-30f2161fff59-kube-api-access-kskzs\") pod \"collect-profiles-29525520-wb9nz\" (UID: \"49f6c823-7d5e-4b6b-802f-30f2161fff59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.361810 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49f6c823-7d5e-4b6b-802f-30f2161fff59-secret-volume\") pod \"collect-profiles-29525520-wb9nz\" (UID: \"49f6c823-7d5e-4b6b-802f-30f2161fff59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.362662 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/49f6c823-7d5e-4b6b-802f-30f2161fff59-config-volume\") pod \"collect-profiles-29525520-wb9nz\" (UID: \"49f6c823-7d5e-4b6b-802f-30f2161fff59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.372613 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49f6c823-7d5e-4b6b-802f-30f2161fff59-secret-volume\") pod \"collect-profiles-29525520-wb9nz\" (UID: \"49f6c823-7d5e-4b6b-802f-30f2161fff59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.382181 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kskzs\" (UniqueName: \"kubernetes.io/projected/49f6c823-7d5e-4b6b-802f-30f2161fff59-kube-api-access-kskzs\") pod \"collect-profiles-29525520-wb9nz\" (UID: \"49f6c823-7d5e-4b6b-802f-30f2161fff59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.481168 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" Feb 19 20:00:00 crc kubenswrapper[4813]: I0219 20:00:00.903565 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz"] Feb 19 20:00:00 crc kubenswrapper[4813]: W0219 20:00:00.905154 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49f6c823_7d5e_4b6b_802f_30f2161fff59.slice/crio-c926cce737f75251b394477cb10d2fd12223968fee7518d6250456485ce8569e WatchSource:0}: Error finding container c926cce737f75251b394477cb10d2fd12223968fee7518d6250456485ce8569e: Status 404 returned error can't find the container with id c926cce737f75251b394477cb10d2fd12223968fee7518d6250456485ce8569e Feb 19 20:00:01 crc kubenswrapper[4813]: I0219 20:00:01.419786 4813 generic.go:334] "Generic (PLEG): container finished" podID="49f6c823-7d5e-4b6b-802f-30f2161fff59" containerID="5fed16dab7ca501b54e79f8999ed126a67465d45677b3b7bf0d2900b7e574a0a" exitCode=0 Feb 19 20:00:01 crc kubenswrapper[4813]: I0219 20:00:01.420185 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" event={"ID":"49f6c823-7d5e-4b6b-802f-30f2161fff59","Type":"ContainerDied","Data":"5fed16dab7ca501b54e79f8999ed126a67465d45677b3b7bf0d2900b7e574a0a"} Feb 19 20:00:01 crc kubenswrapper[4813]: I0219 20:00:01.420288 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" event={"ID":"49f6c823-7d5e-4b6b-802f-30f2161fff59","Type":"ContainerStarted","Data":"c926cce737f75251b394477cb10d2fd12223968fee7518d6250456485ce8569e"} Feb 19 20:00:02 crc kubenswrapper[4813]: I0219 20:00:02.735927 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" Feb 19 20:00:02 crc kubenswrapper[4813]: I0219 20:00:02.803765 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kskzs\" (UniqueName: \"kubernetes.io/projected/49f6c823-7d5e-4b6b-802f-30f2161fff59-kube-api-access-kskzs\") pod \"49f6c823-7d5e-4b6b-802f-30f2161fff59\" (UID: \"49f6c823-7d5e-4b6b-802f-30f2161fff59\") " Feb 19 20:00:02 crc kubenswrapper[4813]: I0219 20:00:02.803933 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49f6c823-7d5e-4b6b-802f-30f2161fff59-config-volume\") pod \"49f6c823-7d5e-4b6b-802f-30f2161fff59\" (UID: \"49f6c823-7d5e-4b6b-802f-30f2161fff59\") " Feb 19 20:00:02 crc kubenswrapper[4813]: I0219 20:00:02.803996 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49f6c823-7d5e-4b6b-802f-30f2161fff59-secret-volume\") pod \"49f6c823-7d5e-4b6b-802f-30f2161fff59\" (UID: \"49f6c823-7d5e-4b6b-802f-30f2161fff59\") " Feb 19 20:00:02 crc kubenswrapper[4813]: I0219 20:00:02.804762 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49f6c823-7d5e-4b6b-802f-30f2161fff59-config-volume" (OuterVolumeSpecName: "config-volume") pod "49f6c823-7d5e-4b6b-802f-30f2161fff59" (UID: "49f6c823-7d5e-4b6b-802f-30f2161fff59"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:00:02 crc kubenswrapper[4813]: I0219 20:00:02.822318 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f6c823-7d5e-4b6b-802f-30f2161fff59-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "49f6c823-7d5e-4b6b-802f-30f2161fff59" (UID: "49f6c823-7d5e-4b6b-802f-30f2161fff59"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:02 crc kubenswrapper[4813]: I0219 20:00:02.826754 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f6c823-7d5e-4b6b-802f-30f2161fff59-kube-api-access-kskzs" (OuterVolumeSpecName: "kube-api-access-kskzs") pod "49f6c823-7d5e-4b6b-802f-30f2161fff59" (UID: "49f6c823-7d5e-4b6b-802f-30f2161fff59"). InnerVolumeSpecName "kube-api-access-kskzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:00:02 crc kubenswrapper[4813]: I0219 20:00:02.906201 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49f6c823-7d5e-4b6b-802f-30f2161fff59-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:02 crc kubenswrapper[4813]: I0219 20:00:02.906243 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49f6c823-7d5e-4b6b-802f-30f2161fff59-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:02 crc kubenswrapper[4813]: I0219 20:00:02.906257 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kskzs\" (UniqueName: \"kubernetes.io/projected/49f6c823-7d5e-4b6b-802f-30f2161fff59-kube-api-access-kskzs\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:03 crc kubenswrapper[4813]: I0219 20:00:03.437603 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" event={"ID":"49f6c823-7d5e-4b6b-802f-30f2161fff59","Type":"ContainerDied","Data":"c926cce737f75251b394477cb10d2fd12223968fee7518d6250456485ce8569e"} Feb 19 20:00:03 crc kubenswrapper[4813]: I0219 20:00:03.437915 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c926cce737f75251b394477cb10d2fd12223968fee7518d6250456485ce8569e" Feb 19 20:00:03 crc kubenswrapper[4813]: I0219 20:00:03.437697 4813 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz" Feb 19 20:00:03 crc kubenswrapper[4813]: I0219 20:00:03.804610 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p"] Feb 19 20:00:03 crc kubenswrapper[4813]: I0219 20:00:03.813510 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525475-zkg2p"] Feb 19 20:00:05 crc kubenswrapper[4813]: I0219 20:00:05.487231 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ec683d-c7fc-445f-bf06-7880775e37ac" path="/var/lib/kubelet/pods/49ec683d-c7fc-445f-bf06-7880775e37ac/volumes" Feb 19 20:00:07 crc kubenswrapper[4813]: I0219 20:00:07.813173 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.470817 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-x6wmz"] Feb 19 20:00:08 crc kubenswrapper[4813]: E0219 20:00:08.471299 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f6c823-7d5e-4b6b-802f-30f2161fff59" containerName="collect-profiles" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.471322 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f6c823-7d5e-4b6b-802f-30f2161fff59" containerName="collect-profiles" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.471512 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f6c823-7d5e-4b6b-802f-30f2161fff59" containerName="collect-profiles" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.472361 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x6wmz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.477194 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.477594 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.484931 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x6wmz"] Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.493759 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x6wmz\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") " pod="openstack/nova-cell0-cell-mapping-x6wmz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.493804 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-config-data\") pod \"nova-cell0-cell-mapping-x6wmz\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") " pod="openstack/nova-cell0-cell-mapping-x6wmz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.493893 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-scripts\") pod \"nova-cell0-cell-mapping-x6wmz\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") " pod="openstack/nova-cell0-cell-mapping-x6wmz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.494000 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c76ll\" (UniqueName: 
\"kubernetes.io/projected/ffe08647-6b48-488d-86bd-84de78e5c05c-kube-api-access-c76ll\") pod \"nova-cell0-cell-mapping-x6wmz\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") " pod="openstack/nova-cell0-cell-mapping-x6wmz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.595150 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c76ll\" (UniqueName: \"kubernetes.io/projected/ffe08647-6b48-488d-86bd-84de78e5c05c-kube-api-access-c76ll\") pod \"nova-cell0-cell-mapping-x6wmz\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") " pod="openstack/nova-cell0-cell-mapping-x6wmz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.595226 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x6wmz\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") " pod="openstack/nova-cell0-cell-mapping-x6wmz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.595248 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-config-data\") pod \"nova-cell0-cell-mapping-x6wmz\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") " pod="openstack/nova-cell0-cell-mapping-x6wmz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.595318 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-scripts\") pod \"nova-cell0-cell-mapping-x6wmz\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") " pod="openstack/nova-cell0-cell-mapping-x6wmz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.602881 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x6wmz\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") " pod="openstack/nova-cell0-cell-mapping-x6wmz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.611641 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-scripts\") pod \"nova-cell0-cell-mapping-x6wmz\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") " pod="openstack/nova-cell0-cell-mapping-x6wmz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.617333 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-config-data\") pod \"nova-cell0-cell-mapping-x6wmz\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") " pod="openstack/nova-cell0-cell-mapping-x6wmz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.623871 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.643319 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.648402 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.661544 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.663076 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.664818 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.666205 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c76ll\" (UniqueName: \"kubernetes.io/projected/ffe08647-6b48-488d-86bd-84de78e5c05c-kube-api-access-c76ll\") pod \"nova-cell0-cell-mapping-x6wmz\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") " pod="openstack/nova-cell0-cell-mapping-x6wmz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.680753 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.693873 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.696252 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") " pod="openstack/nova-metadata-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.696436 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-logs\") pod \"nova-metadata-0\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") " pod="openstack/nova-metadata-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.696548 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-config-data\") pod \"nova-api-0\" (UID: 
\"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") " pod="openstack/nova-api-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.696689 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6w29\" (UniqueName: \"kubernetes.io/projected/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-kube-api-access-x6w29\") pod \"nova-metadata-0\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") " pod="openstack/nova-metadata-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.696812 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-config-data\") pod \"nova-metadata-0\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") " pod="openstack/nova-metadata-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.697033 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") " pod="openstack/nova-api-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.697160 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-logs\") pod \"nova-api-0\" (UID: \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") " pod="openstack/nova-api-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.697256 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg7kb\" (UniqueName: \"kubernetes.io/projected/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-kube-api-access-hg7kb\") pod \"nova-api-0\" (UID: \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") " pod="openstack/nova-api-0" Feb 19 20:00:08 crc kubenswrapper[4813]: 
I0219 20:00:08.720723 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.721998 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.725880 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.736917 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.800385 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e42b2871-2bbb-4439-9be6-ca3b594ce8f7\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.800445 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-config-data\") pod \"nova-metadata-0\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") " pod="openstack/nova-metadata-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.800479 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-config-data\") pod \"nova-scheduler-0\" (UID: \"e42b2871-2bbb-4439-9be6-ca3b594ce8f7\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.800511 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") " pod="openstack/nova-api-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.800541 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-logs\") pod \"nova-api-0\" (UID: \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") " pod="openstack/nova-api-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.800587 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpkd8\" (UniqueName: \"kubernetes.io/projected/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-kube-api-access-gpkd8\") pod \"nova-scheduler-0\" (UID: \"e42b2871-2bbb-4439-9be6-ca3b594ce8f7\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.800627 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg7kb\" (UniqueName: \"kubernetes.io/projected/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-kube-api-access-hg7kb\") pod \"nova-api-0\" (UID: \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") " pod="openstack/nova-api-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.800669 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") " pod="openstack/nova-metadata-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.800721 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-logs\") pod \"nova-metadata-0\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") " pod="openstack/nova-metadata-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.800748 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-config-data\") pod \"nova-api-0\" (UID: \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") " pod="openstack/nova-api-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.800820 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6w29\" (UniqueName: \"kubernetes.io/projected/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-kube-api-access-x6w29\") pod \"nova-metadata-0\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") " pod="openstack/nova-metadata-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.805977 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-logs\") pod \"nova-api-0\" (UID: \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") " pod="openstack/nova-api-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.806539 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-logs\") pod \"nova-metadata-0\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") " pod="openstack/nova-metadata-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.809157 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-config-data\") pod \"nova-metadata-0\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") " pod="openstack/nova-metadata-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.811592 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") " pod="openstack/nova-metadata-0" Feb 19 
20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.813939 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-config-data\") pod \"nova-api-0\" (UID: \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") " pod="openstack/nova-api-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.821284 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6464f94485-cmrjz"] Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.823298 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.827752 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6w29\" (UniqueName: \"kubernetes.io/projected/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-kube-api-access-x6w29\") pod \"nova-metadata-0\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") " pod="openstack/nova-metadata-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.858039 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") " pod="openstack/nova-api-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.858757 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.859093 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg7kb\" (UniqueName: \"kubernetes.io/projected/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-kube-api-access-hg7kb\") pod \"nova-api-0\" (UID: \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") " pod="openstack/nova-api-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.859850 4813 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.866371 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.869343 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x6wmz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.876556 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6464f94485-cmrjz"] Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.901757 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-config\") pod \"dnsmasq-dns-6464f94485-cmrjz\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.901808 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-ovsdbserver-sb\") pod \"dnsmasq-dns-6464f94485-cmrjz\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.901836 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-dns-svc\") pod \"dnsmasq-dns-6464f94485-cmrjz\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.901867 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e42b2871-2bbb-4439-9be6-ca3b594ce8f7\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.901902 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-config-data\") pod \"nova-scheduler-0\" (UID: \"e42b2871-2bbb-4439-9be6-ca3b594ce8f7\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.901920 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86684df4-7521-416c-8599-2c8db67240c3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"86684df4-7521-416c-8599-2c8db67240c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.901959 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpkd8\" (UniqueName: \"kubernetes.io/projected/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-kube-api-access-gpkd8\") pod \"nova-scheduler-0\" (UID: \"e42b2871-2bbb-4439-9be6-ca3b594ce8f7\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.901978 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvxdm\" (UniqueName: \"kubernetes.io/projected/86684df4-7521-416c-8599-2c8db67240c3-kube-api-access-vvxdm\") pod \"nova-cell1-novncproxy-0\" (UID: \"86684df4-7521-416c-8599-2c8db67240c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.902008 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/86684df4-7521-416c-8599-2c8db67240c3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"86684df4-7521-416c-8599-2c8db67240c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.902031 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-ovsdbserver-nb\") pod \"dnsmasq-dns-6464f94485-cmrjz\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.902068 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bt78\" (UniqueName: \"kubernetes.io/projected/333f58de-9f17-415d-9667-3f0a8b1d0bae-kube-api-access-9bt78\") pod \"dnsmasq-dns-6464f94485-cmrjz\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.903488 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.923918 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-config-data\") pod \"nova-scheduler-0\" (UID: \"e42b2871-2bbb-4439-9be6-ca3b594ce8f7\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.927179 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e42b2871-2bbb-4439-9be6-ca3b594ce8f7\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.936453 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gpkd8\" (UniqueName: \"kubernetes.io/projected/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-kube-api-access-gpkd8\") pod \"nova-scheduler-0\" (UID: \"e42b2871-2bbb-4439-9be6-ca3b594ce8f7\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:08 crc kubenswrapper[4813]: I0219 20:00:08.980102 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.002906 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86684df4-7521-416c-8599-2c8db67240c3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"86684df4-7521-416c-8599-2c8db67240c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.002974 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvxdm\" (UniqueName: \"kubernetes.io/projected/86684df4-7521-416c-8599-2c8db67240c3-kube-api-access-vvxdm\") pod \"nova-cell1-novncproxy-0\" (UID: \"86684df4-7521-416c-8599-2c8db67240c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.003002 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86684df4-7521-416c-8599-2c8db67240c3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"86684df4-7521-416c-8599-2c8db67240c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.003025 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-ovsdbserver-nb\") pod \"dnsmasq-dns-6464f94485-cmrjz\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:09 
crc kubenswrapper[4813]: I0219 20:00:09.003063 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bt78\" (UniqueName: \"kubernetes.io/projected/333f58de-9f17-415d-9667-3f0a8b1d0bae-kube-api-access-9bt78\") pod \"dnsmasq-dns-6464f94485-cmrjz\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.003101 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-config\") pod \"dnsmasq-dns-6464f94485-cmrjz\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.003125 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-ovsdbserver-sb\") pod \"dnsmasq-dns-6464f94485-cmrjz\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.003149 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-dns-svc\") pod \"dnsmasq-dns-6464f94485-cmrjz\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.003910 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-dns-svc\") pod \"dnsmasq-dns-6464f94485-cmrjz\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.005226 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-ovsdbserver-nb\") pod \"dnsmasq-dns-6464f94485-cmrjz\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.005384 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-config\") pod \"dnsmasq-dns-6464f94485-cmrjz\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.005745 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-ovsdbserver-sb\") pod \"dnsmasq-dns-6464f94485-cmrjz\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.007466 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86684df4-7521-416c-8599-2c8db67240c3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"86684df4-7521-416c-8599-2c8db67240c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.008036 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86684df4-7521-416c-8599-2c8db67240c3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"86684df4-7521-416c-8599-2c8db67240c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.022013 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvxdm\" (UniqueName: 
\"kubernetes.io/projected/86684df4-7521-416c-8599-2c8db67240c3-kube-api-access-vvxdm\") pod \"nova-cell1-novncproxy-0\" (UID: \"86684df4-7521-416c-8599-2c8db67240c3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.023888 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bt78\" (UniqueName: \"kubernetes.io/projected/333f58de-9f17-415d-9667-3f0a8b1d0bae-kube-api-access-9bt78\") pod \"dnsmasq-dns-6464f94485-cmrjz\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.045938 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.081603 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.296795 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.302925 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.518517 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x6wmz"] Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.594281 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x6wmz" event={"ID":"ffe08647-6b48-488d-86bd-84de78e5c05c","Type":"ContainerStarted","Data":"6fcfee60dc1f485364be0cbc6a947004e043895fd9a1284700160e4c464071b0"} Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.600144 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cqkk6"] Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.601371 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.603931 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.604118 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.611121 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cqkk6"] Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.616135 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whb4c\" (UniqueName: \"kubernetes.io/projected/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-kube-api-access-whb4c\") pod \"nova-cell1-conductor-db-sync-cqkk6\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.616252 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-scripts\") pod \"nova-cell1-conductor-db-sync-cqkk6\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.616301 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-config-data\") pod \"nova-cell1-conductor-db-sync-cqkk6\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.616375 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cqkk6\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.674128 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.716035 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.718299 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cqkk6\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.718404 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-whb4c\" (UniqueName: \"kubernetes.io/projected/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-kube-api-access-whb4c\") pod \"nova-cell1-conductor-db-sync-cqkk6\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.718456 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-scripts\") pod \"nova-cell1-conductor-db-sync-cqkk6\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.718505 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-config-data\") pod \"nova-cell1-conductor-db-sync-cqkk6\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.726390 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.728356 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-scripts\") pod \"nova-cell1-conductor-db-sync-cqkk6\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.728413 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-config-data\") pod \"nova-cell1-conductor-db-sync-cqkk6\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 
20:00:09.729084 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cqkk6\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.742543 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whb4c\" (UniqueName: \"kubernetes.io/projected/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-kube-api-access-whb4c\") pod \"nova-cell1-conductor-db-sync-cqkk6\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.960409 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6464f94485-cmrjz"] Feb 19 20:00:09 crc kubenswrapper[4813]: W0219 20:00:09.976272 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod333f58de_9f17_415d_9667_3f0a8b1d0bae.slice/crio-b0365dbf2a85b51892292d9bfcdb066fd1d66c1b32415ed86e56fee4b645781c WatchSource:0}: Error finding container b0365dbf2a85b51892292d9bfcdb066fd1d66c1b32415ed86e56fee4b645781c: Status 404 returned error can't find the container with id b0365dbf2a85b51892292d9bfcdb066fd1d66c1b32415ed86e56fee4b645781c Feb 19 20:00:09 crc kubenswrapper[4813]: I0219 20:00:09.976760 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 20:00:09 crc kubenswrapper[4813]: W0219 20:00:09.977519 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86684df4_7521_416c_8599_2c8db67240c3.slice/crio-4f7da98c01eadb2450c594c5d9b7349b4d3092eab2f7cdfcfcfc11ceac90a16c WatchSource:0}: Error finding container 
4f7da98c01eadb2450c594c5d9b7349b4d3092eab2f7cdfcfcfc11ceac90a16c: Status 404 returned error can't find the container with id 4f7da98c01eadb2450c594c5d9b7349b4d3092eab2f7cdfcfcfc11ceac90a16c Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.033507 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.513807 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cqkk6"] Feb 19 20:00:10 crc kubenswrapper[4813]: W0219 20:00:10.518132 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfa00bea_2bcd_4843_ba6a_2e2b4070928d.slice/crio-51927e9f40f749bd5af6b438bc6f67b40d48d7ccb537f39f46fe6f8912898508 WatchSource:0}: Error finding container 51927e9f40f749bd5af6b438bc6f67b40d48d7ccb537f39f46fe6f8912898508: Status 404 returned error can't find the container with id 51927e9f40f749bd5af6b438bc6f67b40d48d7ccb537f39f46fe6f8912898508 Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.603799 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cqkk6" event={"ID":"dfa00bea-2bcd-4843-ba6a-2e2b4070928d","Type":"ContainerStarted","Data":"51927e9f40f749bd5af6b438bc6f67b40d48d7ccb537f39f46fe6f8912898508"} Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.605156 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e42b2871-2bbb-4439-9be6-ca3b594ce8f7","Type":"ContainerStarted","Data":"38ec3f52beb7682eebf064130a6ecb3357856c51414e5ca5008b815dac8adc92"} Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.605187 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"e42b2871-2bbb-4439-9be6-ca3b594ce8f7","Type":"ContainerStarted","Data":"d86a00793f34ad531f9e141e6259994ab35caea785da05453a28de4a4ca7f410"} Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.606849 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9","Type":"ContainerStarted","Data":"65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3"} Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.606881 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9","Type":"ContainerStarted","Data":"ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e"} Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.607387 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9","Type":"ContainerStarted","Data":"16012cbc4f8816701b2f6fbf0a4eac06d47f2376f634fdf3769ae104e22402cd"} Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.609683 4813 generic.go:334] "Generic (PLEG): container finished" podID="333f58de-9f17-415d-9667-3f0a8b1d0bae" containerID="73e7a62e38d4c1f2ebc8c15fdd37d2456a28355f7878bcc8f1247f15f4774816" exitCode=0 Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.609752 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6464f94485-cmrjz" event={"ID":"333f58de-9f17-415d-9667-3f0a8b1d0bae","Type":"ContainerDied","Data":"73e7a62e38d4c1f2ebc8c15fdd37d2456a28355f7878bcc8f1247f15f4774816"} Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.609773 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6464f94485-cmrjz" event={"ID":"333f58de-9f17-415d-9667-3f0a8b1d0bae","Type":"ContainerStarted","Data":"b0365dbf2a85b51892292d9bfcdb066fd1d66c1b32415ed86e56fee4b645781c"} Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 
20:00:10.612701 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5f38238-79ad-457f-87b4-6aedb8bc0a2c","Type":"ContainerStarted","Data":"68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534"} Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.612842 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5f38238-79ad-457f-87b4-6aedb8bc0a2c","Type":"ContainerStarted","Data":"fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12"} Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.612937 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5f38238-79ad-457f-87b4-6aedb8bc0a2c","Type":"ContainerStarted","Data":"652d397359f72aee3da9fe23c9946b20c315726fda31da36f6e7459939772f06"} Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.614855 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x6wmz" event={"ID":"ffe08647-6b48-488d-86bd-84de78e5c05c","Type":"ContainerStarted","Data":"8bb58da7cf44d5538bb3196418fc4065e50905582a5d1d9cb75f6f62d99c018b"} Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.624245 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"86684df4-7521-416c-8599-2c8db67240c3","Type":"ContainerStarted","Data":"4aba640ef27b9e451f61e1c02830aa27c17a67496af4b4d8b1b2a68fb377c9df"} Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.624308 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"86684df4-7521-416c-8599-2c8db67240c3","Type":"ContainerStarted","Data":"4f7da98c01eadb2450c594c5d9b7349b4d3092eab2f7cdfcfcfc11ceac90a16c"} Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.628532 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6285111089999997 
podStartE2EDuration="2.628511109s" podCreationTimestamp="2026-02-19 20:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:00:10.620085938 +0000 UTC m=+5429.845526489" watchObservedRunningTime="2026-02-19 20:00:10.628511109 +0000 UTC m=+5429.853951650" Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.663797 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.6637767390000002 podStartE2EDuration="2.663776739s" podCreationTimestamp="2026-02-19 20:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:00:10.652923624 +0000 UTC m=+5429.878364175" watchObservedRunningTime="2026-02-19 20:00:10.663776739 +0000 UTC m=+5429.889217270" Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.714913 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-x6wmz" podStartSLOduration=2.714890001 podStartE2EDuration="2.714890001s" podCreationTimestamp="2026-02-19 20:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:00:10.673432128 +0000 UTC m=+5429.898872669" watchObservedRunningTime="2026-02-19 20:00:10.714890001 +0000 UTC m=+5429.940330542" Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.734429 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.734405665 podStartE2EDuration="2.734405665s" podCreationTimestamp="2026-02-19 20:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:00:10.723425095 +0000 UTC m=+5429.948865636" watchObservedRunningTime="2026-02-19 
20:00:10.734405665 +0000 UTC m=+5429.959846216" Feb 19 20:00:10 crc kubenswrapper[4813]: I0219 20:00:10.777098 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.777074915 podStartE2EDuration="2.777074915s" podCreationTimestamp="2026-02-19 20:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:00:10.751011268 +0000 UTC m=+5429.976451809" watchObservedRunningTime="2026-02-19 20:00:10.777074915 +0000 UTC m=+5430.002515456" Feb 19 20:00:11 crc kubenswrapper[4813]: I0219 20:00:11.634460 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cqkk6" event={"ID":"dfa00bea-2bcd-4843-ba6a-2e2b4070928d","Type":"ContainerStarted","Data":"14abec32d3c100619311126e80ac1248a5ffebf3248ea0153b1cce13a5dec395"} Feb 19 20:00:11 crc kubenswrapper[4813]: I0219 20:00:11.639881 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6464f94485-cmrjz" event={"ID":"333f58de-9f17-415d-9667-3f0a8b1d0bae","Type":"ContainerStarted","Data":"f4556a47b76c9f9a397dd7ac0cae211a861b23f84208b1ca0171a7dd87efd356"} Feb 19 20:00:11 crc kubenswrapper[4813]: I0219 20:00:11.639914 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:11 crc kubenswrapper[4813]: I0219 20:00:11.650626 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-cqkk6" podStartSLOduration=2.650610523 podStartE2EDuration="2.650610523s" podCreationTimestamp="2026-02-19 20:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:00:11.645899276 +0000 UTC m=+5430.871339827" watchObservedRunningTime="2026-02-19 20:00:11.650610523 +0000 UTC 
m=+5430.876051064" Feb 19 20:00:11 crc kubenswrapper[4813]: I0219 20:00:11.670839 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6464f94485-cmrjz" podStartSLOduration=3.670818927 podStartE2EDuration="3.670818927s" podCreationTimestamp="2026-02-19 20:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:00:11.666405551 +0000 UTC m=+5430.891846102" watchObservedRunningTime="2026-02-19 20:00:11.670818927 +0000 UTC m=+5430.896259468" Feb 19 20:00:13 crc kubenswrapper[4813]: I0219 20:00:13.655873 4813 generic.go:334] "Generic (PLEG): container finished" podID="dfa00bea-2bcd-4843-ba6a-2e2b4070928d" containerID="14abec32d3c100619311126e80ac1248a5ffebf3248ea0153b1cce13a5dec395" exitCode=0 Feb 19 20:00:13 crc kubenswrapper[4813]: I0219 20:00:13.655982 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cqkk6" event={"ID":"dfa00bea-2bcd-4843-ba6a-2e2b4070928d","Type":"ContainerDied","Data":"14abec32d3c100619311126e80ac1248a5ffebf3248ea0153b1cce13a5dec395"} Feb 19 20:00:13 crc kubenswrapper[4813]: I0219 20:00:13.981178 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 20:00:14 crc kubenswrapper[4813]: I0219 20:00:14.083247 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 20:00:14 crc kubenswrapper[4813]: I0219 20:00:14.083324 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 20:00:14 crc kubenswrapper[4813]: I0219 20:00:14.304518 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.084066 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.235386 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-scripts\") pod \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.235753 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-combined-ca-bundle\") pod \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.235839 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-config-data\") pod \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.235906 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whb4c\" (UniqueName: \"kubernetes.io/projected/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-kube-api-access-whb4c\") pod \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\" (UID: \"dfa00bea-2bcd-4843-ba6a-2e2b4070928d\") " Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.241129 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-scripts" (OuterVolumeSpecName: "scripts") pod "dfa00bea-2bcd-4843-ba6a-2e2b4070928d" (UID: "dfa00bea-2bcd-4843-ba6a-2e2b4070928d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.241669 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-kube-api-access-whb4c" (OuterVolumeSpecName: "kube-api-access-whb4c") pod "dfa00bea-2bcd-4843-ba6a-2e2b4070928d" (UID: "dfa00bea-2bcd-4843-ba6a-2e2b4070928d"). InnerVolumeSpecName "kube-api-access-whb4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.261770 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-config-data" (OuterVolumeSpecName: "config-data") pod "dfa00bea-2bcd-4843-ba6a-2e2b4070928d" (UID: "dfa00bea-2bcd-4843-ba6a-2e2b4070928d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.266105 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfa00bea-2bcd-4843-ba6a-2e2b4070928d" (UID: "dfa00bea-2bcd-4843-ba6a-2e2b4070928d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.338220 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.338253 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.338265 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.338276 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whb4c\" (UniqueName: \"kubernetes.io/projected/dfa00bea-2bcd-4843-ba6a-2e2b4070928d-kube-api-access-whb4c\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.681602 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cqkk6" event={"ID":"dfa00bea-2bcd-4843-ba6a-2e2b4070928d","Type":"ContainerDied","Data":"51927e9f40f749bd5af6b438bc6f67b40d48d7ccb537f39f46fe6f8912898508"} Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.681940 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51927e9f40f749bd5af6b438bc6f67b40d48d7ccb537f39f46fe6f8912898508" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.681638 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cqkk6" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.683360 4813 generic.go:334] "Generic (PLEG): container finished" podID="ffe08647-6b48-488d-86bd-84de78e5c05c" containerID="8bb58da7cf44d5538bb3196418fc4065e50905582a5d1d9cb75f6f62d99c018b" exitCode=0 Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.683626 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x6wmz" event={"ID":"ffe08647-6b48-488d-86bd-84de78e5c05c","Type":"ContainerDied","Data":"8bb58da7cf44d5538bb3196418fc4065e50905582a5d1d9cb75f6f62d99c018b"} Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.759766 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 20:00:15 crc kubenswrapper[4813]: E0219 20:00:15.760630 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa00bea-2bcd-4843-ba6a-2e2b4070928d" containerName="nova-cell1-conductor-db-sync" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.760744 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa00bea-2bcd-4843-ba6a-2e2b4070928d" containerName="nova-cell1-conductor-db-sync" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.761070 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa00bea-2bcd-4843-ba6a-2e2b4070928d" containerName="nova-cell1-conductor-db-sync" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.761979 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.766501 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.769794 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.948827 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzclq\" (UniqueName: \"kubernetes.io/projected/7cd7d095-1bd3-4307-b317-81d6150f4bb2-kube-api-access-rzclq\") pod \"nova-cell1-conductor-0\" (UID: \"7cd7d095-1bd3-4307-b317-81d6150f4bb2\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.948927 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd7d095-1bd3-4307-b317-81d6150f4bb2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7cd7d095-1bd3-4307-b317-81d6150f4bb2\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:00:15 crc kubenswrapper[4813]: I0219 20:00:15.949294 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd7d095-1bd3-4307-b317-81d6150f4bb2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7cd7d095-1bd3-4307-b317-81d6150f4bb2\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:00:16 crc kubenswrapper[4813]: I0219 20:00:16.051439 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd7d095-1bd3-4307-b317-81d6150f4bb2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7cd7d095-1bd3-4307-b317-81d6150f4bb2\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:00:16 crc 
kubenswrapper[4813]: I0219 20:00:16.051578 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd7d095-1bd3-4307-b317-81d6150f4bb2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7cd7d095-1bd3-4307-b317-81d6150f4bb2\") " pod="openstack/nova-cell1-conductor-0"
Feb 19 20:00:16 crc kubenswrapper[4813]: I0219 20:00:16.051647 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzclq\" (UniqueName: \"kubernetes.io/projected/7cd7d095-1bd3-4307-b317-81d6150f4bb2-kube-api-access-rzclq\") pod \"nova-cell1-conductor-0\" (UID: \"7cd7d095-1bd3-4307-b317-81d6150f4bb2\") " pod="openstack/nova-cell1-conductor-0"
Feb 19 20:00:16 crc kubenswrapper[4813]: I0219 20:00:16.058717 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd7d095-1bd3-4307-b317-81d6150f4bb2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7cd7d095-1bd3-4307-b317-81d6150f4bb2\") " pod="openstack/nova-cell1-conductor-0"
Feb 19 20:00:16 crc kubenswrapper[4813]: I0219 20:00:16.060917 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd7d095-1bd3-4307-b317-81d6150f4bb2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7cd7d095-1bd3-4307-b317-81d6150f4bb2\") " pod="openstack/nova-cell1-conductor-0"
Feb 19 20:00:16 crc kubenswrapper[4813]: I0219 20:00:16.067031 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzclq\" (UniqueName: \"kubernetes.io/projected/7cd7d095-1bd3-4307-b317-81d6150f4bb2-kube-api-access-rzclq\") pod \"nova-cell1-conductor-0\" (UID: \"7cd7d095-1bd3-4307-b317-81d6150f4bb2\") " pod="openstack/nova-cell1-conductor-0"
Feb 19 20:00:16 crc kubenswrapper[4813]: I0219 20:00:16.089999 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 19 20:00:16 crc kubenswrapper[4813]: I0219 20:00:16.525826 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 19 20:00:16 crc kubenswrapper[4813]: W0219 20:00:16.529505 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cd7d095_1bd3_4307_b317_81d6150f4bb2.slice/crio-bd165d9e9a96b8a55a9fa2cb4d0ed43adee11e97f6d59fc2b6e74dbd9951c124 WatchSource:0}: Error finding container bd165d9e9a96b8a55a9fa2cb4d0ed43adee11e97f6d59fc2b6e74dbd9951c124: Status 404 returned error can't find the container with id bd165d9e9a96b8a55a9fa2cb4d0ed43adee11e97f6d59fc2b6e74dbd9951c124
Feb 19 20:00:16 crc kubenswrapper[4813]: I0219 20:00:16.695238 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7cd7d095-1bd3-4307-b317-81d6150f4bb2","Type":"ContainerStarted","Data":"5f875800ffd45e80283683e8420f66bdf3acfdc7bba50970ffbd88e70d650994"}
Feb 19 20:00:16 crc kubenswrapper[4813]: I0219 20:00:16.695589 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7cd7d095-1bd3-4307-b317-81d6150f4bb2","Type":"ContainerStarted","Data":"bd165d9e9a96b8a55a9fa2cb4d0ed43adee11e97f6d59fc2b6e74dbd9951c124"}
Feb 19 20:00:16 crc kubenswrapper[4813]: I0219 20:00:16.715910 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.715865962 podStartE2EDuration="1.715865962s" podCreationTimestamp="2026-02-19 20:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:00:16.709128455 +0000 UTC m=+5435.934569016" watchObservedRunningTime="2026-02-19 20:00:16.715865962 +0000 UTC m=+5435.941306513"
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.013391 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x6wmz"
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.207019 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-config-data\") pod \"ffe08647-6b48-488d-86bd-84de78e5c05c\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") "
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.207162 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c76ll\" (UniqueName: \"kubernetes.io/projected/ffe08647-6b48-488d-86bd-84de78e5c05c-kube-api-access-c76ll\") pod \"ffe08647-6b48-488d-86bd-84de78e5c05c\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") "
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.207315 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-scripts\") pod \"ffe08647-6b48-488d-86bd-84de78e5c05c\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") "
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.207924 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-combined-ca-bundle\") pod \"ffe08647-6b48-488d-86bd-84de78e5c05c\" (UID: \"ffe08647-6b48-488d-86bd-84de78e5c05c\") "
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.213229 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-scripts" (OuterVolumeSpecName: "scripts") pod "ffe08647-6b48-488d-86bd-84de78e5c05c" (UID: "ffe08647-6b48-488d-86bd-84de78e5c05c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.214181 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffe08647-6b48-488d-86bd-84de78e5c05c-kube-api-access-c76ll" (OuterVolumeSpecName: "kube-api-access-c76ll") pod "ffe08647-6b48-488d-86bd-84de78e5c05c" (UID: "ffe08647-6b48-488d-86bd-84de78e5c05c"). InnerVolumeSpecName "kube-api-access-c76ll". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.257003 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffe08647-6b48-488d-86bd-84de78e5c05c" (UID: "ffe08647-6b48-488d-86bd-84de78e5c05c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.257342 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-config-data" (OuterVolumeSpecName: "config-data") pod "ffe08647-6b48-488d-86bd-84de78e5c05c" (UID: "ffe08647-6b48-488d-86bd-84de78e5c05c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.310438 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.310483 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.310497 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c76ll\" (UniqueName: \"kubernetes.io/projected/ffe08647-6b48-488d-86bd-84de78e5c05c-kube-api-access-c76ll\") on node \"crc\" DevicePath \"\""
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.310513 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffe08647-6b48-488d-86bd-84de78e5c05c-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.709580 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x6wmz"
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.710334 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x6wmz" event={"ID":"ffe08647-6b48-488d-86bd-84de78e5c05c","Type":"ContainerDied","Data":"6fcfee60dc1f485364be0cbc6a947004e043895fd9a1284700160e4c464071b0"}
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.710373 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fcfee60dc1f485364be0cbc6a947004e043895fd9a1284700160e4c464071b0"
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.710393 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.882519 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.882790 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b5f38238-79ad-457f-87b4-6aedb8bc0a2c" containerName="nova-api-log" containerID="cri-o://fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12" gracePeriod=30
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.882908 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b5f38238-79ad-457f-87b4-6aedb8bc0a2c" containerName="nova-api-api" containerID="cri-o://68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534" gracePeriod=30
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.899127 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.908754 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e42b2871-2bbb-4439-9be6-ca3b594ce8f7" containerName="nova-scheduler-scheduler" containerID="cri-o://38ec3f52beb7682eebf064130a6ecb3357856c51414e5ca5008b815dac8adc92" gracePeriod=30
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.913525 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.913738 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9" containerName="nova-metadata-log" containerID="cri-o://ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e" gracePeriod=30
Feb 19 20:00:17 crc kubenswrapper[4813]: I0219 20:00:17.913884 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9" containerName="nova-metadata-metadata" containerID="cri-o://65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3" gracePeriod=30
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.522680 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.532507 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.636190 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-combined-ca-bundle\") pod \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\" (UID: \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") "
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.636278 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-combined-ca-bundle\") pod \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") "
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.636338 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg7kb\" (UniqueName: \"kubernetes.io/projected/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-kube-api-access-hg7kb\") pod \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\" (UID: \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") "
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.636367 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-config-data\") pod \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\" (UID: \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") "
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.636473 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-logs\") pod \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") "
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.636519 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-config-data\") pod \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") "
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.636567 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6w29\" (UniqueName: \"kubernetes.io/projected/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-kube-api-access-x6w29\") pod \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\" (UID: \"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9\") "
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.636586 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-logs\") pod \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\" (UID: \"b5f38238-79ad-457f-87b4-6aedb8bc0a2c\") "
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.637226 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-logs" (OuterVolumeSpecName: "logs") pod "b5f38238-79ad-457f-87b4-6aedb8bc0a2c" (UID: "b5f38238-79ad-457f-87b4-6aedb8bc0a2c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.637678 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-logs" (OuterVolumeSpecName: "logs") pod "b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9" (UID: "b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.642468 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-kube-api-access-x6w29" (OuterVolumeSpecName: "kube-api-access-x6w29") pod "b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9" (UID: "b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9"). InnerVolumeSpecName "kube-api-access-x6w29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.642962 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-kube-api-access-hg7kb" (OuterVolumeSpecName: "kube-api-access-hg7kb") pod "b5f38238-79ad-457f-87b4-6aedb8bc0a2c" (UID: "b5f38238-79ad-457f-87b4-6aedb8bc0a2c"). InnerVolumeSpecName "kube-api-access-hg7kb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.660772 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9" (UID: "b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.661182 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5f38238-79ad-457f-87b4-6aedb8bc0a2c" (UID: "b5f38238-79ad-457f-87b4-6aedb8bc0a2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.665558 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-config-data" (OuterVolumeSpecName: "config-data") pod "b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9" (UID: "b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.673135 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-config-data" (OuterVolumeSpecName: "config-data") pod "b5f38238-79ad-457f-87b4-6aedb8bc0a2c" (UID: "b5f38238-79ad-457f-87b4-6aedb8bc0a2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.733528 4813 generic.go:334] "Generic (PLEG): container finished" podID="b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9" containerID="65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3" exitCode=0
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.733569 4813 generic.go:334] "Generic (PLEG): container finished" podID="b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9" containerID="ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e" exitCode=143
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.733588 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.733640 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9","Type":"ContainerDied","Data":"65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3"}
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.733698 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9","Type":"ContainerDied","Data":"ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e"}
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.733709 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9","Type":"ContainerDied","Data":"16012cbc4f8816701b2f6fbf0a4eac06d47f2376f634fdf3769ae104e22402cd"}
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.733727 4813 scope.go:117] "RemoveContainer" containerID="65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.738125 4813 generic.go:334] "Generic (PLEG): container finished" podID="b5f38238-79ad-457f-87b4-6aedb8bc0a2c" containerID="68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534" exitCode=0
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.738148 4813 generic.go:334] "Generic (PLEG): container finished" podID="b5f38238-79ad-457f-87b4-6aedb8bc0a2c" containerID="fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12" exitCode=143
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.738206 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5f38238-79ad-457f-87b4-6aedb8bc0a2c","Type":"ContainerDied","Data":"68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534"}
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.738245 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5f38238-79ad-457f-87b4-6aedb8bc0a2c","Type":"ContainerDied","Data":"fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12"}
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.738260 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5f38238-79ad-457f-87b4-6aedb8bc0a2c","Type":"ContainerDied","Data":"652d397359f72aee3da9fe23c9946b20c315726fda31da36f6e7459939772f06"}
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.738217 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.739051 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.739091 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg7kb\" (UniqueName: \"kubernetes.io/projected/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-kube-api-access-hg7kb\") on node \"crc\" DevicePath \"\""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.739113 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.739129 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-logs\") on node \"crc\" DevicePath \"\""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.739145 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.739161 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6w29\" (UniqueName: \"kubernetes.io/projected/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9-kube-api-access-x6w29\") on node \"crc\" DevicePath \"\""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.739177 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-logs\") on node \"crc\" DevicePath \"\""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.739192 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f38238-79ad-457f-87b4-6aedb8bc0a2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.768464 4813 scope.go:117] "RemoveContainer" containerID="ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.801031 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.823175 4813 scope.go:117] "RemoveContainer" containerID="65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.826304 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 20:00:18 crc kubenswrapper[4813]: E0219 20:00:18.829812 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3\": container with ID starting with 65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3 not found: ID does not exist" containerID="65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.829871 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3"} err="failed to get container status \"65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3\": rpc error: code = NotFound desc = could not find container \"65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3\": container with ID starting with 65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3 not found: ID does not exist"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.830109 4813 scope.go:117] "RemoveContainer" containerID="ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e"
Feb 19 20:00:18 crc kubenswrapper[4813]: E0219 20:00:18.830676 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e\": container with ID starting with ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e not found: ID does not exist" containerID="ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.830717 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e"} err="failed to get container status \"ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e\": rpc error: code = NotFound desc = could not find container \"ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e\": container with ID starting with ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e not found: ID does not exist"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.830744 4813 scope.go:117] "RemoveContainer" containerID="65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.831414 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3"} err="failed to get container status \"65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3\": rpc error: code = NotFound desc = could not find container \"65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3\": container with ID starting with 65ce39f72a18b5c0d7749800af7d020bb839667c60707b209158c48cb1eb85a3 not found: ID does not exist"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.831435 4813 scope.go:117] "RemoveContainer" containerID="ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.831895 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e"} err="failed to get container status \"ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e\": rpc error: code = NotFound desc = could not find container \"ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e\": container with ID starting with ed17e2505f12b9bc5c6d5620ca981ba38af33dbabd24e27766f79ae5d164a99e not found: ID does not exist"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.831973 4813 scope.go:117] "RemoveContainer" containerID="68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.834878 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.847022 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.852367 4813 scope.go:117] "RemoveContainer" containerID="fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.858913 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 20:00:18 crc kubenswrapper[4813]: E0219 20:00:18.859450 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f38238-79ad-457f-87b4-6aedb8bc0a2c" containerName="nova-api-api"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.859474 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f38238-79ad-457f-87b4-6aedb8bc0a2c" containerName="nova-api-api"
Feb 19 20:00:18 crc kubenswrapper[4813]: E0219 20:00:18.859506 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f38238-79ad-457f-87b4-6aedb8bc0a2c" containerName="nova-api-log"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.859515 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f38238-79ad-457f-87b4-6aedb8bc0a2c" containerName="nova-api-log"
Feb 19 20:00:18 crc kubenswrapper[4813]: E0219 20:00:18.859533 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffe08647-6b48-488d-86bd-84de78e5c05c" containerName="nova-manage"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.859542 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffe08647-6b48-488d-86bd-84de78e5c05c" containerName="nova-manage"
Feb 19 20:00:18 crc kubenswrapper[4813]: E0219 20:00:18.859559 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9" containerName="nova-metadata-metadata"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.859566 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9" containerName="nova-metadata-metadata"
Feb 19 20:00:18 crc kubenswrapper[4813]: E0219 20:00:18.859576 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9" containerName="nova-metadata-log"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.859585 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9" containerName="nova-metadata-log"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.859823 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9" containerName="nova-metadata-log"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.859886 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f38238-79ad-457f-87b4-6aedb8bc0a2c" containerName="nova-api-log"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.859905 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffe08647-6b48-488d-86bd-84de78e5c05c" containerName="nova-manage"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.859914 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f38238-79ad-457f-87b4-6aedb8bc0a2c" containerName="nova-api-api"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.859930 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9" containerName="nova-metadata-metadata"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.861046 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.863521 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.872067 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.873692 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.877199 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.884154 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.892735 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.893513 4813 scope.go:117] "RemoveContainer" containerID="68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534"
Feb 19 20:00:18 crc kubenswrapper[4813]: E0219 20:00:18.894233 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534\": container with ID starting with 68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534 not found: ID does not exist" containerID="68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.894281 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534"} err="failed to get container status \"68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534\": rpc error: code = NotFound desc = could not find container \"68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534\": container with ID starting with 68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534 not found: ID does not exist"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.894314 4813 scope.go:117] "RemoveContainer" containerID="fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12"
Feb 19 20:00:18 crc kubenswrapper[4813]: E0219 20:00:18.894736 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12\": container with ID starting with fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12 not found: ID does not exist" containerID="fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.894773 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12"} err="failed to get container status \"fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12\": rpc error: code = NotFound desc = could not find container \"fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12\": container with ID starting with fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12 not found: ID does not exist"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.894797 4813 scope.go:117] "RemoveContainer" containerID="68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.895344 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534"} err="failed to get container status \"68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534\": rpc error: code = NotFound desc = could not find container \"68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534\": container with ID starting with 68ed855b2e64b333c8db694020b1d17c621bcbff013a6d4fa38dacbb2b38a534 not found: ID does not exist"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.895417 4813 scope.go:117] "RemoveContainer" containerID="fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.895758 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12"} err="failed to get container status \"fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12\": rpc error: code = NotFound desc = could not find container \"fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12\": container with ID starting with fef22b7dbb7b9b1f179b31446fdbcbd0b60905b99fbcdb72839697ce46136c12 not found: ID does not exist"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.942034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bqtv\" (UniqueName: \"kubernetes.io/projected/51e8cdf9-4bd9-436f-8933-271ef1bef791-kube-api-access-5bqtv\") pod \"nova-metadata-0\" (UID: \"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " pod="openstack/nova-metadata-0"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.942124 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51e8cdf9-4bd9-436f-8933-271ef1bef791-logs\") pod \"nova-metadata-0\" (UID: \"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " pod="openstack/nova-metadata-0"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.942159 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e8cdf9-4bd9-436f-8933-271ef1bef791-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " pod="openstack/nova-metadata-0"
Feb 19 20:00:18 crc kubenswrapper[4813]: I0219 20:00:18.942210 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e8cdf9-4bd9-436f-8933-271ef1bef791-config-data\") pod \"nova-metadata-0\" (UID: \"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " pod="openstack/nova-metadata-0"
Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.043575 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e8cdf9-4bd9-436f-8933-271ef1bef791-config-data\") pod \"nova-metadata-0\" (UID: \"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " pod="openstack/nova-metadata-0"
Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.043746 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wsv2\" (UniqueName: \"kubernetes.io/projected/3545a303-e475-410d-9748-5c14674c5973-kube-api-access-9wsv2\") pod \"nova-api-0\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " pod="openstack/nova-api-0"
Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.043816 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3545a303-e475-410d-9748-5c14674c5973-config-data\") pod \"nova-api-0\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " pod="openstack/nova-api-0"
Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.043837 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bqtv\" (UniqueName: \"kubernetes.io/projected/51e8cdf9-4bd9-436f-8933-271ef1bef791-kube-api-access-5bqtv\") pod \"nova-metadata-0\" (UID: \"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " pod="openstack/nova-metadata-0"
Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.043857 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3545a303-e475-410d-9748-5c14674c5973-logs\") pod \"nova-api-0\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " pod="openstack/nova-api-0"
Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.043903 4813
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3545a303-e475-410d-9748-5c14674c5973-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " pod="openstack/nova-api-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.043940 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51e8cdf9-4bd9-436f-8933-271ef1bef791-logs\") pod \"nova-metadata-0\" (UID: \"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " pod="openstack/nova-metadata-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.043993 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e8cdf9-4bd9-436f-8933-271ef1bef791-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " pod="openstack/nova-metadata-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.044936 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51e8cdf9-4bd9-436f-8933-271ef1bef791-logs\") pod \"nova-metadata-0\" (UID: \"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " pod="openstack/nova-metadata-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.047636 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e8cdf9-4bd9-436f-8933-271ef1bef791-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " pod="openstack/nova-metadata-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.049854 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e8cdf9-4bd9-436f-8933-271ef1bef791-config-data\") pod \"nova-metadata-0\" (UID: 
\"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " pod="openstack/nova-metadata-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.060726 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bqtv\" (UniqueName: \"kubernetes.io/projected/51e8cdf9-4bd9-436f-8933-271ef1bef791-kube-api-access-5bqtv\") pod \"nova-metadata-0\" (UID: \"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " pod="openstack/nova-metadata-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.146303 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wsv2\" (UniqueName: \"kubernetes.io/projected/3545a303-e475-410d-9748-5c14674c5973-kube-api-access-9wsv2\") pod \"nova-api-0\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " pod="openstack/nova-api-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.146404 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3545a303-e475-410d-9748-5c14674c5973-config-data\") pod \"nova-api-0\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " pod="openstack/nova-api-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.146429 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3545a303-e475-410d-9748-5c14674c5973-logs\") pod \"nova-api-0\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " pod="openstack/nova-api-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.146471 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3545a303-e475-410d-9748-5c14674c5973-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " pod="openstack/nova-api-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.147499 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/3545a303-e475-410d-9748-5c14674c5973-logs\") pod \"nova-api-0\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " pod="openstack/nova-api-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.150792 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3545a303-e475-410d-9748-5c14674c5973-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " pod="openstack/nova-api-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.153602 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3545a303-e475-410d-9748-5c14674c5973-config-data\") pod \"nova-api-0\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " pod="openstack/nova-api-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.165001 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wsv2\" (UniqueName: \"kubernetes.io/projected/3545a303-e475-410d-9748-5c14674c5973-kube-api-access-9wsv2\") pod \"nova-api-0\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " pod="openstack/nova-api-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.182505 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.196851 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.302503 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.305869 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.337632 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.483630 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9" path="/var/lib/kubelet/pods/b079fbd3-97d7-4bf3-81f6-4fa19b0e03c9/volumes" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.484573 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5f38238-79ad-457f-87b4-6aedb8bc0a2c" path="/var/lib/kubelet/pods/b5f38238-79ad-457f-87b4-6aedb8bc0a2c/volumes" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.485253 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b5bb7-mhv85"] Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.485490 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b5bb7-mhv85" podUID="055c5861-4fba-4801-b657-8524cd6d8320" containerName="dnsmasq-dns" containerID="cri-o://0b792002658d31160dc7d39e7d85c65a6eaf29378e87b8a6a61f587b84aa5808" gracePeriod=10 Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.676546 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.731614 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.754507 4813 generic.go:334] "Generic 
(PLEG): container finished" podID="055c5861-4fba-4801-b657-8524cd6d8320" containerID="0b792002658d31160dc7d39e7d85c65a6eaf29378e87b8a6a61f587b84aa5808" exitCode=0 Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.754575 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b5bb7-mhv85" event={"ID":"055c5861-4fba-4801-b657-8524cd6d8320","Type":"ContainerDied","Data":"0b792002658d31160dc7d39e7d85c65a6eaf29378e87b8a6a61f587b84aa5808"} Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.764458 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51e8cdf9-4bd9-436f-8933-271ef1bef791","Type":"ContainerStarted","Data":"20bbdd74638ccb4ff5f3a3f39e7a2871990108c4a60fccf364ddc1e3b945b5f0"} Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.778272 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:00:19 crc kubenswrapper[4813]: I0219 20:00:19.957507 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.064331 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v55b5\" (UniqueName: \"kubernetes.io/projected/055c5861-4fba-4801-b657-8524cd6d8320-kube-api-access-v55b5\") pod \"055c5861-4fba-4801-b657-8524cd6d8320\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.064438 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-ovsdbserver-nb\") pod \"055c5861-4fba-4801-b657-8524cd6d8320\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.064489 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-dns-svc\") pod \"055c5861-4fba-4801-b657-8524cd6d8320\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.064538 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-config\") pod \"055c5861-4fba-4801-b657-8524cd6d8320\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.064607 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-ovsdbserver-sb\") pod \"055c5861-4fba-4801-b657-8524cd6d8320\" (UID: \"055c5861-4fba-4801-b657-8524cd6d8320\") " Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.078499 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/055c5861-4fba-4801-b657-8524cd6d8320-kube-api-access-v55b5" (OuterVolumeSpecName: "kube-api-access-v55b5") pod "055c5861-4fba-4801-b657-8524cd6d8320" (UID: "055c5861-4fba-4801-b657-8524cd6d8320"). InnerVolumeSpecName "kube-api-access-v55b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.118341 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "055c5861-4fba-4801-b657-8524cd6d8320" (UID: "055c5861-4fba-4801-b657-8524cd6d8320"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.122944 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "055c5861-4fba-4801-b657-8524cd6d8320" (UID: "055c5861-4fba-4801-b657-8524cd6d8320"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.124140 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "055c5861-4fba-4801-b657-8524cd6d8320" (UID: "055c5861-4fba-4801-b657-8524cd6d8320"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.126547 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-config" (OuterVolumeSpecName: "config") pod "055c5861-4fba-4801-b657-8524cd6d8320" (UID: "055c5861-4fba-4801-b657-8524cd6d8320"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.166483 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.166533 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.166552 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v55b5\" (UniqueName: \"kubernetes.io/projected/055c5861-4fba-4801-b657-8524cd6d8320-kube-api-access-v55b5\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.166566 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.166578 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/055c5861-4fba-4801-b657-8524cd6d8320-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.778967 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3545a303-e475-410d-9748-5c14674c5973","Type":"ContainerStarted","Data":"bc10905392e483254745d0e73f39292400a5c4ab7d56832dfbf00a6f80b6e762"} Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.779006 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3545a303-e475-410d-9748-5c14674c5973","Type":"ContainerStarted","Data":"7a56d5a97b0c0579e9a098cab123a6ecfb4da4690b529ffcbf05e3cc5dbd1e8a"} Feb 19 20:00:20 crc 
kubenswrapper[4813]: I0219 20:00:20.779018 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3545a303-e475-410d-9748-5c14674c5973","Type":"ContainerStarted","Data":"1316a23429920407476e11ba74a63001753304d5e4626f66aff4ce3cb69d3056"} Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.786484 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51e8cdf9-4bd9-436f-8933-271ef1bef791","Type":"ContainerStarted","Data":"c6a10b20d5807d213f6c61f27a2b93e6656f86063772ac1f5eb1a4e98795c2bc"} Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.786526 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51e8cdf9-4bd9-436f-8933-271ef1bef791","Type":"ContainerStarted","Data":"9f9f2c28ba1330f397d13573d8ef534e2d49d84c5a6ba4c4c4c8cc4b29439281"} Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.790060 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b5bb7-mhv85" event={"ID":"055c5861-4fba-4801-b657-8524cd6d8320","Type":"ContainerDied","Data":"2e6172fcb0e84979f4b278685aae69f85e8bc563bf24bc584f111f32287b358b"} Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.790110 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b5bb7-mhv85" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.790132 4813 scope.go:117] "RemoveContainer" containerID="0b792002658d31160dc7d39e7d85c65a6eaf29378e87b8a6a61f587b84aa5808" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.820908 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.820877073 podStartE2EDuration="2.820877073s" podCreationTimestamp="2026-02-19 20:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:00:20.811009948 +0000 UTC m=+5440.036450489" watchObservedRunningTime="2026-02-19 20:00:20.820877073 +0000 UTC m=+5440.046317664" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.836928 4813 scope.go:117] "RemoveContainer" containerID="d9097947797c0b9c44ff50559a41888fc421fe602e2ace039c18dd5fb8902ff1" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.844771 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.844751202 podStartE2EDuration="2.844751202s" podCreationTimestamp="2026-02-19 20:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:00:20.837022702 +0000 UTC m=+5440.062463253" watchObservedRunningTime="2026-02-19 20:00:20.844751202 +0000 UTC m=+5440.070191743" Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.860250 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b5bb7-mhv85"] Feb 19 20:00:20 crc kubenswrapper[4813]: I0219 20:00:20.873124 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b5bb7-mhv85"] Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.116552 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-cell1-conductor-0" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.485432 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="055c5861-4fba-4801-b657-8524cd6d8320" path="/var/lib/kubelet/pods/055c5861-4fba-4801-b657-8524cd6d8320/volumes" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.578107 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-287ns"] Feb 19 20:00:21 crc kubenswrapper[4813]: E0219 20:00:21.578901 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055c5861-4fba-4801-b657-8524cd6d8320" containerName="init" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.578920 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="055c5861-4fba-4801-b657-8524cd6d8320" containerName="init" Feb 19 20:00:21 crc kubenswrapper[4813]: E0219 20:00:21.578942 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055c5861-4fba-4801-b657-8524cd6d8320" containerName="dnsmasq-dns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.578969 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="055c5861-4fba-4801-b657-8524cd6d8320" containerName="dnsmasq-dns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.579222 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="055c5861-4fba-4801-b657-8524cd6d8320" containerName="dnsmasq-dns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.579997 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.583991 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.584275 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.587463 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-287ns"] Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.700267 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-scripts\") pod \"nova-cell1-cell-mapping-287ns\" (UID: \"488329c3-4802-41f2-afbf-66ff5389c1a2\") " pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.700351 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-config-data\") pod \"nova-cell1-cell-mapping-287ns\" (UID: \"488329c3-4802-41f2-afbf-66ff5389c1a2\") " pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.700408 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-287ns\" (UID: \"488329c3-4802-41f2-afbf-66ff5389c1a2\") " pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.700515 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2db89\" (UniqueName: 
\"kubernetes.io/projected/488329c3-4802-41f2-afbf-66ff5389c1a2-kube-api-access-2db89\") pod \"nova-cell1-cell-mapping-287ns\" (UID: \"488329c3-4802-41f2-afbf-66ff5389c1a2\") " pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.799424 4813 generic.go:334] "Generic (PLEG): container finished" podID="e42b2871-2bbb-4439-9be6-ca3b594ce8f7" containerID="38ec3f52beb7682eebf064130a6ecb3357856c51414e5ca5008b815dac8adc92" exitCode=0 Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.799474 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e42b2871-2bbb-4439-9be6-ca3b594ce8f7","Type":"ContainerDied","Data":"38ec3f52beb7682eebf064130a6ecb3357856c51414e5ca5008b815dac8adc92"} Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.799498 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e42b2871-2bbb-4439-9be6-ca3b594ce8f7","Type":"ContainerDied","Data":"d86a00793f34ad531f9e141e6259994ab35caea785da05453a28de4a4ca7f410"} Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.799509 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d86a00793f34ad531f9e141e6259994ab35caea785da05453a28de4a4ca7f410" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.803013 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-scripts\") pod \"nova-cell1-cell-mapping-287ns\" (UID: \"488329c3-4802-41f2-afbf-66ff5389c1a2\") " pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.803075 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-config-data\") pod \"nova-cell1-cell-mapping-287ns\" (UID: 
\"488329c3-4802-41f2-afbf-66ff5389c1a2\") " pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.803122 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-287ns\" (UID: \"488329c3-4802-41f2-afbf-66ff5389c1a2\") " pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.803162 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2db89\" (UniqueName: \"kubernetes.io/projected/488329c3-4802-41f2-afbf-66ff5389c1a2-kube-api-access-2db89\") pod \"nova-cell1-cell-mapping-287ns\" (UID: \"488329c3-4802-41f2-afbf-66ff5389c1a2\") " pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.810775 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-config-data\") pod \"nova-cell1-cell-mapping-287ns\" (UID: \"488329c3-4802-41f2-afbf-66ff5389c1a2\") " pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.810996 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-287ns\" (UID: \"488329c3-4802-41f2-afbf-66ff5389c1a2\") " pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.811149 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-scripts\") pod \"nova-cell1-cell-mapping-287ns\" (UID: \"488329c3-4802-41f2-afbf-66ff5389c1a2\") " 
pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.825006 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2db89\" (UniqueName: \"kubernetes.io/projected/488329c3-4802-41f2-afbf-66ff5389c1a2-kube-api-access-2db89\") pod \"nova-cell1-cell-mapping-287ns\" (UID: \"488329c3-4802-41f2-afbf-66ff5389c1a2\") " pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.896136 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:00:21 crc kubenswrapper[4813]: I0219 20:00:21.912613 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.005987 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-combined-ca-bundle\") pod \"e42b2871-2bbb-4439-9be6-ca3b594ce8f7\" (UID: \"e42b2871-2bbb-4439-9be6-ca3b594ce8f7\") " Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.006230 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpkd8\" (UniqueName: \"kubernetes.io/projected/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-kube-api-access-gpkd8\") pod \"e42b2871-2bbb-4439-9be6-ca3b594ce8f7\" (UID: \"e42b2871-2bbb-4439-9be6-ca3b594ce8f7\") " Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.006262 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-config-data\") pod \"e42b2871-2bbb-4439-9be6-ca3b594ce8f7\" (UID: \"e42b2871-2bbb-4439-9be6-ca3b594ce8f7\") " Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.019086 4813 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-kube-api-access-gpkd8" (OuterVolumeSpecName: "kube-api-access-gpkd8") pod "e42b2871-2bbb-4439-9be6-ca3b594ce8f7" (UID: "e42b2871-2bbb-4439-9be6-ca3b594ce8f7"). InnerVolumeSpecName "kube-api-access-gpkd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.041884 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e42b2871-2bbb-4439-9be6-ca3b594ce8f7" (UID: "e42b2871-2bbb-4439-9be6-ca3b594ce8f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.046715 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-config-data" (OuterVolumeSpecName: "config-data") pod "e42b2871-2bbb-4439-9be6-ca3b594ce8f7" (UID: "e42b2871-2bbb-4439-9be6-ca3b594ce8f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.108051 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpkd8\" (UniqueName: \"kubernetes.io/projected/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-kube-api-access-gpkd8\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.108087 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.108096 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42b2871-2bbb-4439-9be6-ca3b594ce8f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:22 crc kubenswrapper[4813]: W0219 20:00:22.378835 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod488329c3_4802_41f2_afbf_66ff5389c1a2.slice/crio-82ac58d8143f2d57ec64ed88648fcaafae18250775ebbc619c774be311aba665 WatchSource:0}: Error finding container 82ac58d8143f2d57ec64ed88648fcaafae18250775ebbc619c774be311aba665: Status 404 returned error can't find the container with id 82ac58d8143f2d57ec64ed88648fcaafae18250775ebbc619c774be311aba665 Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.388033 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-287ns"] Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.811341 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.811736 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-287ns" event={"ID":"488329c3-4802-41f2-afbf-66ff5389c1a2","Type":"ContainerStarted","Data":"c6ae10887f4836597b3bdd2d93672a6fdee2315446f3bed0ad1aebb8df86609c"} Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.811777 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-287ns" event={"ID":"488329c3-4802-41f2-afbf-66ff5389c1a2","Type":"ContainerStarted","Data":"82ac58d8143f2d57ec64ed88648fcaafae18250775ebbc619c774be311aba665"} Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.838137 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-287ns" podStartSLOduration=1.838121488 podStartE2EDuration="1.838121488s" podCreationTimestamp="2026-02-19 20:00:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:00:22.836622591 +0000 UTC m=+5442.062063132" watchObservedRunningTime="2026-02-19 20:00:22.838121488 +0000 UTC m=+5442.063562029" Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.868759 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.887743 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.897733 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:00:22 crc kubenswrapper[4813]: E0219 20:00:22.898153 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42b2871-2bbb-4439-9be6-ca3b594ce8f7" containerName="nova-scheduler-scheduler" Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.898173 4813 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e42b2871-2bbb-4439-9be6-ca3b594ce8f7" containerName="nova-scheduler-scheduler" Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.898405 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42b2871-2bbb-4439-9be6-ca3b594ce8f7" containerName="nova-scheduler-scheduler" Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.899456 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.904332 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 20:00:22 crc kubenswrapper[4813]: I0219 20:00:22.908447 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:00:23 crc kubenswrapper[4813]: I0219 20:00:23.023925 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39da5831-4cca-4b21-a816-3ec615b48e80-config-data\") pod \"nova-scheduler-0\" (UID: \"39da5831-4cca-4b21-a816-3ec615b48e80\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:23 crc kubenswrapper[4813]: I0219 20:00:23.024027 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpfzk\" (UniqueName: \"kubernetes.io/projected/39da5831-4cca-4b21-a816-3ec615b48e80-kube-api-access-mpfzk\") pod \"nova-scheduler-0\" (UID: \"39da5831-4cca-4b21-a816-3ec615b48e80\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:23 crc kubenswrapper[4813]: I0219 20:00:23.024052 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39da5831-4cca-4b21-a816-3ec615b48e80-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"39da5831-4cca-4b21-a816-3ec615b48e80\") " 
pod="openstack/nova-scheduler-0" Feb 19 20:00:23 crc kubenswrapper[4813]: I0219 20:00:23.125378 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39da5831-4cca-4b21-a816-3ec615b48e80-config-data\") pod \"nova-scheduler-0\" (UID: \"39da5831-4cca-4b21-a816-3ec615b48e80\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:23 crc kubenswrapper[4813]: I0219 20:00:23.125466 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpfzk\" (UniqueName: \"kubernetes.io/projected/39da5831-4cca-4b21-a816-3ec615b48e80-kube-api-access-mpfzk\") pod \"nova-scheduler-0\" (UID: \"39da5831-4cca-4b21-a816-3ec615b48e80\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:23 crc kubenswrapper[4813]: I0219 20:00:23.125496 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39da5831-4cca-4b21-a816-3ec615b48e80-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"39da5831-4cca-4b21-a816-3ec615b48e80\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:23 crc kubenswrapper[4813]: I0219 20:00:23.131568 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39da5831-4cca-4b21-a816-3ec615b48e80-config-data\") pod \"nova-scheduler-0\" (UID: \"39da5831-4cca-4b21-a816-3ec615b48e80\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:23 crc kubenswrapper[4813]: I0219 20:00:23.131978 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39da5831-4cca-4b21-a816-3ec615b48e80-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"39da5831-4cca-4b21-a816-3ec615b48e80\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:23 crc kubenswrapper[4813]: I0219 20:00:23.151311 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mpfzk\" (UniqueName: \"kubernetes.io/projected/39da5831-4cca-4b21-a816-3ec615b48e80-kube-api-access-mpfzk\") pod \"nova-scheduler-0\" (UID: \"39da5831-4cca-4b21-a816-3ec615b48e80\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:23 crc kubenswrapper[4813]: I0219 20:00:23.221465 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:00:23 crc kubenswrapper[4813]: I0219 20:00:23.482785 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e42b2871-2bbb-4439-9be6-ca3b594ce8f7" path="/var/lib/kubelet/pods/e42b2871-2bbb-4439-9be6-ca3b594ce8f7/volumes" Feb 19 20:00:23 crc kubenswrapper[4813]: I0219 20:00:23.671208 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:00:23 crc kubenswrapper[4813]: I0219 20:00:23.831271 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"39da5831-4cca-4b21-a816-3ec615b48e80","Type":"ContainerStarted","Data":"fffd920d013ac5c5a559aa48f7d2489665b83aec3f89f5af341a7683d2f4571e"} Feb 19 20:00:24 crc kubenswrapper[4813]: I0219 20:00:24.186114 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 20:00:24 crc kubenswrapper[4813]: I0219 20:00:24.186476 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 20:00:24 crc kubenswrapper[4813]: I0219 20:00:24.858208 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"39da5831-4cca-4b21-a816-3ec615b48e80","Type":"ContainerStarted","Data":"ff188836e3a71cf5bd5eef1237f2d1a08d130004535cc34d6805f89bfb231d20"} Feb 19 20:00:24 crc kubenswrapper[4813]: I0219 20:00:24.879873 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.879854799 podStartE2EDuration="2.879854799s" 
podCreationTimestamp="2026-02-19 20:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:00:24.877040802 +0000 UTC m=+5444.102481523" watchObservedRunningTime="2026-02-19 20:00:24.879854799 +0000 UTC m=+5444.105295340" Feb 19 20:00:27 crc kubenswrapper[4813]: I0219 20:00:27.884660 4813 generic.go:334] "Generic (PLEG): container finished" podID="488329c3-4802-41f2-afbf-66ff5389c1a2" containerID="c6ae10887f4836597b3bdd2d93672a6fdee2315446f3bed0ad1aebb8df86609c" exitCode=0 Feb 19 20:00:27 crc kubenswrapper[4813]: I0219 20:00:27.884753 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-287ns" event={"ID":"488329c3-4802-41f2-afbf-66ff5389c1a2","Type":"ContainerDied","Data":"c6ae10887f4836597b3bdd2d93672a6fdee2315446f3bed0ad1aebb8df86609c"} Feb 19 20:00:28 crc kubenswrapper[4813]: I0219 20:00:28.221930 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.182946 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.183136 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.198081 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.198114 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.260247 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.350562 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-config-data\") pod \"488329c3-4802-41f2-afbf-66ff5389c1a2\" (UID: \"488329c3-4802-41f2-afbf-66ff5389c1a2\") " Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.350724 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-combined-ca-bundle\") pod \"488329c3-4802-41f2-afbf-66ff5389c1a2\" (UID: \"488329c3-4802-41f2-afbf-66ff5389c1a2\") " Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.350780 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-scripts\") pod \"488329c3-4802-41f2-afbf-66ff5389c1a2\" (UID: \"488329c3-4802-41f2-afbf-66ff5389c1a2\") " Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.350813 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2db89\" (UniqueName: \"kubernetes.io/projected/488329c3-4802-41f2-afbf-66ff5389c1a2-kube-api-access-2db89\") pod \"488329c3-4802-41f2-afbf-66ff5389c1a2\" (UID: \"488329c3-4802-41f2-afbf-66ff5389c1a2\") " Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.371235 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/488329c3-4802-41f2-afbf-66ff5389c1a2-kube-api-access-2db89" (OuterVolumeSpecName: "kube-api-access-2db89") pod "488329c3-4802-41f2-afbf-66ff5389c1a2" (UID: "488329c3-4802-41f2-afbf-66ff5389c1a2"). InnerVolumeSpecName "kube-api-access-2db89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.373500 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-scripts" (OuterVolumeSpecName: "scripts") pod "488329c3-4802-41f2-afbf-66ff5389c1a2" (UID: "488329c3-4802-41f2-afbf-66ff5389c1a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.392227 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "488329c3-4802-41f2-afbf-66ff5389c1a2" (UID: "488329c3-4802-41f2-afbf-66ff5389c1a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.401515 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-config-data" (OuterVolumeSpecName: "config-data") pod "488329c3-4802-41f2-afbf-66ff5389c1a2" (UID: "488329c3-4802-41f2-afbf-66ff5389c1a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.455149 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.455191 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.455201 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2db89\" (UniqueName: \"kubernetes.io/projected/488329c3-4802-41f2-afbf-66ff5389c1a2-kube-api-access-2db89\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.455221 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/488329c3-4802-41f2-afbf-66ff5389c1a2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.902549 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-287ns" event={"ID":"488329c3-4802-41f2-afbf-66ff5389c1a2","Type":"ContainerDied","Data":"82ac58d8143f2d57ec64ed88648fcaafae18250775ebbc619c774be311aba665"} Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.902581 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ac58d8143f2d57ec64ed88648fcaafae18250775ebbc619c774be311aba665" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:29.902648 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-287ns" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.115074 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.115323 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3545a303-e475-410d-9748-5c14674c5973" containerName="nova-api-log" containerID="cri-o://7a56d5a97b0c0579e9a098cab123a6ecfb4da4690b529ffcbf05e3cc5dbd1e8a" gracePeriod=30 Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.115748 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3545a303-e475-410d-9748-5c14674c5973" containerName="nova-api-api" containerID="cri-o://bc10905392e483254745d0e73f39292400a5c4ab7d56832dfbf00a6f80b6e762" gracePeriod=30 Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.125490 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.125732 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="39da5831-4cca-4b21-a816-3ec615b48e80" containerName="nova-scheduler-scheduler" containerID="cri-o://ff188836e3a71cf5bd5eef1237f2d1a08d130004535cc34d6805f89bfb231d20" gracePeriod=30 Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.128840 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3545a303-e475-410d-9748-5c14674c5973" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.64:8774/\": EOF" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.129017 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3545a303-e475-410d-9748-5c14674c5973" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.64:8774/\": EOF" Feb 19 
20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.198927 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.199225 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="51e8cdf9-4bd9-436f-8933-271ef1bef791" containerName="nova-metadata-log" containerID="cri-o://9f9f2c28ba1330f397d13573d8ef534e2d49d84c5a6ba4c4c4c8cc4b29439281" gracePeriod=30 Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.199313 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="51e8cdf9-4bd9-436f-8933-271ef1bef791" containerName="nova-metadata-metadata" containerID="cri-o://c6a10b20d5807d213f6c61f27a2b93e6656f86063772ac1f5eb1a4e98795c2bc" gracePeriod=30 Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.202987 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="51e8cdf9-4bd9-436f-8933-271ef1bef791" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.63:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.203115 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="51e8cdf9-4bd9-436f-8933-271ef1bef791" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.63:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.910831 4813 generic.go:334] "Generic (PLEG): container finished" podID="3545a303-e475-410d-9748-5c14674c5973" containerID="7a56d5a97b0c0579e9a098cab123a6ecfb4da4690b529ffcbf05e3cc5dbd1e8a" exitCode=143 Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.911025 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"3545a303-e475-410d-9748-5c14674c5973","Type":"ContainerDied","Data":"7a56d5a97b0c0579e9a098cab123a6ecfb4da4690b529ffcbf05e3cc5dbd1e8a"} Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.912659 4813 generic.go:334] "Generic (PLEG): container finished" podID="51e8cdf9-4bd9-436f-8933-271ef1bef791" containerID="9f9f2c28ba1330f397d13573d8ef534e2d49d84c5a6ba4c4c4c8cc4b29439281" exitCode=143 Feb 19 20:00:30 crc kubenswrapper[4813]: I0219 20:00:30.912692 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51e8cdf9-4bd9-436f-8933-271ef1bef791","Type":"ContainerDied","Data":"9f9f2c28ba1330f397d13573d8ef534e2d49d84c5a6ba4c4c4c8cc4b29439281"} Feb 19 20:00:31 crc kubenswrapper[4813]: I0219 20:00:31.912911 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:00:31 crc kubenswrapper[4813]: I0219 20:00:31.923744 4813 generic.go:334] "Generic (PLEG): container finished" podID="39da5831-4cca-4b21-a816-3ec615b48e80" containerID="ff188836e3a71cf5bd5eef1237f2d1a08d130004535cc34d6805f89bfb231d20" exitCode=0 Feb 19 20:00:31 crc kubenswrapper[4813]: I0219 20:00:31.923794 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"39da5831-4cca-4b21-a816-3ec615b48e80","Type":"ContainerDied","Data":"ff188836e3a71cf5bd5eef1237f2d1a08d130004535cc34d6805f89bfb231d20"} Feb 19 20:00:31 crc kubenswrapper[4813]: I0219 20:00:31.923829 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"39da5831-4cca-4b21-a816-3ec615b48e80","Type":"ContainerDied","Data":"fffd920d013ac5c5a559aa48f7d2489665b83aec3f89f5af341a7683d2f4571e"} Feb 19 20:00:31 crc kubenswrapper[4813]: I0219 20:00:31.923849 4813 scope.go:117] "RemoveContainer" containerID="ff188836e3a71cf5bd5eef1237f2d1a08d130004535cc34d6805f89bfb231d20" Feb 19 20:00:31 crc kubenswrapper[4813]: I0219 
20:00:31.923999 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:00:31 crc kubenswrapper[4813]: I0219 20:00:31.956590 4813 scope.go:117] "RemoveContainer" containerID="ff188836e3a71cf5bd5eef1237f2d1a08d130004535cc34d6805f89bfb231d20" Feb 19 20:00:31 crc kubenswrapper[4813]: E0219 20:00:31.957219 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff188836e3a71cf5bd5eef1237f2d1a08d130004535cc34d6805f89bfb231d20\": container with ID starting with ff188836e3a71cf5bd5eef1237f2d1a08d130004535cc34d6805f89bfb231d20 not found: ID does not exist" containerID="ff188836e3a71cf5bd5eef1237f2d1a08d130004535cc34d6805f89bfb231d20" Feb 19 20:00:31 crc kubenswrapper[4813]: I0219 20:00:31.957260 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff188836e3a71cf5bd5eef1237f2d1a08d130004535cc34d6805f89bfb231d20"} err="failed to get container status \"ff188836e3a71cf5bd5eef1237f2d1a08d130004535cc34d6805f89bfb231d20\": rpc error: code = NotFound desc = could not find container \"ff188836e3a71cf5bd5eef1237f2d1a08d130004535cc34d6805f89bfb231d20\": container with ID starting with ff188836e3a71cf5bd5eef1237f2d1a08d130004535cc34d6805f89bfb231d20 not found: ID does not exist" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.000685 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpfzk\" (UniqueName: \"kubernetes.io/projected/39da5831-4cca-4b21-a816-3ec615b48e80-kube-api-access-mpfzk\") pod \"39da5831-4cca-4b21-a816-3ec615b48e80\" (UID: \"39da5831-4cca-4b21-a816-3ec615b48e80\") " Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.001060 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39da5831-4cca-4b21-a816-3ec615b48e80-config-data\") pod 
\"39da5831-4cca-4b21-a816-3ec615b48e80\" (UID: \"39da5831-4cca-4b21-a816-3ec615b48e80\") " Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.001098 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39da5831-4cca-4b21-a816-3ec615b48e80-combined-ca-bundle\") pod \"39da5831-4cca-4b21-a816-3ec615b48e80\" (UID: \"39da5831-4cca-4b21-a816-3ec615b48e80\") " Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.007524 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39da5831-4cca-4b21-a816-3ec615b48e80-kube-api-access-mpfzk" (OuterVolumeSpecName: "kube-api-access-mpfzk") pod "39da5831-4cca-4b21-a816-3ec615b48e80" (UID: "39da5831-4cca-4b21-a816-3ec615b48e80"). InnerVolumeSpecName "kube-api-access-mpfzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.025145 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39da5831-4cca-4b21-a816-3ec615b48e80-config-data" (OuterVolumeSpecName: "config-data") pod "39da5831-4cca-4b21-a816-3ec615b48e80" (UID: "39da5831-4cca-4b21-a816-3ec615b48e80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.031077 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39da5831-4cca-4b21-a816-3ec615b48e80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39da5831-4cca-4b21-a816-3ec615b48e80" (UID: "39da5831-4cca-4b21-a816-3ec615b48e80"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.103797 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpfzk\" (UniqueName: \"kubernetes.io/projected/39da5831-4cca-4b21-a816-3ec615b48e80-kube-api-access-mpfzk\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.103834 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39da5831-4cca-4b21-a816-3ec615b48e80-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.103848 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39da5831-4cca-4b21-a816-3ec615b48e80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.262101 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.272575 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.280298 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:00:32 crc kubenswrapper[4813]: E0219 20:00:32.280728 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488329c3-4802-41f2-afbf-66ff5389c1a2" containerName="nova-manage" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.280747 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="488329c3-4802-41f2-afbf-66ff5389c1a2" containerName="nova-manage" Feb 19 20:00:32 crc kubenswrapper[4813]: E0219 20:00:32.280786 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39da5831-4cca-4b21-a816-3ec615b48e80" containerName="nova-scheduler-scheduler" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.280795 4813 
state_mem.go:107] "Deleted CPUSet assignment" podUID="39da5831-4cca-4b21-a816-3ec615b48e80" containerName="nova-scheduler-scheduler" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.281031 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="488329c3-4802-41f2-afbf-66ff5389c1a2" containerName="nova-manage" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.281051 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="39da5831-4cca-4b21-a816-3ec615b48e80" containerName="nova-scheduler-scheduler" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.281775 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.289372 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.296603 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.407893 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-config-data\") pod \"nova-scheduler-0\" (UID: \"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.408095 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.408249 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7mh4\" 
(UniqueName: \"kubernetes.io/projected/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-kube-api-access-n7mh4\") pod \"nova-scheduler-0\" (UID: \"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.509526 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7mh4\" (UniqueName: \"kubernetes.io/projected/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-kube-api-access-n7mh4\") pod \"nova-scheduler-0\" (UID: \"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.509647 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-config-data\") pod \"nova-scheduler-0\" (UID: \"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.509721 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.515273 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-config-data\") pod \"nova-scheduler-0\" (UID: \"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.522676 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d\") " 
pod="openstack/nova-scheduler-0" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.526756 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7mh4\" (UniqueName: \"kubernetes.io/projected/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-kube-api-access-n7mh4\") pod \"nova-scheduler-0\" (UID: \"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d\") " pod="openstack/nova-scheduler-0" Feb 19 20:00:32 crc kubenswrapper[4813]: I0219 20:00:32.610029 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:00:33 crc kubenswrapper[4813]: I0219 20:00:33.003130 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:00:33 crc kubenswrapper[4813]: I0219 20:00:33.481696 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39da5831-4cca-4b21-a816-3ec615b48e80" path="/var/lib/kubelet/pods/39da5831-4cca-4b21-a816-3ec615b48e80/volumes" Feb 19 20:00:33 crc kubenswrapper[4813]: I0219 20:00:33.945087 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d","Type":"ContainerStarted","Data":"6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f"} Feb 19 20:00:33 crc kubenswrapper[4813]: I0219 20:00:33.945165 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d","Type":"ContainerStarted","Data":"7f4f2d7261c75fd12b5299e4556e42ddf0dd909b3157bcc5434d9677d332a494"} Feb 19 20:00:33 crc kubenswrapper[4813]: I0219 20:00:33.964785 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.9647449080000001 podStartE2EDuration="1.964744908s" podCreationTimestamp="2026-02-19 20:00:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 20:00:33.962002033 +0000 UTC m=+5453.187442584" watchObservedRunningTime="2026-02-19 20:00:33.964744908 +0000 UTC m=+5453.190185449" Feb 19 20:00:34 crc kubenswrapper[4813]: I0219 20:00:34.956129 4813 generic.go:334] "Generic (PLEG): container finished" podID="51e8cdf9-4bd9-436f-8933-271ef1bef791" containerID="c6a10b20d5807d213f6c61f27a2b93e6656f86063772ac1f5eb1a4e98795c2bc" exitCode=0 Feb 19 20:00:34 crc kubenswrapper[4813]: I0219 20:00:34.956267 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51e8cdf9-4bd9-436f-8933-271ef1bef791","Type":"ContainerDied","Data":"c6a10b20d5807d213f6c61f27a2b93e6656f86063772ac1f5eb1a4e98795c2bc"} Feb 19 20:00:34 crc kubenswrapper[4813]: I0219 20:00:34.957159 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51e8cdf9-4bd9-436f-8933-271ef1bef791","Type":"ContainerDied","Data":"20bbdd74638ccb4ff5f3a3f39e7a2871990108c4a60fccf364ddc1e3b945b5f0"} Feb 19 20:00:34 crc kubenswrapper[4813]: I0219 20:00:34.957177 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20bbdd74638ccb4ff5f3a3f39e7a2871990108c4a60fccf364ddc1e3b945b5f0" Feb 19 20:00:34 crc kubenswrapper[4813]: I0219 20:00:34.984180 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.052808 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e8cdf9-4bd9-436f-8933-271ef1bef791-combined-ca-bundle\") pod \"51e8cdf9-4bd9-436f-8933-271ef1bef791\" (UID: \"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.052916 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e8cdf9-4bd9-436f-8933-271ef1bef791-config-data\") pod \"51e8cdf9-4bd9-436f-8933-271ef1bef791\" (UID: \"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.052971 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51e8cdf9-4bd9-436f-8933-271ef1bef791-logs\") pod \"51e8cdf9-4bd9-436f-8933-271ef1bef791\" (UID: \"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.053066 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bqtv\" (UniqueName: \"kubernetes.io/projected/51e8cdf9-4bd9-436f-8933-271ef1bef791-kube-api-access-5bqtv\") pod \"51e8cdf9-4bd9-436f-8933-271ef1bef791\" (UID: \"51e8cdf9-4bd9-436f-8933-271ef1bef791\") " Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.053536 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51e8cdf9-4bd9-436f-8933-271ef1bef791-logs" (OuterVolumeSpecName: "logs") pod "51e8cdf9-4bd9-436f-8933-271ef1bef791" (UID: "51e8cdf9-4bd9-436f-8933-271ef1bef791"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.054506 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51e8cdf9-4bd9-436f-8933-271ef1bef791-logs\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.060052 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e8cdf9-4bd9-436f-8933-271ef1bef791-kube-api-access-5bqtv" (OuterVolumeSpecName: "kube-api-access-5bqtv") pod "51e8cdf9-4bd9-436f-8933-271ef1bef791" (UID: "51e8cdf9-4bd9-436f-8933-271ef1bef791"). InnerVolumeSpecName "kube-api-access-5bqtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.087439 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51e8cdf9-4bd9-436f-8933-271ef1bef791-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51e8cdf9-4bd9-436f-8933-271ef1bef791" (UID: "51e8cdf9-4bd9-436f-8933-271ef1bef791"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.087844 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51e8cdf9-4bd9-436f-8933-271ef1bef791-config-data" (OuterVolumeSpecName: "config-data") pod "51e8cdf9-4bd9-436f-8933-271ef1bef791" (UID: "51e8cdf9-4bd9-436f-8933-271ef1bef791"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.156316 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e8cdf9-4bd9-436f-8933-271ef1bef791-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.156650 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e8cdf9-4bd9-436f-8933-271ef1bef791-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.156667 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bqtv\" (UniqueName: \"kubernetes.io/projected/51e8cdf9-4bd9-436f-8933-271ef1bef791-kube-api-access-5bqtv\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.951071 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.972536 4813 generic.go:334] "Generic (PLEG): container finished" podID="3545a303-e475-410d-9748-5c14674c5973" containerID="bc10905392e483254745d0e73f39292400a5c4ab7d56832dfbf00a6f80b6e762" exitCode=0 Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.972579 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.972626 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.972821 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3545a303-e475-410d-9748-5c14674c5973","Type":"ContainerDied","Data":"bc10905392e483254745d0e73f39292400a5c4ab7d56832dfbf00a6f80b6e762"} Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.972848 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3545a303-e475-410d-9748-5c14674c5973","Type":"ContainerDied","Data":"1316a23429920407476e11ba74a63001753304d5e4626f66aff4ce3cb69d3056"} Feb 19 20:00:35 crc kubenswrapper[4813]: I0219 20:00:35.972863 4813 scope.go:117] "RemoveContainer" containerID="bc10905392e483254745d0e73f39292400a5c4ab7d56832dfbf00a6f80b6e762" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.003340 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.005120 4813 scope.go:117] "RemoveContainer" containerID="7a56d5a97b0c0579e9a098cab123a6ecfb4da4690b529ffcbf05e3cc5dbd1e8a" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.016134 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.027624 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:00:36 crc kubenswrapper[4813]: E0219 20:00:36.027928 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3545a303-e475-410d-9748-5c14674c5973" containerName="nova-api-log" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.027941 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3545a303-e475-410d-9748-5c14674c5973" containerName="nova-api-log" Feb 19 20:00:36 crc kubenswrapper[4813]: E0219 20:00:36.027955 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3545a303-e475-410d-9748-5c14674c5973" containerName="nova-api-api" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.027961 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3545a303-e475-410d-9748-5c14674c5973" containerName="nova-api-api" Feb 19 20:00:36 crc kubenswrapper[4813]: E0219 20:00:36.027985 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e8cdf9-4bd9-436f-8933-271ef1bef791" containerName="nova-metadata-metadata" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.027991 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e8cdf9-4bd9-436f-8933-271ef1bef791" containerName="nova-metadata-metadata" Feb 19 20:00:36 crc kubenswrapper[4813]: E0219 20:00:36.028080 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e8cdf9-4bd9-436f-8933-271ef1bef791" containerName="nova-metadata-log" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.028088 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e8cdf9-4bd9-436f-8933-271ef1bef791" containerName="nova-metadata-log" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.028247 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e8cdf9-4bd9-436f-8933-271ef1bef791" containerName="nova-metadata-log" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.028258 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e8cdf9-4bd9-436f-8933-271ef1bef791" containerName="nova-metadata-metadata" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.028276 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3545a303-e475-410d-9748-5c14674c5973" containerName="nova-api-api" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.028287 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3545a303-e475-410d-9748-5c14674c5973" containerName="nova-api-log" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.031074 4813 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.034606 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.038884 4813 scope.go:117] "RemoveContainer" containerID="bc10905392e483254745d0e73f39292400a5c4ab7d56832dfbf00a6f80b6e762" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.039062 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:00:36 crc kubenswrapper[4813]: E0219 20:00:36.054965 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc10905392e483254745d0e73f39292400a5c4ab7d56832dfbf00a6f80b6e762\": container with ID starting with bc10905392e483254745d0e73f39292400a5c4ab7d56832dfbf00a6f80b6e762 not found: ID does not exist" containerID="bc10905392e483254745d0e73f39292400a5c4ab7d56832dfbf00a6f80b6e762" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.055018 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc10905392e483254745d0e73f39292400a5c4ab7d56832dfbf00a6f80b6e762"} err="failed to get container status \"bc10905392e483254745d0e73f39292400a5c4ab7d56832dfbf00a6f80b6e762\": rpc error: code = NotFound desc = could not find container \"bc10905392e483254745d0e73f39292400a5c4ab7d56832dfbf00a6f80b6e762\": container with ID starting with bc10905392e483254745d0e73f39292400a5c4ab7d56832dfbf00a6f80b6e762 not found: ID does not exist" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.055043 4813 scope.go:117] "RemoveContainer" containerID="7a56d5a97b0c0579e9a098cab123a6ecfb4da4690b529ffcbf05e3cc5dbd1e8a" Feb 19 20:00:36 crc kubenswrapper[4813]: E0219 20:00:36.055752 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7a56d5a97b0c0579e9a098cab123a6ecfb4da4690b529ffcbf05e3cc5dbd1e8a\": container with ID starting with 7a56d5a97b0c0579e9a098cab123a6ecfb4da4690b529ffcbf05e3cc5dbd1e8a not found: ID does not exist" containerID="7a56d5a97b0c0579e9a098cab123a6ecfb4da4690b529ffcbf05e3cc5dbd1e8a" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.055801 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a56d5a97b0c0579e9a098cab123a6ecfb4da4690b529ffcbf05e3cc5dbd1e8a"} err="failed to get container status \"7a56d5a97b0c0579e9a098cab123a6ecfb4da4690b529ffcbf05e3cc5dbd1e8a\": rpc error: code = NotFound desc = could not find container \"7a56d5a97b0c0579e9a098cab123a6ecfb4da4690b529ffcbf05e3cc5dbd1e8a\": container with ID starting with 7a56d5a97b0c0579e9a098cab123a6ecfb4da4690b529ffcbf05e3cc5dbd1e8a not found: ID does not exist" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.072849 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3545a303-e475-410d-9748-5c14674c5973-logs\") pod \"3545a303-e475-410d-9748-5c14674c5973\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.072969 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3545a303-e475-410d-9748-5c14674c5973-combined-ca-bundle\") pod \"3545a303-e475-410d-9748-5c14674c5973\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.073028 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wsv2\" (UniqueName: \"kubernetes.io/projected/3545a303-e475-410d-9748-5c14674c5973-kube-api-access-9wsv2\") pod \"3545a303-e475-410d-9748-5c14674c5973\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.073183 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3545a303-e475-410d-9748-5c14674c5973-config-data\") pod \"3545a303-e475-410d-9748-5c14674c5973\" (UID: \"3545a303-e475-410d-9748-5c14674c5973\") " Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.073654 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3545a303-e475-410d-9748-5c14674c5973-logs" (OuterVolumeSpecName: "logs") pod "3545a303-e475-410d-9748-5c14674c5973" (UID: "3545a303-e475-410d-9748-5c14674c5973"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.078207 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3545a303-e475-410d-9748-5c14674c5973-kube-api-access-9wsv2" (OuterVolumeSpecName: "kube-api-access-9wsv2") pod "3545a303-e475-410d-9748-5c14674c5973" (UID: "3545a303-e475-410d-9748-5c14674c5973"). InnerVolumeSpecName "kube-api-access-9wsv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.096183 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3545a303-e475-410d-9748-5c14674c5973-config-data" (OuterVolumeSpecName: "config-data") pod "3545a303-e475-410d-9748-5c14674c5973" (UID: "3545a303-e475-410d-9748-5c14674c5973"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.099777 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3545a303-e475-410d-9748-5c14674c5973-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3545a303-e475-410d-9748-5c14674c5973" (UID: "3545a303-e475-410d-9748-5c14674c5973"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.175298 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41dfb00f-c637-43b9-91c2-a630d5f33b84-config-data\") pod \"nova-metadata-0\" (UID: \"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " pod="openstack/nova-metadata-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.175377 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41dfb00f-c637-43b9-91c2-a630d5f33b84-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " pod="openstack/nova-metadata-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.175451 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grhwm\" (UniqueName: \"kubernetes.io/projected/41dfb00f-c637-43b9-91c2-a630d5f33b84-kube-api-access-grhwm\") pod \"nova-metadata-0\" (UID: \"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " pod="openstack/nova-metadata-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.175839 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41dfb00f-c637-43b9-91c2-a630d5f33b84-logs\") pod \"nova-metadata-0\" (UID: \"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " pod="openstack/nova-metadata-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.176115 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3545a303-e475-410d-9748-5c14674c5973-logs\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.176172 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3545a303-e475-410d-9748-5c14674c5973-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.176184 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wsv2\" (UniqueName: \"kubernetes.io/projected/3545a303-e475-410d-9748-5c14674c5973-kube-api-access-9wsv2\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.176195 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3545a303-e475-410d-9748-5c14674c5973-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.277381 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grhwm\" (UniqueName: \"kubernetes.io/projected/41dfb00f-c637-43b9-91c2-a630d5f33b84-kube-api-access-grhwm\") pod \"nova-metadata-0\" (UID: \"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " pod="openstack/nova-metadata-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.277548 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41dfb00f-c637-43b9-91c2-a630d5f33b84-logs\") pod \"nova-metadata-0\" (UID: \"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " pod="openstack/nova-metadata-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.277620 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41dfb00f-c637-43b9-91c2-a630d5f33b84-config-data\") pod \"nova-metadata-0\" (UID: \"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " pod="openstack/nova-metadata-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.277660 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41dfb00f-c637-43b9-91c2-a630d5f33b84-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " pod="openstack/nova-metadata-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.278043 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41dfb00f-c637-43b9-91c2-a630d5f33b84-logs\") pod \"nova-metadata-0\" (UID: \"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " pod="openstack/nova-metadata-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.281386 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41dfb00f-c637-43b9-91c2-a630d5f33b84-config-data\") pod \"nova-metadata-0\" (UID: \"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " pod="openstack/nova-metadata-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.281520 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41dfb00f-c637-43b9-91c2-a630d5f33b84-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " pod="openstack/nova-metadata-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.295167 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grhwm\" (UniqueName: \"kubernetes.io/projected/41dfb00f-c637-43b9-91c2-a630d5f33b84-kube-api-access-grhwm\") pod \"nova-metadata-0\" (UID: \"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " pod="openstack/nova-metadata-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.313194 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.328430 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.343425 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 
20:00:36.345298 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.348122 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.358843 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.379708 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.481706 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7qpj\" (UniqueName: \"kubernetes.io/projected/923da541-874b-47de-8b7e-443eeece9268-kube-api-access-v7qpj\") pod \"nova-api-0\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " pod="openstack/nova-api-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.481986 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923da541-874b-47de-8b7e-443eeece9268-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " pod="openstack/nova-api-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.482216 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/923da541-874b-47de-8b7e-443eeece9268-config-data\") pod \"nova-api-0\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " pod="openstack/nova-api-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.482339 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/923da541-874b-47de-8b7e-443eeece9268-logs\") pod 
\"nova-api-0\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " pod="openstack/nova-api-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.584430 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7qpj\" (UniqueName: \"kubernetes.io/projected/923da541-874b-47de-8b7e-443eeece9268-kube-api-access-v7qpj\") pod \"nova-api-0\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " pod="openstack/nova-api-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.584509 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923da541-874b-47de-8b7e-443eeece9268-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " pod="openstack/nova-api-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.584554 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/923da541-874b-47de-8b7e-443eeece9268-config-data\") pod \"nova-api-0\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " pod="openstack/nova-api-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.584588 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/923da541-874b-47de-8b7e-443eeece9268-logs\") pod \"nova-api-0\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " pod="openstack/nova-api-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.585067 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/923da541-874b-47de-8b7e-443eeece9268-logs\") pod \"nova-api-0\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " pod="openstack/nova-api-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.589076 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/923da541-874b-47de-8b7e-443eeece9268-config-data\") pod \"nova-api-0\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " pod="openstack/nova-api-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.593015 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923da541-874b-47de-8b7e-443eeece9268-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " pod="openstack/nova-api-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.602084 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7qpj\" (UniqueName: \"kubernetes.io/projected/923da541-874b-47de-8b7e-443eeece9268-kube-api-access-v7qpj\") pod \"nova-api-0\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " pod="openstack/nova-api-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.679945 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.847830 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:00:36 crc kubenswrapper[4813]: I0219 20:00:36.983341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41dfb00f-c637-43b9-91c2-a630d5f33b84","Type":"ContainerStarted","Data":"347ec9f624c6b277029cd0e7c9b9b3ebf5196efa2fedfb5827939426962b9e41"} Feb 19 20:00:37 crc kubenswrapper[4813]: I0219 20:00:37.163088 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:00:37 crc kubenswrapper[4813]: W0219 20:00:37.171379 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod923da541_874b_47de_8b7e_443eeece9268.slice/crio-d010e4237008d160ff87f8402c5ad4b4786bee387d7297341d2d6a410355ab69 WatchSource:0}: Error finding container 
d010e4237008d160ff87f8402c5ad4b4786bee387d7297341d2d6a410355ab69: Status 404 returned error can't find the container with id d010e4237008d160ff87f8402c5ad4b4786bee387d7297341d2d6a410355ab69 Feb 19 20:00:37 crc kubenswrapper[4813]: I0219 20:00:37.481929 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3545a303-e475-410d-9748-5c14674c5973" path="/var/lib/kubelet/pods/3545a303-e475-410d-9748-5c14674c5973/volumes" Feb 19 20:00:37 crc kubenswrapper[4813]: I0219 20:00:37.482597 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51e8cdf9-4bd9-436f-8933-271ef1bef791" path="/var/lib/kubelet/pods/51e8cdf9-4bd9-436f-8933-271ef1bef791/volumes" Feb 19 20:00:37 crc kubenswrapper[4813]: I0219 20:00:37.610922 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 20:00:37 crc kubenswrapper[4813]: I0219 20:00:37.993677 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41dfb00f-c637-43b9-91c2-a630d5f33b84","Type":"ContainerStarted","Data":"3bdad82e87f13afb3e577d1fc6cad2dac390693761af7456c974c762f5ec89aa"} Feb 19 20:00:37 crc kubenswrapper[4813]: I0219 20:00:37.994071 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41dfb00f-c637-43b9-91c2-a630d5f33b84","Type":"ContainerStarted","Data":"34700025615c6f6b10ac5f8a5519c464f983163af57ab127e3b4dd3982244bf1"} Feb 19 20:00:37 crc kubenswrapper[4813]: I0219 20:00:37.995898 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"923da541-874b-47de-8b7e-443eeece9268","Type":"ContainerStarted","Data":"8bfbcd775538911d1d8ea0fe78a7ed3d37f283ce3ed720735d8133cd8235b7c8"} Feb 19 20:00:37 crc kubenswrapper[4813]: I0219 20:00:37.996005 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"923da541-874b-47de-8b7e-443eeece9268","Type":"ContainerStarted","Data":"e6dc371266b55f936fb9bfce726adca44c75d9cd0cf96f431ed1281623a55cc6"} Feb 19 20:00:37 crc kubenswrapper[4813]: I0219 20:00:37.996027 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"923da541-874b-47de-8b7e-443eeece9268","Type":"ContainerStarted","Data":"d010e4237008d160ff87f8402c5ad4b4786bee387d7297341d2d6a410355ab69"} Feb 19 20:00:38 crc kubenswrapper[4813]: I0219 20:00:38.025605 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.025587442 podStartE2EDuration="3.025587442s" podCreationTimestamp="2026-02-19 20:00:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:00:38.016398088 +0000 UTC m=+5457.241838629" watchObservedRunningTime="2026-02-19 20:00:38.025587442 +0000 UTC m=+5457.251027993" Feb 19 20:00:38 crc kubenswrapper[4813]: I0219 20:00:38.037282 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.037264623 podStartE2EDuration="2.037264623s" podCreationTimestamp="2026-02-19 20:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:00:38.032448054 +0000 UTC m=+5457.257888595" watchObservedRunningTime="2026-02-19 20:00:38.037264623 +0000 UTC m=+5457.262705164" Feb 19 20:00:41 crc kubenswrapper[4813]: I0219 20:00:41.360217 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 20:00:41 crc kubenswrapper[4813]: I0219 20:00:41.360567 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 20:00:42 crc kubenswrapper[4813]: I0219 20:00:42.610587 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 20:00:42 crc kubenswrapper[4813]: I0219 20:00:42.642070 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 20:00:43 crc kubenswrapper[4813]: I0219 20:00:43.066745 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 20:00:46 crc kubenswrapper[4813]: I0219 20:00:46.360447 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 20:00:46 crc kubenswrapper[4813]: I0219 20:00:46.360989 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 20:00:46 crc kubenswrapper[4813]: I0219 20:00:46.680067 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 20:00:46 crc kubenswrapper[4813]: I0219 20:00:46.684464 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 20:00:47 crc kubenswrapper[4813]: I0219 20:00:47.442367 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="41dfb00f-c637-43b9-91c2-a630d5f33b84" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.68:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:00:47 crc kubenswrapper[4813]: I0219 20:00:47.442362 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="41dfb00f-c637-43b9-91c2-a630d5f33b84" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.68:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:00:47 crc kubenswrapper[4813]: I0219 20:00:47.762268 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="923da541-874b-47de-8b7e-443eeece9268" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.69:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:00:47 crc kubenswrapper[4813]: I0219 20:00:47.762271 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="923da541-874b-47de-8b7e-443eeece9268" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.69:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:00:52 crc kubenswrapper[4813]: I0219 20:00:52.594051 4813 scope.go:117] "RemoveContainer" containerID="c07a761b9baedcbe79b65b5a3f1a6bf4248aae776a3f436625a13deb47ec85d4" Feb 19 20:00:52 crc kubenswrapper[4813]: I0219 20:00:52.623797 4813 scope.go:117] "RemoveContainer" containerID="79fbfc08b43e363d77777ef16086d4ebd175a49dac210861dc287b9d77e246af" Feb 19 20:00:56 crc kubenswrapper[4813]: I0219 20:00:56.362676 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 20:00:56 crc kubenswrapper[4813]: I0219 20:00:56.363318 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 20:00:56 crc kubenswrapper[4813]: I0219 20:00:56.365477 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 20:00:56 crc kubenswrapper[4813]: I0219 20:00:56.366591 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 20:00:56 crc kubenswrapper[4813]: I0219 20:00:56.684604 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 20:00:56 crc kubenswrapper[4813]: I0219 20:00:56.684903 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 20:00:56 crc kubenswrapper[4813]: I0219 20:00:56.685138 4813 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 20:00:56 crc kubenswrapper[4813]: I0219 20:00:56.688029 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.156135 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.163023 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.387657 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fc545869f-pps9q"] Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.389538 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.414129 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fc545869f-pps9q"] Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.489877 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-ovsdbserver-sb\") pod \"dnsmasq-dns-fc545869f-pps9q\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") " pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.489947 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-dns-svc\") pod \"dnsmasq-dns-fc545869f-pps9q\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") " pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.490017 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-config\") pod \"dnsmasq-dns-fc545869f-pps9q\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") " pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.490065 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-ovsdbserver-nb\") pod \"dnsmasq-dns-fc545869f-pps9q\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") " pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.490107 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w8gd\" (UniqueName: \"kubernetes.io/projected/76e05205-9be5-4146-ae08-63d85351bc1e-kube-api-access-6w8gd\") pod \"dnsmasq-dns-fc545869f-pps9q\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") " pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.591606 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-ovsdbserver-sb\") pod \"dnsmasq-dns-fc545869f-pps9q\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") " pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.591738 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-dns-svc\") pod \"dnsmasq-dns-fc545869f-pps9q\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") " pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.591817 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-config\") pod \"dnsmasq-dns-fc545869f-pps9q\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") " pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.591910 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-ovsdbserver-nb\") pod \"dnsmasq-dns-fc545869f-pps9q\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") " pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.591995 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w8gd\" (UniqueName: \"kubernetes.io/projected/76e05205-9be5-4146-ae08-63d85351bc1e-kube-api-access-6w8gd\") pod \"dnsmasq-dns-fc545869f-pps9q\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") " pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.593253 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-config\") pod \"dnsmasq-dns-fc545869f-pps9q\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") " pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.594155 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-ovsdbserver-nb\") pod \"dnsmasq-dns-fc545869f-pps9q\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") " pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.594396 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-ovsdbserver-sb\") pod \"dnsmasq-dns-fc545869f-pps9q\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") " pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.603138 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-dns-svc\") pod \"dnsmasq-dns-fc545869f-pps9q\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") " pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.612911 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w8gd\" (UniqueName: \"kubernetes.io/projected/76e05205-9be5-4146-ae08-63d85351bc1e-kube-api-access-6w8gd\") pod \"dnsmasq-dns-fc545869f-pps9q\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") " pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:57 crc kubenswrapper[4813]: I0219 20:00:57.707159 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:00:58 crc kubenswrapper[4813]: I0219 20:00:58.239479 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fc545869f-pps9q"] Feb 19 20:00:58 crc kubenswrapper[4813]: W0219 20:00:58.241767 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76e05205_9be5_4146_ae08_63d85351bc1e.slice/crio-b92935516a6338e9591897520b0b129d3835845c346dee98c65624ff52384620 WatchSource:0}: Error finding container b92935516a6338e9591897520b0b129d3835845c346dee98c65624ff52384620: Status 404 returned error can't find the container with id b92935516a6338e9591897520b0b129d3835845c346dee98c65624ff52384620 Feb 19 20:00:59 crc kubenswrapper[4813]: I0219 20:00:59.170371 4813 generic.go:334] "Generic (PLEG): container finished" podID="76e05205-9be5-4146-ae08-63d85351bc1e" containerID="1612cae918c264674584b61eeca86c583d812c717d885459e16a3fc9eeee83b6" exitCode=0 Feb 19 20:00:59 crc kubenswrapper[4813]: I0219 20:00:59.170472 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fc545869f-pps9q" event={"ID":"76e05205-9be5-4146-ae08-63d85351bc1e","Type":"ContainerDied","Data":"1612cae918c264674584b61eeca86c583d812c717d885459e16a3fc9eeee83b6"} Feb 19 20:00:59 crc kubenswrapper[4813]: I0219 20:00:59.170684 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fc545869f-pps9q" event={"ID":"76e05205-9be5-4146-ae08-63d85351bc1e","Type":"ContainerStarted","Data":"b92935516a6338e9591897520b0b129d3835845c346dee98c65624ff52384620"} Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.143085 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29525521-ldjmt"] Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.144569 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.161619 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525521-ldjmt"] Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.184612 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fc545869f-pps9q" event={"ID":"76e05205-9be5-4146-ae08-63d85351bc1e","Type":"ContainerStarted","Data":"5fb0fa461bbd792337387ffacd7e8d4210043c3346b5c59b17550cfcd09d42e0"} Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.185828 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.206622 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fc545869f-pps9q" podStartSLOduration=3.20660344 podStartE2EDuration="3.20660344s" podCreationTimestamp="2026-02-19 20:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:01:00.204415612 +0000 UTC m=+5479.429856163" watchObservedRunningTime="2026-02-19 20:01:00.20660344 +0000 UTC m=+5479.432043971" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.242398 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-combined-ca-bundle\") pod \"keystone-cron-29525521-ldjmt\" (UID: \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.242479 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r7hs\" (UniqueName: \"kubernetes.io/projected/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-kube-api-access-5r7hs\") pod 
\"keystone-cron-29525521-ldjmt\" (UID: \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.242531 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-fernet-keys\") pod \"keystone-cron-29525521-ldjmt\" (UID: \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.242591 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-config-data\") pod \"keystone-cron-29525521-ldjmt\" (UID: \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.343642 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r7hs\" (UniqueName: \"kubernetes.io/projected/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-kube-api-access-5r7hs\") pod \"keystone-cron-29525521-ldjmt\" (UID: \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.343883 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-fernet-keys\") pod \"keystone-cron-29525521-ldjmt\" (UID: \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.344022 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-config-data\") pod \"keystone-cron-29525521-ldjmt\" (UID: 
\"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.344175 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-combined-ca-bundle\") pod \"keystone-cron-29525521-ldjmt\" (UID: \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.349202 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-fernet-keys\") pod \"keystone-cron-29525521-ldjmt\" (UID: \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.349844 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-combined-ca-bundle\") pod \"keystone-cron-29525521-ldjmt\" (UID: \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.360364 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-config-data\") pod \"keystone-cron-29525521-ldjmt\" (UID: \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.362530 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r7hs\" (UniqueName: \"kubernetes.io/projected/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-kube-api-access-5r7hs\") pod \"keystone-cron-29525521-ldjmt\" (UID: \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " 
pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.465026 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:00 crc kubenswrapper[4813]: I0219 20:01:00.920225 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525521-ldjmt"] Feb 19 20:01:00 crc kubenswrapper[4813]: W0219 20:01:00.932931 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87b3919b_c8b6_41e5_aac7_96b6eb3359bc.slice/crio-0dee99c27324f7efdb9df36cfd0959b772d22b856dff770ca90079c9775b2646 WatchSource:0}: Error finding container 0dee99c27324f7efdb9df36cfd0959b772d22b856dff770ca90079c9775b2646: Status 404 returned error can't find the container with id 0dee99c27324f7efdb9df36cfd0959b772d22b856dff770ca90079c9775b2646 Feb 19 20:01:01 crc kubenswrapper[4813]: I0219 20:01:01.200445 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525521-ldjmt" event={"ID":"87b3919b-c8b6-41e5-aac7-96b6eb3359bc","Type":"ContainerStarted","Data":"9b0cad3029eb38ac9ee6aca3718e45cdba1e1bda69648ea4a8ee1b45a8c65b6b"} Feb 19 20:01:01 crc kubenswrapper[4813]: I0219 20:01:01.200856 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525521-ldjmt" event={"ID":"87b3919b-c8b6-41e5-aac7-96b6eb3359bc","Type":"ContainerStarted","Data":"0dee99c27324f7efdb9df36cfd0959b772d22b856dff770ca90079c9775b2646"} Feb 19 20:01:01 crc kubenswrapper[4813]: I0219 20:01:01.223245 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29525521-ldjmt" podStartSLOduration=1.223224374 podStartE2EDuration="1.223224374s" podCreationTimestamp="2026-02-19 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 20:01:01.221683647 +0000 UTC m=+5480.447124208" watchObservedRunningTime="2026-02-19 20:01:01.223224374 +0000 UTC m=+5480.448664935" Feb 19 20:01:03 crc kubenswrapper[4813]: I0219 20:01:03.214930 4813 generic.go:334] "Generic (PLEG): container finished" podID="87b3919b-c8b6-41e5-aac7-96b6eb3359bc" containerID="9b0cad3029eb38ac9ee6aca3718e45cdba1e1bda69648ea4a8ee1b45a8c65b6b" exitCode=0 Feb 19 20:01:03 crc kubenswrapper[4813]: I0219 20:01:03.215162 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525521-ldjmt" event={"ID":"87b3919b-c8b6-41e5-aac7-96b6eb3359bc","Type":"ContainerDied","Data":"9b0cad3029eb38ac9ee6aca3718e45cdba1e1bda69648ea4a8ee1b45a8c65b6b"} Feb 19 20:01:04 crc kubenswrapper[4813]: I0219 20:01:04.614877 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:04 crc kubenswrapper[4813]: I0219 20:01:04.622026 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r7hs\" (UniqueName: \"kubernetes.io/projected/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-kube-api-access-5r7hs\") pod \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\" (UID: \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " Feb 19 20:01:04 crc kubenswrapper[4813]: I0219 20:01:04.622216 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-config-data\") pod \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\" (UID: \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " Feb 19 20:01:04 crc kubenswrapper[4813]: I0219 20:01:04.622308 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-combined-ca-bundle\") pod \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\" (UID: \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " Feb 
19 20:01:04 crc kubenswrapper[4813]: I0219 20:01:04.622381 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-fernet-keys\") pod \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\" (UID: \"87b3919b-c8b6-41e5-aac7-96b6eb3359bc\") " Feb 19 20:01:04 crc kubenswrapper[4813]: I0219 20:01:04.629127 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "87b3919b-c8b6-41e5-aac7-96b6eb3359bc" (UID: "87b3919b-c8b6-41e5-aac7-96b6eb3359bc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:04 crc kubenswrapper[4813]: I0219 20:01:04.633986 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-kube-api-access-5r7hs" (OuterVolumeSpecName: "kube-api-access-5r7hs") pod "87b3919b-c8b6-41e5-aac7-96b6eb3359bc" (UID: "87b3919b-c8b6-41e5-aac7-96b6eb3359bc"). InnerVolumeSpecName "kube-api-access-5r7hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:01:04 crc kubenswrapper[4813]: I0219 20:01:04.657143 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87b3919b-c8b6-41e5-aac7-96b6eb3359bc" (UID: "87b3919b-c8b6-41e5-aac7-96b6eb3359bc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:04 crc kubenswrapper[4813]: I0219 20:01:04.672311 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-config-data" (OuterVolumeSpecName: "config-data") pod "87b3919b-c8b6-41e5-aac7-96b6eb3359bc" (UID: "87b3919b-c8b6-41e5-aac7-96b6eb3359bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:04 crc kubenswrapper[4813]: I0219 20:01:04.724744 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:04 crc kubenswrapper[4813]: I0219 20:01:04.724785 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r7hs\" (UniqueName: \"kubernetes.io/projected/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-kube-api-access-5r7hs\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:04 crc kubenswrapper[4813]: I0219 20:01:04.724798 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:04 crc kubenswrapper[4813]: I0219 20:01:04.724809 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b3919b-c8b6-41e5-aac7-96b6eb3359bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:05 crc kubenswrapper[4813]: I0219 20:01:05.233106 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525521-ldjmt" event={"ID":"87b3919b-c8b6-41e5-aac7-96b6eb3359bc","Type":"ContainerDied","Data":"0dee99c27324f7efdb9df36cfd0959b772d22b856dff770ca90079c9775b2646"} Feb 19 20:01:05 crc kubenswrapper[4813]: I0219 20:01:05.233166 4813 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="0dee99c27324f7efdb9df36cfd0959b772d22b856dff770ca90079c9775b2646" Feb 19 20:01:05 crc kubenswrapper[4813]: I0219 20:01:05.233250 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525521-ldjmt" Feb 19 20:01:07 crc kubenswrapper[4813]: I0219 20:01:07.708134 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fc545869f-pps9q" Feb 19 20:01:07 crc kubenswrapper[4813]: I0219 20:01:07.781549 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6464f94485-cmrjz"] Feb 19 20:01:07 crc kubenswrapper[4813]: I0219 20:01:07.782579 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6464f94485-cmrjz" podUID="333f58de-9f17-415d-9667-3f0a8b1d0bae" containerName="dnsmasq-dns" containerID="cri-o://f4556a47b76c9f9a397dd7ac0cae211a861b23f84208b1ca0171a7dd87efd356" gracePeriod=10 Feb 19 20:01:08 crc kubenswrapper[4813]: E0219 20:01:08.096383 4813 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.69:49928->38.102.83.69:38045: write tcp 38.102.83.69:49928->38.102.83.69:38045: write: broken pipe Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.259579 4813 generic.go:334] "Generic (PLEG): container finished" podID="333f58de-9f17-415d-9667-3f0a8b1d0bae" containerID="f4556a47b76c9f9a397dd7ac0cae211a861b23f84208b1ca0171a7dd87efd356" exitCode=0 Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.259687 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6464f94485-cmrjz" event={"ID":"333f58de-9f17-415d-9667-3f0a8b1d0bae","Type":"ContainerDied","Data":"f4556a47b76c9f9a397dd7ac0cae211a861b23f84208b1ca0171a7dd87efd356"} Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.260101 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6464f94485-cmrjz" 
event={"ID":"333f58de-9f17-415d-9667-3f0a8b1d0bae","Type":"ContainerDied","Data":"b0365dbf2a85b51892292d9bfcdb066fd1d66c1b32415ed86e56fee4b645781c"} Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.260126 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0365dbf2a85b51892292d9bfcdb066fd1d66c1b32415ed86e56fee4b645781c" Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.294554 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.490275 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-ovsdbserver-sb\") pod \"333f58de-9f17-415d-9667-3f0a8b1d0bae\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.490350 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-ovsdbserver-nb\") pod \"333f58de-9f17-415d-9667-3f0a8b1d0bae\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.490424 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bt78\" (UniqueName: \"kubernetes.io/projected/333f58de-9f17-415d-9667-3f0a8b1d0bae-kube-api-access-9bt78\") pod \"333f58de-9f17-415d-9667-3f0a8b1d0bae\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.490500 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-dns-svc\") pod \"333f58de-9f17-415d-9667-3f0a8b1d0bae\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " Feb 19 20:01:08 crc 
kubenswrapper[4813]: I0219 20:01:08.490528 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-config\") pod \"333f58de-9f17-415d-9667-3f0a8b1d0bae\" (UID: \"333f58de-9f17-415d-9667-3f0a8b1d0bae\") " Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.502306 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/333f58de-9f17-415d-9667-3f0a8b1d0bae-kube-api-access-9bt78" (OuterVolumeSpecName: "kube-api-access-9bt78") pod "333f58de-9f17-415d-9667-3f0a8b1d0bae" (UID: "333f58de-9f17-415d-9667-3f0a8b1d0bae"). InnerVolumeSpecName "kube-api-access-9bt78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.540731 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-config" (OuterVolumeSpecName: "config") pod "333f58de-9f17-415d-9667-3f0a8b1d0bae" (UID: "333f58de-9f17-415d-9667-3f0a8b1d0bae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.543558 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "333f58de-9f17-415d-9667-3f0a8b1d0bae" (UID: "333f58de-9f17-415d-9667-3f0a8b1d0bae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.545146 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "333f58de-9f17-415d-9667-3f0a8b1d0bae" (UID: "333f58de-9f17-415d-9667-3f0a8b1d0bae"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.561509 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "333f58de-9f17-415d-9667-3f0a8b1d0bae" (UID: "333f58de-9f17-415d-9667-3f0a8b1d0bae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.592227 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.592273 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.592287 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bt78\" (UniqueName: \"kubernetes.io/projected/333f58de-9f17-415d-9667-3f0a8b1d0bae-kube-api-access-9bt78\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.592300 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:08 crc kubenswrapper[4813]: I0219 20:01:08.592315 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/333f58de-9f17-415d-9667-3f0a8b1d0bae-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.266657 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6464f94485-cmrjz" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.309703 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6464f94485-cmrjz"] Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.319924 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6464f94485-cmrjz"] Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.483266 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="333f58de-9f17-415d-9667-3f0a8b1d0bae" path="/var/lib/kubelet/pods/333f58de-9f17-415d-9667-3f0a8b1d0bae/volumes" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.526101 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-jmjlq"] Feb 19 20:01:09 crc kubenswrapper[4813]: E0219 20:01:09.526539 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="333f58de-9f17-415d-9667-3f0a8b1d0bae" containerName="dnsmasq-dns" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.526562 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="333f58de-9f17-415d-9667-3f0a8b1d0bae" containerName="dnsmasq-dns" Feb 19 20:01:09 crc kubenswrapper[4813]: E0219 20:01:09.526596 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b3919b-c8b6-41e5-aac7-96b6eb3359bc" containerName="keystone-cron" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.526602 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b3919b-c8b6-41e5-aac7-96b6eb3359bc" containerName="keystone-cron" Feb 19 20:01:09 crc kubenswrapper[4813]: E0219 20:01:09.526614 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="333f58de-9f17-415d-9667-3f0a8b1d0bae" containerName="init" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.526621 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="333f58de-9f17-415d-9667-3f0a8b1d0bae" containerName="init" Feb 19 20:01:09 crc 
kubenswrapper[4813]: I0219 20:01:09.526793 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b3919b-c8b6-41e5-aac7-96b6eb3359bc" containerName="keystone-cron" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.526813 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="333f58de-9f17-415d-9667-3f0a8b1d0bae" containerName="dnsmasq-dns" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.527540 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jmjlq" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.536640 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jmjlq"] Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.611253 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-340d-account-create-update-d2bdn"] Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.612735 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-340d-account-create-update-d2bdn" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.614382 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.614483 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/198bf5a5-5e44-4440-9d40-9dd8fb723007-operator-scripts\") pod \"cinder-db-create-jmjlq\" (UID: \"198bf5a5-5e44-4440-9d40-9dd8fb723007\") " pod="openstack/cinder-db-create-jmjlq" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.614591 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtbnl\" (UniqueName: \"kubernetes.io/projected/198bf5a5-5e44-4440-9d40-9dd8fb723007-kube-api-access-gtbnl\") pod \"cinder-db-create-jmjlq\" (UID: \"198bf5a5-5e44-4440-9d40-9dd8fb723007\") " pod="openstack/cinder-db-create-jmjlq" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.620507 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-340d-account-create-update-d2bdn"] Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.715537 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/198bf5a5-5e44-4440-9d40-9dd8fb723007-operator-scripts\") pod \"cinder-db-create-jmjlq\" (UID: \"198bf5a5-5e44-4440-9d40-9dd8fb723007\") " pod="openstack/cinder-db-create-jmjlq" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.715622 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd4dc\" (UniqueName: \"kubernetes.io/projected/580be06f-69cc-4178-9781-86215efaffd0-kube-api-access-rd4dc\") pod \"cinder-340d-account-create-update-d2bdn\" (UID: \"580be06f-69cc-4178-9781-86215efaffd0\") " 
pod="openstack/cinder-340d-account-create-update-d2bdn" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.715690 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtbnl\" (UniqueName: \"kubernetes.io/projected/198bf5a5-5e44-4440-9d40-9dd8fb723007-kube-api-access-gtbnl\") pod \"cinder-db-create-jmjlq\" (UID: \"198bf5a5-5e44-4440-9d40-9dd8fb723007\") " pod="openstack/cinder-db-create-jmjlq" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.715710 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/580be06f-69cc-4178-9781-86215efaffd0-operator-scripts\") pod \"cinder-340d-account-create-update-d2bdn\" (UID: \"580be06f-69cc-4178-9781-86215efaffd0\") " pod="openstack/cinder-340d-account-create-update-d2bdn" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.716224 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/198bf5a5-5e44-4440-9d40-9dd8fb723007-operator-scripts\") pod \"cinder-db-create-jmjlq\" (UID: \"198bf5a5-5e44-4440-9d40-9dd8fb723007\") " pod="openstack/cinder-db-create-jmjlq" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.734719 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtbnl\" (UniqueName: \"kubernetes.io/projected/198bf5a5-5e44-4440-9d40-9dd8fb723007-kube-api-access-gtbnl\") pod \"cinder-db-create-jmjlq\" (UID: \"198bf5a5-5e44-4440-9d40-9dd8fb723007\") " pod="openstack/cinder-db-create-jmjlq" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.816257 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd4dc\" (UniqueName: \"kubernetes.io/projected/580be06f-69cc-4178-9781-86215efaffd0-kube-api-access-rd4dc\") pod \"cinder-340d-account-create-update-d2bdn\" (UID: 
\"580be06f-69cc-4178-9781-86215efaffd0\") " pod="openstack/cinder-340d-account-create-update-d2bdn" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.816366 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/580be06f-69cc-4178-9781-86215efaffd0-operator-scripts\") pod \"cinder-340d-account-create-update-d2bdn\" (UID: \"580be06f-69cc-4178-9781-86215efaffd0\") " pod="openstack/cinder-340d-account-create-update-d2bdn" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.817092 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/580be06f-69cc-4178-9781-86215efaffd0-operator-scripts\") pod \"cinder-340d-account-create-update-d2bdn\" (UID: \"580be06f-69cc-4178-9781-86215efaffd0\") " pod="openstack/cinder-340d-account-create-update-d2bdn" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.832211 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd4dc\" (UniqueName: \"kubernetes.io/projected/580be06f-69cc-4178-9781-86215efaffd0-kube-api-access-rd4dc\") pod \"cinder-340d-account-create-update-d2bdn\" (UID: \"580be06f-69cc-4178-9781-86215efaffd0\") " pod="openstack/cinder-340d-account-create-update-d2bdn" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.851337 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jmjlq" Feb 19 20:01:09 crc kubenswrapper[4813]: I0219 20:01:09.931360 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-340d-account-create-update-d2bdn" Feb 19 20:01:10 crc kubenswrapper[4813]: I0219 20:01:10.309578 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jmjlq"] Feb 19 20:01:10 crc kubenswrapper[4813]: I0219 20:01:10.404377 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-340d-account-create-update-d2bdn"] Feb 19 20:01:10 crc kubenswrapper[4813]: W0219 20:01:10.407943 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod580be06f_69cc_4178_9781_86215efaffd0.slice/crio-5d984b66fb5e970c4ab2b3d10a41a38d0f513821cb9185363266d5d9ad985de8 WatchSource:0}: Error finding container 5d984b66fb5e970c4ab2b3d10a41a38d0f513821cb9185363266d5d9ad985de8: Status 404 returned error can't find the container with id 5d984b66fb5e970c4ab2b3d10a41a38d0f513821cb9185363266d5d9ad985de8 Feb 19 20:01:11 crc kubenswrapper[4813]: I0219 20:01:11.284326 4813 generic.go:334] "Generic (PLEG): container finished" podID="198bf5a5-5e44-4440-9d40-9dd8fb723007" containerID="0d710f74bfd3a2aad96505f2af271775858cab9f852f5a4ba74b0cdb6c8a3421" exitCode=0 Feb 19 20:01:11 crc kubenswrapper[4813]: I0219 20:01:11.284486 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jmjlq" event={"ID":"198bf5a5-5e44-4440-9d40-9dd8fb723007","Type":"ContainerDied","Data":"0d710f74bfd3a2aad96505f2af271775858cab9f852f5a4ba74b0cdb6c8a3421"} Feb 19 20:01:11 crc kubenswrapper[4813]: I0219 20:01:11.284576 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jmjlq" event={"ID":"198bf5a5-5e44-4440-9d40-9dd8fb723007","Type":"ContainerStarted","Data":"638e3fb5b5979dde02992feed5535b96b1b74b803a17f01ef4b462cb90d742dd"} Feb 19 20:01:11 crc kubenswrapper[4813]: I0219 20:01:11.286221 4813 generic.go:334] "Generic (PLEG): container finished" podID="580be06f-69cc-4178-9781-86215efaffd0" 
containerID="5aa21d01f81a58c443cd77b9320acdede47f279102a11762b6b443b9107d8065" exitCode=0 Feb 19 20:01:11 crc kubenswrapper[4813]: I0219 20:01:11.286265 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-340d-account-create-update-d2bdn" event={"ID":"580be06f-69cc-4178-9781-86215efaffd0","Type":"ContainerDied","Data":"5aa21d01f81a58c443cd77b9320acdede47f279102a11762b6b443b9107d8065"} Feb 19 20:01:11 crc kubenswrapper[4813]: I0219 20:01:11.286293 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-340d-account-create-update-d2bdn" event={"ID":"580be06f-69cc-4178-9781-86215efaffd0","Type":"ContainerStarted","Data":"5d984b66fb5e970c4ab2b3d10a41a38d0f513821cb9185363266d5d9ad985de8"} Feb 19 20:01:12 crc kubenswrapper[4813]: I0219 20:01:12.703199 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-340d-account-create-update-d2bdn" Feb 19 20:01:12 crc kubenswrapper[4813]: I0219 20:01:12.712137 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jmjlq" Feb 19 20:01:12 crc kubenswrapper[4813]: I0219 20:01:12.871170 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/198bf5a5-5e44-4440-9d40-9dd8fb723007-operator-scripts\") pod \"198bf5a5-5e44-4440-9d40-9dd8fb723007\" (UID: \"198bf5a5-5e44-4440-9d40-9dd8fb723007\") " Feb 19 20:01:12 crc kubenswrapper[4813]: I0219 20:01:12.871210 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/198bf5a5-5e44-4440-9d40-9dd8fb723007-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "198bf5a5-5e44-4440-9d40-9dd8fb723007" (UID: "198bf5a5-5e44-4440-9d40-9dd8fb723007"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:01:12 crc kubenswrapper[4813]: I0219 20:01:12.871303 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtbnl\" (UniqueName: \"kubernetes.io/projected/198bf5a5-5e44-4440-9d40-9dd8fb723007-kube-api-access-gtbnl\") pod \"198bf5a5-5e44-4440-9d40-9dd8fb723007\" (UID: \"198bf5a5-5e44-4440-9d40-9dd8fb723007\") " Feb 19 20:01:12 crc kubenswrapper[4813]: I0219 20:01:12.871467 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/580be06f-69cc-4178-9781-86215efaffd0-operator-scripts\") pod \"580be06f-69cc-4178-9781-86215efaffd0\" (UID: \"580be06f-69cc-4178-9781-86215efaffd0\") " Feb 19 20:01:12 crc kubenswrapper[4813]: I0219 20:01:12.871614 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd4dc\" (UniqueName: \"kubernetes.io/projected/580be06f-69cc-4178-9781-86215efaffd0-kube-api-access-rd4dc\") pod \"580be06f-69cc-4178-9781-86215efaffd0\" (UID: \"580be06f-69cc-4178-9781-86215efaffd0\") " Feb 19 20:01:12 crc kubenswrapper[4813]: I0219 20:01:12.872123 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580be06f-69cc-4178-9781-86215efaffd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "580be06f-69cc-4178-9781-86215efaffd0" (UID: "580be06f-69cc-4178-9781-86215efaffd0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:01:12 crc kubenswrapper[4813]: I0219 20:01:12.872557 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/198bf5a5-5e44-4440-9d40-9dd8fb723007-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:12 crc kubenswrapper[4813]: I0219 20:01:12.872596 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/580be06f-69cc-4178-9781-86215efaffd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:12 crc kubenswrapper[4813]: I0219 20:01:12.877165 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580be06f-69cc-4178-9781-86215efaffd0-kube-api-access-rd4dc" (OuterVolumeSpecName: "kube-api-access-rd4dc") pod "580be06f-69cc-4178-9781-86215efaffd0" (UID: "580be06f-69cc-4178-9781-86215efaffd0"). InnerVolumeSpecName "kube-api-access-rd4dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:01:12 crc kubenswrapper[4813]: I0219 20:01:12.877878 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/198bf5a5-5e44-4440-9d40-9dd8fb723007-kube-api-access-gtbnl" (OuterVolumeSpecName: "kube-api-access-gtbnl") pod "198bf5a5-5e44-4440-9d40-9dd8fb723007" (UID: "198bf5a5-5e44-4440-9d40-9dd8fb723007"). InnerVolumeSpecName "kube-api-access-gtbnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:01:12 crc kubenswrapper[4813]: I0219 20:01:12.973821 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd4dc\" (UniqueName: \"kubernetes.io/projected/580be06f-69cc-4178-9781-86215efaffd0-kube-api-access-rd4dc\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:12 crc kubenswrapper[4813]: I0219 20:01:12.973859 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtbnl\" (UniqueName: \"kubernetes.io/projected/198bf5a5-5e44-4440-9d40-9dd8fb723007-kube-api-access-gtbnl\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:13 crc kubenswrapper[4813]: I0219 20:01:13.301280 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jmjlq" event={"ID":"198bf5a5-5e44-4440-9d40-9dd8fb723007","Type":"ContainerDied","Data":"638e3fb5b5979dde02992feed5535b96b1b74b803a17f01ef4b462cb90d742dd"} Feb 19 20:01:13 crc kubenswrapper[4813]: I0219 20:01:13.301323 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="638e3fb5b5979dde02992feed5535b96b1b74b803a17f01ef4b462cb90d742dd" Feb 19 20:01:13 crc kubenswrapper[4813]: I0219 20:01:13.301373 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jmjlq" Feb 19 20:01:13 crc kubenswrapper[4813]: I0219 20:01:13.303467 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-340d-account-create-update-d2bdn" event={"ID":"580be06f-69cc-4178-9781-86215efaffd0","Type":"ContainerDied","Data":"5d984b66fb5e970c4ab2b3d10a41a38d0f513821cb9185363266d5d9ad985de8"} Feb 19 20:01:13 crc kubenswrapper[4813]: I0219 20:01:13.303498 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d984b66fb5e970c4ab2b3d10a41a38d0f513821cb9185363266d5d9ad985de8" Feb 19 20:01:13 crc kubenswrapper[4813]: I0219 20:01:13.303541 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-340d-account-create-update-d2bdn" Feb 19 20:01:14 crc kubenswrapper[4813]: I0219 20:01:14.846363 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qhx27"] Feb 19 20:01:14 crc kubenswrapper[4813]: E0219 20:01:14.847164 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="198bf5a5-5e44-4440-9d40-9dd8fb723007" containerName="mariadb-database-create" Feb 19 20:01:14 crc kubenswrapper[4813]: I0219 20:01:14.847180 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="198bf5a5-5e44-4440-9d40-9dd8fb723007" containerName="mariadb-database-create" Feb 19 20:01:14 crc kubenswrapper[4813]: E0219 20:01:14.847194 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580be06f-69cc-4178-9781-86215efaffd0" containerName="mariadb-account-create-update" Feb 19 20:01:14 crc kubenswrapper[4813]: I0219 20:01:14.847202 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="580be06f-69cc-4178-9781-86215efaffd0" containerName="mariadb-account-create-update" Feb 19 20:01:14 crc kubenswrapper[4813]: I0219 20:01:14.847410 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="580be06f-69cc-4178-9781-86215efaffd0" containerName="mariadb-account-create-update" Feb 19 20:01:14 crc kubenswrapper[4813]: I0219 20:01:14.847435 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="198bf5a5-5e44-4440-9d40-9dd8fb723007" containerName="mariadb-database-create" Feb 19 20:01:14 crc kubenswrapper[4813]: I0219 20:01:14.848184 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:14 crc kubenswrapper[4813]: I0219 20:01:14.850099 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-dwvhb" Feb 19 20:01:14 crc kubenswrapper[4813]: I0219 20:01:14.850717 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 20:01:14 crc kubenswrapper[4813]: I0219 20:01:14.853246 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 20:01:14 crc kubenswrapper[4813]: I0219 20:01:14.853645 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qhx27"] Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.006583 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-combined-ca-bundle\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.006642 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-db-sync-config-data\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.006690 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-config-data\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.006770 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-scripts\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.006787 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbntq\" (UniqueName: \"kubernetes.io/projected/29e7a363-18a4-4d92-ac76-7cb3eb644a55-kube-api-access-nbntq\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.006850 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29e7a363-18a4-4d92-ac76-7cb3eb644a55-etc-machine-id\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.108089 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-scripts\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.108152 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbntq\" (UniqueName: \"kubernetes.io/projected/29e7a363-18a4-4d92-ac76-7cb3eb644a55-kube-api-access-nbntq\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.108200 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/29e7a363-18a4-4d92-ac76-7cb3eb644a55-etc-machine-id\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.108255 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-combined-ca-bundle\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.108278 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-db-sync-config-data\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.108306 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-config-data\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.110197 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29e7a363-18a4-4d92-ac76-7cb3eb644a55-etc-machine-id\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.115520 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-config-data\") pod \"cinder-db-sync-qhx27\" (UID: 
\"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.115743 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-db-sync-config-data\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.116273 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-combined-ca-bundle\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.125021 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-scripts\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.132136 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbntq\" (UniqueName: \"kubernetes.io/projected/29e7a363-18a4-4d92-ac76-7cb3eb644a55-kube-api-access-nbntq\") pod \"cinder-db-sync-qhx27\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") " pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.171579 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qhx27" Feb 19 20:01:15 crc kubenswrapper[4813]: I0219 20:01:15.622753 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qhx27"] Feb 19 20:01:16 crc kubenswrapper[4813]: I0219 20:01:16.334570 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qhx27" event={"ID":"29e7a363-18a4-4d92-ac76-7cb3eb644a55","Type":"ContainerStarted","Data":"ac7ef514e2f5dae96c9d4fb572b6d519a1041bfceaaf8e17c16e320b29f179ab"} Feb 19 20:01:16 crc kubenswrapper[4813]: I0219 20:01:16.335253 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qhx27" event={"ID":"29e7a363-18a4-4d92-ac76-7cb3eb644a55","Type":"ContainerStarted","Data":"d90ed984d7c1fc6d7135eb55c824217aa37b6de7cec082d97041d061802938b8"} Feb 19 20:01:16 crc kubenswrapper[4813]: I0219 20:01:16.362259 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qhx27" podStartSLOduration=2.36223904 podStartE2EDuration="2.36223904s" podCreationTimestamp="2026-02-19 20:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:01:16.357179854 +0000 UTC m=+5495.582620405" watchObservedRunningTime="2026-02-19 20:01:16.36223904 +0000 UTC m=+5495.587679581" Feb 19 20:01:19 crc kubenswrapper[4813]: I0219 20:01:19.367344 4813 generic.go:334] "Generic (PLEG): container finished" podID="29e7a363-18a4-4d92-ac76-7cb3eb644a55" containerID="ac7ef514e2f5dae96c9d4fb572b6d519a1041bfceaaf8e17c16e320b29f179ab" exitCode=0 Feb 19 20:01:19 crc kubenswrapper[4813]: I0219 20:01:19.367396 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qhx27" event={"ID":"29e7a363-18a4-4d92-ac76-7cb3eb644a55","Type":"ContainerDied","Data":"ac7ef514e2f5dae96c9d4fb572b6d519a1041bfceaaf8e17c16e320b29f179ab"} Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 
20:01:20.714726 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qhx27"
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.837507 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29e7a363-18a4-4d92-ac76-7cb3eb644a55-etc-machine-id\") pod \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") "
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.837617 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbntq\" (UniqueName: \"kubernetes.io/projected/29e7a363-18a4-4d92-ac76-7cb3eb644a55-kube-api-access-nbntq\") pod \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") "
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.837647 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-scripts\") pod \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") "
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.837686 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-db-sync-config-data\") pod \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") "
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.837614 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29e7a363-18a4-4d92-ac76-7cb3eb644a55-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "29e7a363-18a4-4d92-ac76-7cb3eb644a55" (UID: "29e7a363-18a4-4d92-ac76-7cb3eb644a55"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.837746 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-config-data\") pod \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") "
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.837919 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-combined-ca-bundle\") pod \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\" (UID: \"29e7a363-18a4-4d92-ac76-7cb3eb644a55\") "
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.838386 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29e7a363-18a4-4d92-ac76-7cb3eb644a55-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.845162 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "29e7a363-18a4-4d92-ac76-7cb3eb644a55" (UID: "29e7a363-18a4-4d92-ac76-7cb3eb644a55"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.845189 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-scripts" (OuterVolumeSpecName: "scripts") pod "29e7a363-18a4-4d92-ac76-7cb3eb644a55" (UID: "29e7a363-18a4-4d92-ac76-7cb3eb644a55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.845194 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e7a363-18a4-4d92-ac76-7cb3eb644a55-kube-api-access-nbntq" (OuterVolumeSpecName: "kube-api-access-nbntq") pod "29e7a363-18a4-4d92-ac76-7cb3eb644a55" (UID: "29e7a363-18a4-4d92-ac76-7cb3eb644a55"). InnerVolumeSpecName "kube-api-access-nbntq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.861586 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29e7a363-18a4-4d92-ac76-7cb3eb644a55" (UID: "29e7a363-18a4-4d92-ac76-7cb3eb644a55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.880942 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-config-data" (OuterVolumeSpecName: "config-data") pod "29e7a363-18a4-4d92-ac76-7cb3eb644a55" (UID: "29e7a363-18a4-4d92-ac76-7cb3eb644a55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.939899 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbntq\" (UniqueName: \"kubernetes.io/projected/29e7a363-18a4-4d92-ac76-7cb3eb644a55-kube-api-access-nbntq\") on node \"crc\" DevicePath \"\""
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.939942 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.940006 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.940016 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 20:01:20 crc kubenswrapper[4813]: I0219 20:01:20.940026 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e7a363-18a4-4d92-ac76-7cb3eb644a55-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.387997 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qhx27" event={"ID":"29e7a363-18a4-4d92-ac76-7cb3eb644a55","Type":"ContainerDied","Data":"d90ed984d7c1fc6d7135eb55c824217aa37b6de7cec082d97041d061802938b8"}
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.388375 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d90ed984d7c1fc6d7135eb55c824217aa37b6de7cec082d97041d061802938b8"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.388439 4813 util.go:48] "No
ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qhx27"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.698096 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-765d77db77-p5vgn"]
Feb 19 20:01:21 crc kubenswrapper[4813]: E0219 20:01:21.698617 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e7a363-18a4-4d92-ac76-7cb3eb644a55" containerName="cinder-db-sync"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.698641 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e7a363-18a4-4d92-ac76-7cb3eb644a55" containerName="cinder-db-sync"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.698874 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e7a363-18a4-4d92-ac76-7cb3eb644a55" containerName="cinder-db-sync"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.706278 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.748705 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-765d77db77-p5vgn"]
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.758661 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsb6g\" (UniqueName: \"kubernetes.io/projected/d4626039-bf53-4d57-b2a5-c6201bc3f776-kube-api-access-dsb6g\") pod \"dnsmasq-dns-765d77db77-p5vgn\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.758821 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-ovsdbserver-nb\") pod \"dnsmasq-dns-765d77db77-p5vgn\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.758897 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-dns-svc\") pod \"dnsmasq-dns-765d77db77-p5vgn\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.759090 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-config\") pod \"dnsmasq-dns-765d77db77-p5vgn\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.759208 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-ovsdbserver-sb\") pod \"dnsmasq-dns-765d77db77-p5vgn\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.861167 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-config\") pod \"dnsmasq-dns-765d77db77-p5vgn\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.861280 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-ovsdbserver-sb\") pod \"dnsmasq-dns-765d77db77-p5vgn\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.861355 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsb6g\" (UniqueName: \"kubernetes.io/projected/d4626039-bf53-4d57-b2a5-c6201bc3f776-kube-api-access-dsb6g\") pod \"dnsmasq-dns-765d77db77-p5vgn\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.861416 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-ovsdbserver-nb\") pod \"dnsmasq-dns-765d77db77-p5vgn\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.861451 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-dns-svc\") pod \"dnsmasq-dns-765d77db77-p5vgn\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.862116 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-config\") pod \"dnsmasq-dns-765d77db77-p5vgn\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.862431 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-dns-svc\") pod \"dnsmasq-dns-765d77db77-p5vgn\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.862897 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-ovsdbserver-sb\") pod \"dnsmasq-dns-765d77db77-p5vgn\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.863121 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-ovsdbserver-nb\") pod \"dnsmasq-dns-765d77db77-p5vgn\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.885482 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsb6g\" (UniqueName: \"kubernetes.io/projected/d4626039-bf53-4d57-b2a5-c6201bc3f776-kube-api-access-dsb6g\") pod \"dnsmasq-dns-765d77db77-p5vgn\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.969490 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.971297 4813 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-api-0"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.974713 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.975043 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-dwvhb"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.975308 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.989379 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 19 20:01:21 crc kubenswrapper[4813]: I0219 20:01:21.992564 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.057465 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.065763 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gdt7\" (UniqueName: \"kubernetes.io/projected/c82d10be-9073-449b-a21e-1ff8b08cb71e-kube-api-access-4gdt7\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.065925 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c82d10be-9073-449b-a21e-1ff8b08cb71e-logs\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.066039 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-scripts\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.066107 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-config-data\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.066196 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.066232 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-config-data-custom\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.066257 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c82d10be-9073-449b-a21e-1ff8b08cb71e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.168017 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c82d10be-9073-449b-a21e-1ff8b08cb71e-logs\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.168313 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-scripts\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.168340 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-config-data\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.168395 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.168419 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-config-data-custom\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.168437 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c82d10be-9073-449b-a21e-1ff8b08cb71e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.168479 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gdt7\" (UniqueName: \"kubernetes.io/projected/c82d10be-9073-449b-a21e-1ff8b08cb71e-kube-api-access-4gdt7\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.168611 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c82d10be-9073-449b-a21e-1ff8b08cb71e-logs\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.168773 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c82d10be-9073-449b-a21e-1ff8b08cb71e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.171820 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.172622 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-scripts\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.172808 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-config-data\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.173246 4813 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-config-data-custom\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.186110 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gdt7\" (UniqueName: \"kubernetes.io/projected/c82d10be-9073-449b-a21e-1ff8b08cb71e-kube-api-access-4gdt7\") pod \"cinder-api-0\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.300175 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.609784 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-765d77db77-p5vgn"]
Feb 19 20:01:22 crc kubenswrapper[4813]: I0219 20:01:22.846809 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 19 20:01:23 crc kubenswrapper[4813]: I0219 20:01:23.411048 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c82d10be-9073-449b-a21e-1ff8b08cb71e","Type":"ContainerStarted","Data":"2f0a6965a5b63a14c8169b127c09f0a0c4c5502405553bb36fe98e9b2d777263"}
Feb 19 20:01:23 crc kubenswrapper[4813]: I0219 20:01:23.411376 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c82d10be-9073-449b-a21e-1ff8b08cb71e","Type":"ContainerStarted","Data":"e94e466069b9e5cf1069bb2956a67812d5bc01cb3f7b7a4be39c03d287844597"}
Feb 19 20:01:23 crc kubenswrapper[4813]: I0219 20:01:23.413327 4813 generic.go:334] "Generic (PLEG): container finished" podID="d4626039-bf53-4d57-b2a5-c6201bc3f776" containerID="efe4a472b96468bd335fd0427fab2c8896f70f07b28fb19b613718a3b6192f79" exitCode=0
Feb 19 20:01:23 crc kubenswrapper[4813]: I0219 20:01:23.413397 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-765d77db77-p5vgn" event={"ID":"d4626039-bf53-4d57-b2a5-c6201bc3f776","Type":"ContainerDied","Data":"efe4a472b96468bd335fd0427fab2c8896f70f07b28fb19b613718a3b6192f79"}
Feb 19 20:01:23 crc kubenswrapper[4813]: I0219 20:01:23.413465 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-765d77db77-p5vgn" event={"ID":"d4626039-bf53-4d57-b2a5-c6201bc3f776","Type":"ContainerStarted","Data":"c06c0083897e0583da6a7dbe7adc7ee2e5d82b3697d98640567a3abf2de3bd03"}
Feb 19 20:01:24 crc kubenswrapper[4813]: I0219 20:01:24.425142 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c82d10be-9073-449b-a21e-1ff8b08cb71e","Type":"ContainerStarted","Data":"f08e3fa15b28667c62fd7a48a9c492b010c0d87465e00c5ab9c91539c747fa56"}
Feb 19 20:01:24 crc kubenswrapper[4813]: I0219 20:01:24.425464 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 19 20:01:24 crc kubenswrapper[4813]: I0219 20:01:24.428259 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-765d77db77-p5vgn" event={"ID":"d4626039-bf53-4d57-b2a5-c6201bc3f776","Type":"ContainerStarted","Data":"a808dfe40aceef27b28870fe173c50690c23b5c2a68f33a9d3ccbb82b75365a5"}
Feb 19 20:01:24 crc kubenswrapper[4813]: I0219 20:01:24.428429 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:24 crc kubenswrapper[4813]: I0219 20:01:24.451032 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.451012149 podStartE2EDuration="3.451012149s" podCreationTimestamp="2026-02-19 20:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:01:24.441063891 +0000 UTC m=+5503.666504432" watchObservedRunningTime="2026-02-19 20:01:24.451012149 +0000 UTC m=+5503.676452690"
Feb 19 20:01:24 crc kubenswrapper[4813]: I0219 20:01:24.462442 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-765d77db77-p5vgn" podStartSLOduration=3.462425512 podStartE2EDuration="3.462425512s" podCreationTimestamp="2026-02-19 20:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:01:24.459004276 +0000 UTC m=+5503.684444837" watchObservedRunningTime="2026-02-19 20:01:24.462425512 +0000 UTC m=+5503.687866063"
Feb 19 20:01:30 crc kubenswrapper[4813]: I0219 20:01:30.329801 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 20:01:30 crc kubenswrapper[4813]: I0219 20:01:30.330393 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.059218 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-765d77db77-p5vgn"
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.119241 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fc545869f-pps9q"]
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.119485 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fc545869f-pps9q" podUID="76e05205-9be5-4146-ae08-63d85351bc1e"
containerName="dnsmasq-dns" containerID="cri-o://5fb0fa461bbd792337387ffacd7e8d4210043c3346b5c59b17550cfcd09d42e0" gracePeriod=10
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.515512 4813 generic.go:334] "Generic (PLEG): container finished" podID="76e05205-9be5-4146-ae08-63d85351bc1e" containerID="5fb0fa461bbd792337387ffacd7e8d4210043c3346b5c59b17550cfcd09d42e0" exitCode=0
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.515559 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fc545869f-pps9q" event={"ID":"76e05205-9be5-4146-ae08-63d85351bc1e","Type":"ContainerDied","Data":"5fb0fa461bbd792337387ffacd7e8d4210043c3346b5c59b17550cfcd09d42e0"}
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.630261 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fc545869f-pps9q"
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.778762 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w8gd\" (UniqueName: \"kubernetes.io/projected/76e05205-9be5-4146-ae08-63d85351bc1e-kube-api-access-6w8gd\") pod \"76e05205-9be5-4146-ae08-63d85351bc1e\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") "
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.778817 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-config\") pod \"76e05205-9be5-4146-ae08-63d85351bc1e\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") "
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.778842 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-dns-svc\") pod \"76e05205-9be5-4146-ae08-63d85351bc1e\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") "
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.778857 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-ovsdbserver-nb\") pod \"76e05205-9be5-4146-ae08-63d85351bc1e\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") "
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.778898 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-ovsdbserver-sb\") pod \"76e05205-9be5-4146-ae08-63d85351bc1e\" (UID: \"76e05205-9be5-4146-ae08-63d85351bc1e\") "
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.799761 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e05205-9be5-4146-ae08-63d85351bc1e-kube-api-access-6w8gd" (OuterVolumeSpecName: "kube-api-access-6w8gd") pod "76e05205-9be5-4146-ae08-63d85351bc1e" (UID: "76e05205-9be5-4146-ae08-63d85351bc1e"). InnerVolumeSpecName "kube-api-access-6w8gd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.845832 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "76e05205-9be5-4146-ae08-63d85351bc1e" (UID: "76e05205-9be5-4146-ae08-63d85351bc1e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.857261 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76e05205-9be5-4146-ae08-63d85351bc1e" (UID: "76e05205-9be5-4146-ae08-63d85351bc1e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.866782 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-config" (OuterVolumeSpecName: "config") pod "76e05205-9be5-4146-ae08-63d85351bc1e" (UID: "76e05205-9be5-4146-ae08-63d85351bc1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.868504 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "76e05205-9be5-4146-ae08-63d85351bc1e" (UID: "76e05205-9be5-4146-ae08-63d85351bc1e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.882059 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w8gd\" (UniqueName: \"kubernetes.io/projected/76e05205-9be5-4146-ae08-63d85351bc1e-kube-api-access-6w8gd\") on node \"crc\" DevicePath \"\""
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.882108 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-config\") on node \"crc\" DevicePath \"\""
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.882129 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.882143 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 20:01:32 crc kubenswrapper[4813]: I0219 20:01:32.882156 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e05205-9be5-4146-ae08-63d85351bc1e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.525612 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fc545869f-pps9q" event={"ID":"76e05205-9be5-4146-ae08-63d85351bc1e","Type":"ContainerDied","Data":"b92935516a6338e9591897520b0b129d3835845c346dee98c65624ff52384620"}
Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.525680 4813 scope.go:117] "RemoveContainer" containerID="5fb0fa461bbd792337387ffacd7e8d4210043c3346b5c59b17550cfcd09d42e0"
Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.525679 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fc545869f-pps9q"
Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.549630 4813 scope.go:117] "RemoveContainer" containerID="1612cae918c264674584b61eeca86c583d812c717d885459e16a3fc9eeee83b6"
Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.560599 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fc545869f-pps9q"]
Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.568338 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fc545869f-pps9q"]
Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.727349 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.727549 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="2d84f94f-cccd-4607-a769-24bfe4404005" containerName="nova-cell0-conductor-conductor" containerID="cri-o://822bba2cca4c4c1c10967652651427d926b2359d6c72b40183a6aa5a1efd92d0" gracePeriod=30
Feb 19 20:01:33 crc
kubenswrapper[4813]: I0219 20:01:33.740443 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.740739 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="41dfb00f-c637-43b9-91c2-a630d5f33b84" containerName="nova-metadata-log" containerID="cri-o://34700025615c6f6b10ac5f8a5519c464f983163af57ab127e3b4dd3982244bf1" gracePeriod=30 Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.740906 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="41dfb00f-c637-43b9-91c2-a630d5f33b84" containerName="nova-metadata-metadata" containerID="cri-o://3bdad82e87f13afb3e577d1fc6cad2dac390693761af7456c974c762f5ec89aa" gracePeriod=30 Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.770052 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.770259 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="86684df4-7521-416c-8599-2c8db67240c3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4aba640ef27b9e451f61e1c02830aa27c17a67496af4b4d8b1b2a68fb377c9df" gracePeriod=30 Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.798833 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.799073 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="923da541-874b-47de-8b7e-443eeece9268" containerName="nova-api-log" containerID="cri-o://e6dc371266b55f936fb9bfce726adca44c75d9cd0cf96f431ed1281623a55cc6" gracePeriod=30 Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.799540 4813 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="923da541-874b-47de-8b7e-443eeece9268" containerName="nova-api-api" containerID="cri-o://8bfbcd775538911d1d8ea0fe78a7ed3d37f283ce3ed720735d8133cd8235b7c8" gracePeriod=30 Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.833012 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:01:33 crc kubenswrapper[4813]: I0219 20:01:33.833246 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5e9ac6a7-48ef-4b7c-b463-adb3efd9675d" containerName="nova-scheduler-scheduler" containerID="cri-o://6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f" gracePeriod=30 Feb 19 20:01:34 crc kubenswrapper[4813]: I0219 20:01:34.304779 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="86684df4-7521-416c-8599-2c8db67240c3" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"http://10.217.1.60:6080/vnc_lite.html\": dial tcp 10.217.1.60:6080: connect: connection refused" Feb 19 20:01:34 crc kubenswrapper[4813]: I0219 20:01:34.520728 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 20:01:34 crc kubenswrapper[4813]: I0219 20:01:34.592189 4813 generic.go:334] "Generic (PLEG): container finished" podID="41dfb00f-c637-43b9-91c2-a630d5f33b84" containerID="34700025615c6f6b10ac5f8a5519c464f983163af57ab127e3b4dd3982244bf1" exitCode=143 Feb 19 20:01:34 crc kubenswrapper[4813]: I0219 20:01:34.592258 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41dfb00f-c637-43b9-91c2-a630d5f33b84","Type":"ContainerDied","Data":"34700025615c6f6b10ac5f8a5519c464f983163af57ab127e3b4dd3982244bf1"} Feb 19 20:01:34 crc kubenswrapper[4813]: I0219 20:01:34.597044 4813 generic.go:334] "Generic (PLEG): container finished" podID="86684df4-7521-416c-8599-2c8db67240c3" 
containerID="4aba640ef27b9e451f61e1c02830aa27c17a67496af4b4d8b1b2a68fb377c9df" exitCode=0 Feb 19 20:01:34 crc kubenswrapper[4813]: I0219 20:01:34.597108 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"86684df4-7521-416c-8599-2c8db67240c3","Type":"ContainerDied","Data":"4aba640ef27b9e451f61e1c02830aa27c17a67496af4b4d8b1b2a68fb377c9df"} Feb 19 20:01:34 crc kubenswrapper[4813]: I0219 20:01:34.614260 4813 generic.go:334] "Generic (PLEG): container finished" podID="923da541-874b-47de-8b7e-443eeece9268" containerID="e6dc371266b55f936fb9bfce726adca44c75d9cd0cf96f431ed1281623a55cc6" exitCode=143 Feb 19 20:01:34 crc kubenswrapper[4813]: I0219 20:01:34.614314 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"923da541-874b-47de-8b7e-443eeece9268","Type":"ContainerDied","Data":"e6dc371266b55f936fb9bfce726adca44c75d9cd0cf96f431ed1281623a55cc6"} Feb 19 20:01:34 crc kubenswrapper[4813]: I0219 20:01:34.758836 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:34 crc kubenswrapper[4813]: I0219 20:01:34.934205 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvxdm\" (UniqueName: \"kubernetes.io/projected/86684df4-7521-416c-8599-2c8db67240c3-kube-api-access-vvxdm\") pod \"86684df4-7521-416c-8599-2c8db67240c3\" (UID: \"86684df4-7521-416c-8599-2c8db67240c3\") " Feb 19 20:01:34 crc kubenswrapper[4813]: I0219 20:01:34.934256 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86684df4-7521-416c-8599-2c8db67240c3-config-data\") pod \"86684df4-7521-416c-8599-2c8db67240c3\" (UID: \"86684df4-7521-416c-8599-2c8db67240c3\") " Feb 19 20:01:34 crc kubenswrapper[4813]: I0219 20:01:34.934353 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86684df4-7521-416c-8599-2c8db67240c3-combined-ca-bundle\") pod \"86684df4-7521-416c-8599-2c8db67240c3\" (UID: \"86684df4-7521-416c-8599-2c8db67240c3\") " Feb 19 20:01:34 crc kubenswrapper[4813]: I0219 20:01:34.942743 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86684df4-7521-416c-8599-2c8db67240c3-kube-api-access-vvxdm" (OuterVolumeSpecName: "kube-api-access-vvxdm") pod "86684df4-7521-416c-8599-2c8db67240c3" (UID: "86684df4-7521-416c-8599-2c8db67240c3"). InnerVolumeSpecName "kube-api-access-vvxdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:01:34 crc kubenswrapper[4813]: I0219 20:01:34.969924 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86684df4-7521-416c-8599-2c8db67240c3-config-data" (OuterVolumeSpecName: "config-data") pod "86684df4-7521-416c-8599-2c8db67240c3" (UID: "86684df4-7521-416c-8599-2c8db67240c3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:34 crc kubenswrapper[4813]: I0219 20:01:34.984025 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86684df4-7521-416c-8599-2c8db67240c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86684df4-7521-416c-8599-2c8db67240c3" (UID: "86684df4-7521-416c-8599-2c8db67240c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.036724 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvxdm\" (UniqueName: \"kubernetes.io/projected/86684df4-7521-416c-8599-2c8db67240c3-kube-api-access-vvxdm\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.036763 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86684df4-7521-416c-8599-2c8db67240c3-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.036773 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86684df4-7521-416c-8599-2c8db67240c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.486117 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e05205-9be5-4146-ae08-63d85351bc1e" path="/var/lib/kubelet/pods/76e05205-9be5-4146-ae08-63d85351bc1e/volumes" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.624154 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"86684df4-7521-416c-8599-2c8db67240c3","Type":"ContainerDied","Data":"4f7da98c01eadb2450c594c5d9b7349b4d3092eab2f7cdfcfcfc11ceac90a16c"} Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.624219 4813 scope.go:117] "RemoveContainer" 
containerID="4aba640ef27b9e451f61e1c02830aa27c17a67496af4b4d8b1b2a68fb377c9df" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.624379 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.666919 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.690399 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.703111 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 20:01:35 crc kubenswrapper[4813]: E0219 20:01:35.703487 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86684df4-7521-416c-8599-2c8db67240c3" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.703501 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="86684df4-7521-416c-8599-2c8db67240c3" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 20:01:35 crc kubenswrapper[4813]: E0219 20:01:35.703535 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e05205-9be5-4146-ae08-63d85351bc1e" containerName="init" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.703542 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e05205-9be5-4146-ae08-63d85351bc1e" containerName="init" Feb 19 20:01:35 crc kubenswrapper[4813]: E0219 20:01:35.703562 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e05205-9be5-4146-ae08-63d85351bc1e" containerName="dnsmasq-dns" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.703570 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e05205-9be5-4146-ae08-63d85351bc1e" containerName="dnsmasq-dns" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.703735 
4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e05205-9be5-4146-ae08-63d85351bc1e" containerName="dnsmasq-dns" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.703765 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="86684df4-7521-416c-8599-2c8db67240c3" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.704407 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.708501 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.711018 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.751316 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.751454 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j6pz\" (UniqueName: \"kubernetes.io/projected/a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0-kube-api-access-2j6pz\") pod \"nova-cell1-novncproxy-0\" (UID: \"a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.751540 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.852613 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j6pz\" (UniqueName: \"kubernetes.io/projected/a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0-kube-api-access-2j6pz\") pod \"nova-cell1-novncproxy-0\" (UID: \"a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.852691 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.852749 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.858484 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:35 crc kubenswrapper[4813]: I0219 20:01:35.869886 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j6pz\" (UniqueName: \"kubernetes.io/projected/a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0-kube-api-access-2j6pz\") pod \"nova-cell1-novncproxy-0\" (UID: \"a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:35 crc 
kubenswrapper[4813]: I0219 20:01:35.868670 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:36 crc kubenswrapper[4813]: I0219 20:01:36.024617 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:36 crc kubenswrapper[4813]: I0219 20:01:36.481773 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 20:01:36 crc kubenswrapper[4813]: W0219 20:01:36.508345 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda81e70d3_41ae_4eee_b6da_bcdba9f7c1b0.slice/crio-85cfa7db3eed919b1df0755cec78d54f70d8c7156515e4c66386620471efeb2d WatchSource:0}: Error finding container 85cfa7db3eed919b1df0755cec78d54f70d8c7156515e4c66386620471efeb2d: Status 404 returned error can't find the container with id 85cfa7db3eed919b1df0755cec78d54f70d8c7156515e4c66386620471efeb2d Feb 19 20:01:36 crc kubenswrapper[4813]: I0219 20:01:36.659156 4813 generic.go:334] "Generic (PLEG): container finished" podID="2d84f94f-cccd-4607-a769-24bfe4404005" containerID="822bba2cca4c4c1c10967652651427d926b2359d6c72b40183a6aa5a1efd92d0" exitCode=0 Feb 19 20:01:36 crc kubenswrapper[4813]: I0219 20:01:36.659236 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2d84f94f-cccd-4607-a769-24bfe4404005","Type":"ContainerDied","Data":"822bba2cca4c4c1c10967652651427d926b2359d6c72b40183a6aa5a1efd92d0"} Feb 19 20:01:36 crc kubenswrapper[4813]: I0219 20:01:36.660992 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0","Type":"ContainerStarted","Data":"85cfa7db3eed919b1df0755cec78d54f70d8c7156515e4c66386620471efeb2d"} Feb 19 20:01:36 crc kubenswrapper[4813]: I0219 20:01:36.784916 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 20:01:36 crc kubenswrapper[4813]: I0219 20:01:36.938438 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="41dfb00f-c637-43b9-91c2-a630d5f33b84" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.68:8775/\": read tcp 10.217.0.2:38334->10.217.1.68:8775: read: connection reset by peer" Feb 19 20:01:36 crc kubenswrapper[4813]: I0219 20:01:36.938797 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="41dfb00f-c637-43b9-91c2-a630d5f33b84" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.68:8775/\": read tcp 10.217.0.2:38320->10.217.1.68:8775: read: connection reset by peer" Feb 19 20:01:36 crc kubenswrapper[4813]: I0219 20:01:36.968407 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="923da541-874b-47de-8b7e-443eeece9268" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.69:8774/\": read tcp 10.217.0.2:50818->10.217.1.69:8774: read: connection reset by peer" Feb 19 20:01:36 crc kubenswrapper[4813]: I0219 20:01:36.969715 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdwzz\" (UniqueName: \"kubernetes.io/projected/2d84f94f-cccd-4607-a769-24bfe4404005-kube-api-access-wdwzz\") pod \"2d84f94f-cccd-4607-a769-24bfe4404005\" (UID: \"2d84f94f-cccd-4607-a769-24bfe4404005\") " Feb 19 20:01:36 crc kubenswrapper[4813]: I0219 20:01:36.969902 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d84f94f-cccd-4607-a769-24bfe4404005-combined-ca-bundle\") pod \"2d84f94f-cccd-4607-a769-24bfe4404005\" (UID: \"2d84f94f-cccd-4607-a769-24bfe4404005\") " Feb 19 20:01:36 crc kubenswrapper[4813]: I0219 20:01:36.970039 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d84f94f-cccd-4607-a769-24bfe4404005-config-data\") pod \"2d84f94f-cccd-4607-a769-24bfe4404005\" (UID: \"2d84f94f-cccd-4607-a769-24bfe4404005\") " Feb 19 20:01:36 crc kubenswrapper[4813]: I0219 20:01:36.972913 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="923da541-874b-47de-8b7e-443eeece9268" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.69:8774/\": read tcp 10.217.0.2:50828->10.217.1.69:8774: read: connection reset by peer" Feb 19 20:01:36 crc kubenswrapper[4813]: I0219 20:01:36.977358 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d84f94f-cccd-4607-a769-24bfe4404005-kube-api-access-wdwzz" (OuterVolumeSpecName: "kube-api-access-wdwzz") pod "2d84f94f-cccd-4607-a769-24bfe4404005" (UID: "2d84f94f-cccd-4607-a769-24bfe4404005"). InnerVolumeSpecName "kube-api-access-wdwzz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.032504 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.034962 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="7cd7d095-1bd3-4307-b317-81d6150f4bb2" containerName="nova-cell1-conductor-conductor" containerID="cri-o://5f875800ffd45e80283683e8420f66bdf3acfdc7bba50970ffbd88e70d650994" gracePeriod=30 Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.042468 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d84f94f-cccd-4607-a769-24bfe4404005-config-data" (OuterVolumeSpecName: "config-data") pod "2d84f94f-cccd-4607-a769-24bfe4404005" (UID: "2d84f94f-cccd-4607-a769-24bfe4404005"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.057821 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d84f94f-cccd-4607-a769-24bfe4404005-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d84f94f-cccd-4607-a769-24bfe4404005" (UID: "2d84f94f-cccd-4607-a769-24bfe4404005"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.071930 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdwzz\" (UniqueName: \"kubernetes.io/projected/2d84f94f-cccd-4607-a769-24bfe4404005-kube-api-access-wdwzz\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.071978 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d84f94f-cccd-4607-a769-24bfe4404005-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.071988 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d84f94f-cccd-4607-a769-24bfe4404005-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.480935 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86684df4-7521-416c-8599-2c8db67240c3" path="/var/lib/kubelet/pods/86684df4-7521-416c-8599-2c8db67240c3/volumes" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.492539 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.502995 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 20:01:37 crc kubenswrapper[4813]: E0219 20:01:37.611878 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 20:01:37 crc kubenswrapper[4813]: E0219 20:01:37.613256 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 20:01:37 crc kubenswrapper[4813]: E0219 20:01:37.614212 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 20:01:37 crc kubenswrapper[4813]: E0219 20:01:37.614267 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="5e9ac6a7-48ef-4b7c-b463-adb3efd9675d" containerName="nova-scheduler-scheduler" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.673460 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2d84f94f-cccd-4607-a769-24bfe4404005","Type":"ContainerDied","Data":"c43144d7ab008d2f55dc61de2c2a82fe4fd242ec8866df3f6b9a9ccce5571b29"} Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.673518 4813 
scope.go:117] "RemoveContainer" containerID="822bba2cca4c4c1c10967652651427d926b2359d6c72b40183a6aa5a1efd92d0" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.673647 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.681913 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0","Type":"ContainerStarted","Data":"aa619eb9328958700bcf807b2ff4a7c00077df2850a4188a24e92d8ba682c705"} Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.682650 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41dfb00f-c637-43b9-91c2-a630d5f33b84-combined-ca-bundle\") pod \"41dfb00f-c637-43b9-91c2-a630d5f33b84\" (UID: \"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.683244 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/923da541-874b-47de-8b7e-443eeece9268-config-data\") pod \"923da541-874b-47de-8b7e-443eeece9268\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.683303 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923da541-874b-47de-8b7e-443eeece9268-combined-ca-bundle\") pod \"923da541-874b-47de-8b7e-443eeece9268\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.683382 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41dfb00f-c637-43b9-91c2-a630d5f33b84-config-data\") pod \"41dfb00f-c637-43b9-91c2-a630d5f33b84\" (UID: 
\"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.683493 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7qpj\" (UniqueName: \"kubernetes.io/projected/923da541-874b-47de-8b7e-443eeece9268-kube-api-access-v7qpj\") pod \"923da541-874b-47de-8b7e-443eeece9268\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.683556 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41dfb00f-c637-43b9-91c2-a630d5f33b84-logs\") pod \"41dfb00f-c637-43b9-91c2-a630d5f33b84\" (UID: \"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.683579 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/923da541-874b-47de-8b7e-443eeece9268-logs\") pod \"923da541-874b-47de-8b7e-443eeece9268\" (UID: \"923da541-874b-47de-8b7e-443eeece9268\") " Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.683618 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grhwm\" (UniqueName: \"kubernetes.io/projected/41dfb00f-c637-43b9-91c2-a630d5f33b84-kube-api-access-grhwm\") pod \"41dfb00f-c637-43b9-91c2-a630d5f33b84\" (UID: \"41dfb00f-c637-43b9-91c2-a630d5f33b84\") " Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.690263 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41dfb00f-c637-43b9-91c2-a630d5f33b84-logs" (OuterVolumeSpecName: "logs") pod "41dfb00f-c637-43b9-91c2-a630d5f33b84" (UID: "41dfb00f-c637-43b9-91c2-a630d5f33b84"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.692198 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/923da541-874b-47de-8b7e-443eeece9268-logs" (OuterVolumeSpecName: "logs") pod "923da541-874b-47de-8b7e-443eeece9268" (UID: "923da541-874b-47de-8b7e-443eeece9268"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.695982 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41dfb00f-c637-43b9-91c2-a630d5f33b84-kube-api-access-grhwm" (OuterVolumeSpecName: "kube-api-access-grhwm") pod "41dfb00f-c637-43b9-91c2-a630d5f33b84" (UID: "41dfb00f-c637-43b9-91c2-a630d5f33b84"). InnerVolumeSpecName "kube-api-access-grhwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.697308 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/923da541-874b-47de-8b7e-443eeece9268-kube-api-access-v7qpj" (OuterVolumeSpecName: "kube-api-access-v7qpj") pod "923da541-874b-47de-8b7e-443eeece9268" (UID: "923da541-874b-47de-8b7e-443eeece9268"). InnerVolumeSpecName "kube-api-access-v7qpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.697364 4813 generic.go:334] "Generic (PLEG): container finished" podID="923da541-874b-47de-8b7e-443eeece9268" containerID="8bfbcd775538911d1d8ea0fe78a7ed3d37f283ce3ed720735d8133cd8235b7c8" exitCode=0 Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.697494 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"923da541-874b-47de-8b7e-443eeece9268","Type":"ContainerDied","Data":"8bfbcd775538911d1d8ea0fe78a7ed3d37f283ce3ed720735d8133cd8235b7c8"} Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.697521 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"923da541-874b-47de-8b7e-443eeece9268","Type":"ContainerDied","Data":"d010e4237008d160ff87f8402c5ad4b4786bee387d7297341d2d6a410355ab69"} Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.697592 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.704288 4813 generic.go:334] "Generic (PLEG): container finished" podID="41dfb00f-c637-43b9-91c2-a630d5f33b84" containerID="3bdad82e87f13afb3e577d1fc6cad2dac390693761af7456c974c762f5ec89aa" exitCode=0 Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.704345 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41dfb00f-c637-43b9-91c2-a630d5f33b84","Type":"ContainerDied","Data":"3bdad82e87f13afb3e577d1fc6cad2dac390693761af7456c974c762f5ec89aa"} Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.704376 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41dfb00f-c637-43b9-91c2-a630d5f33b84","Type":"ContainerDied","Data":"347ec9f624c6b277029cd0e7c9b9b3ebf5196efa2fedfb5827939426962b9e41"} Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.704437 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.715094 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.734675 4813 scope.go:117] "RemoveContainer" containerID="8bfbcd775538911d1d8ea0fe78a7ed3d37f283ce3ed720735d8133cd8235b7c8" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.750347 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.755267 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.755246585 podStartE2EDuration="2.755246585s" podCreationTimestamp="2026-02-19 20:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:01:37.715067993 +0000 UTC m=+5516.940508564" watchObservedRunningTime="2026-02-19 20:01:37.755246585 +0000 UTC m=+5516.980687126" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.757712 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/923da541-874b-47de-8b7e-443eeece9268-config-data" (OuterVolumeSpecName: "config-data") pod "923da541-874b-47de-8b7e-443eeece9268" (UID: "923da541-874b-47de-8b7e-443eeece9268"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.769868 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 20:01:37 crc kubenswrapper[4813]: E0219 20:01:37.770382 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41dfb00f-c637-43b9-91c2-a630d5f33b84" containerName="nova-metadata-log" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.770400 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="41dfb00f-c637-43b9-91c2-a630d5f33b84" containerName="nova-metadata-log" Feb 19 20:01:37 crc kubenswrapper[4813]: E0219 20:01:37.770419 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41dfb00f-c637-43b9-91c2-a630d5f33b84" containerName="nova-metadata-metadata" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.770426 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="41dfb00f-c637-43b9-91c2-a630d5f33b84" containerName="nova-metadata-metadata" Feb 19 20:01:37 crc kubenswrapper[4813]: E0219 20:01:37.770443 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="923da541-874b-47de-8b7e-443eeece9268" containerName="nova-api-log" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.770451 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="923da541-874b-47de-8b7e-443eeece9268" containerName="nova-api-log" Feb 19 20:01:37 crc kubenswrapper[4813]: E0219 20:01:37.770462 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d84f94f-cccd-4607-a769-24bfe4404005" containerName="nova-cell0-conductor-conductor" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.770470 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d84f94f-cccd-4607-a769-24bfe4404005" containerName="nova-cell0-conductor-conductor" Feb 19 20:01:37 crc kubenswrapper[4813]: E0219 20:01:37.770488 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="923da541-874b-47de-8b7e-443eeece9268" containerName="nova-api-api" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.770495 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="923da541-874b-47de-8b7e-443eeece9268" containerName="nova-api-api" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.770816 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="41dfb00f-c637-43b9-91c2-a630d5f33b84" containerName="nova-metadata-log" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.770841 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="41dfb00f-c637-43b9-91c2-a630d5f33b84" containerName="nova-metadata-metadata" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.770861 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d84f94f-cccd-4607-a769-24bfe4404005" containerName="nova-cell0-conductor-conductor" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.770875 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="923da541-874b-47de-8b7e-443eeece9268" containerName="nova-api-log" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.770885 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="923da541-874b-47de-8b7e-443eeece9268" containerName="nova-api-api" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.771612 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.773628 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.775232 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.789584 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7qpj\" (UniqueName: \"kubernetes.io/projected/923da541-874b-47de-8b7e-443eeece9268-kube-api-access-v7qpj\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.789660 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41dfb00f-c637-43b9-91c2-a630d5f33b84-logs\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.789673 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/923da541-874b-47de-8b7e-443eeece9268-logs\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.789682 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grhwm\" (UniqueName: \"kubernetes.io/projected/41dfb00f-c637-43b9-91c2-a630d5f33b84-kube-api-access-grhwm\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.789691 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/923da541-874b-47de-8b7e-443eeece9268-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.794071 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41dfb00f-c637-43b9-91c2-a630d5f33b84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"41dfb00f-c637-43b9-91c2-a630d5f33b84" (UID: "41dfb00f-c637-43b9-91c2-a630d5f33b84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.803626 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/923da541-874b-47de-8b7e-443eeece9268-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "923da541-874b-47de-8b7e-443eeece9268" (UID: "923da541-874b-47de-8b7e-443eeece9268"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.805899 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41dfb00f-c637-43b9-91c2-a630d5f33b84-config-data" (OuterVolumeSpecName: "config-data") pod "41dfb00f-c637-43b9-91c2-a630d5f33b84" (UID: "41dfb00f-c637-43b9-91c2-a630d5f33b84"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.836428 4813 scope.go:117] "RemoveContainer" containerID="e6dc371266b55f936fb9bfce726adca44c75d9cd0cf96f431ed1281623a55cc6" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.866104 4813 scope.go:117] "RemoveContainer" containerID="8bfbcd775538911d1d8ea0fe78a7ed3d37f283ce3ed720735d8133cd8235b7c8" Feb 19 20:01:37 crc kubenswrapper[4813]: E0219 20:01:37.866571 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bfbcd775538911d1d8ea0fe78a7ed3d37f283ce3ed720735d8133cd8235b7c8\": container with ID starting with 8bfbcd775538911d1d8ea0fe78a7ed3d37f283ce3ed720735d8133cd8235b7c8 not found: ID does not exist" containerID="8bfbcd775538911d1d8ea0fe78a7ed3d37f283ce3ed720735d8133cd8235b7c8" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.866603 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bfbcd775538911d1d8ea0fe78a7ed3d37f283ce3ed720735d8133cd8235b7c8"} err="failed to get container status \"8bfbcd775538911d1d8ea0fe78a7ed3d37f283ce3ed720735d8133cd8235b7c8\": rpc error: code = NotFound desc = could not find container \"8bfbcd775538911d1d8ea0fe78a7ed3d37f283ce3ed720735d8133cd8235b7c8\": container with ID starting with 8bfbcd775538911d1d8ea0fe78a7ed3d37f283ce3ed720735d8133cd8235b7c8 not found: ID does not exist" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.866625 4813 scope.go:117] "RemoveContainer" containerID="e6dc371266b55f936fb9bfce726adca44c75d9cd0cf96f431ed1281623a55cc6" Feb 19 20:01:37 crc kubenswrapper[4813]: E0219 20:01:37.866891 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6dc371266b55f936fb9bfce726adca44c75d9cd0cf96f431ed1281623a55cc6\": container with ID starting with 
e6dc371266b55f936fb9bfce726adca44c75d9cd0cf96f431ed1281623a55cc6 not found: ID does not exist" containerID="e6dc371266b55f936fb9bfce726adca44c75d9cd0cf96f431ed1281623a55cc6" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.866938 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6dc371266b55f936fb9bfce726adca44c75d9cd0cf96f431ed1281623a55cc6"} err="failed to get container status \"e6dc371266b55f936fb9bfce726adca44c75d9cd0cf96f431ed1281623a55cc6\": rpc error: code = NotFound desc = could not find container \"e6dc371266b55f936fb9bfce726adca44c75d9cd0cf96f431ed1281623a55cc6\": container with ID starting with e6dc371266b55f936fb9bfce726adca44c75d9cd0cf96f431ed1281623a55cc6 not found: ID does not exist" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.866981 4813 scope.go:117] "RemoveContainer" containerID="3bdad82e87f13afb3e577d1fc6cad2dac390693761af7456c974c762f5ec89aa" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.891368 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cd348c-2c8a-4a93-ba80-1b598b70b25f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"20cd348c-2c8a-4a93-ba80-1b598b70b25f\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.891408 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwskt\" (UniqueName: \"kubernetes.io/projected/20cd348c-2c8a-4a93-ba80-1b598b70b25f-kube-api-access-rwskt\") pod \"nova-cell0-conductor-0\" (UID: \"20cd348c-2c8a-4a93-ba80-1b598b70b25f\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.891559 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/20cd348c-2c8a-4a93-ba80-1b598b70b25f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"20cd348c-2c8a-4a93-ba80-1b598b70b25f\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.891757 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41dfb00f-c637-43b9-91c2-a630d5f33b84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.891778 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/923da541-874b-47de-8b7e-443eeece9268-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.891788 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41dfb00f-c637-43b9-91c2-a630d5f33b84-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.923927 4813 scope.go:117] "RemoveContainer" containerID="34700025615c6f6b10ac5f8a5519c464f983163af57ab127e3b4dd3982244bf1" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.954510 4813 scope.go:117] "RemoveContainer" containerID="3bdad82e87f13afb3e577d1fc6cad2dac390693761af7456c974c762f5ec89aa" Feb 19 20:01:37 crc kubenswrapper[4813]: E0219 20:01:37.954874 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bdad82e87f13afb3e577d1fc6cad2dac390693761af7456c974c762f5ec89aa\": container with ID starting with 3bdad82e87f13afb3e577d1fc6cad2dac390693761af7456c974c762f5ec89aa not found: ID does not exist" containerID="3bdad82e87f13afb3e577d1fc6cad2dac390693761af7456c974c762f5ec89aa" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.954915 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3bdad82e87f13afb3e577d1fc6cad2dac390693761af7456c974c762f5ec89aa"} err="failed to get container status \"3bdad82e87f13afb3e577d1fc6cad2dac390693761af7456c974c762f5ec89aa\": rpc error: code = NotFound desc = could not find container \"3bdad82e87f13afb3e577d1fc6cad2dac390693761af7456c974c762f5ec89aa\": container with ID starting with 3bdad82e87f13afb3e577d1fc6cad2dac390693761af7456c974c762f5ec89aa not found: ID does not exist" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.954942 4813 scope.go:117] "RemoveContainer" containerID="34700025615c6f6b10ac5f8a5519c464f983163af57ab127e3b4dd3982244bf1" Feb 19 20:01:37 crc kubenswrapper[4813]: E0219 20:01:37.955414 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34700025615c6f6b10ac5f8a5519c464f983163af57ab127e3b4dd3982244bf1\": container with ID starting with 34700025615c6f6b10ac5f8a5519c464f983163af57ab127e3b4dd3982244bf1 not found: ID does not exist" containerID="34700025615c6f6b10ac5f8a5519c464f983163af57ab127e3b4dd3982244bf1" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.955455 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34700025615c6f6b10ac5f8a5519c464f983163af57ab127e3b4dd3982244bf1"} err="failed to get container status \"34700025615c6f6b10ac5f8a5519c464f983163af57ab127e3b4dd3982244bf1\": rpc error: code = NotFound desc = could not find container \"34700025615c6f6b10ac5f8a5519c464f983163af57ab127e3b4dd3982244bf1\": container with ID starting with 34700025615c6f6b10ac5f8a5519c464f983163af57ab127e3b4dd3982244bf1 not found: ID does not exist" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.992910 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20cd348c-2c8a-4a93-ba80-1b598b70b25f-config-data\") pod \"nova-cell0-conductor-0\" (UID: 
\"20cd348c-2c8a-4a93-ba80-1b598b70b25f\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.993069 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cd348c-2c8a-4a93-ba80-1b598b70b25f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"20cd348c-2c8a-4a93-ba80-1b598b70b25f\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.993092 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwskt\" (UniqueName: \"kubernetes.io/projected/20cd348c-2c8a-4a93-ba80-1b598b70b25f-kube-api-access-rwskt\") pod \"nova-cell0-conductor-0\" (UID: \"20cd348c-2c8a-4a93-ba80-1b598b70b25f\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:01:37 crc kubenswrapper[4813]: I0219 20:01:37.999700 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20cd348c-2c8a-4a93-ba80-1b598b70b25f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"20cd348c-2c8a-4a93-ba80-1b598b70b25f\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.011509 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cd348c-2c8a-4a93-ba80-1b598b70b25f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"20cd348c-2c8a-4a93-ba80-1b598b70b25f\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.020629 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwskt\" (UniqueName: \"kubernetes.io/projected/20cd348c-2c8a-4a93-ba80-1b598b70b25f-kube-api-access-rwskt\") pod \"nova-cell0-conductor-0\" (UID: \"20cd348c-2c8a-4a93-ba80-1b598b70b25f\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:01:38 crc 
kubenswrapper[4813]: I0219 20:01:38.123425 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.167008 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.187018 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.228009 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.259156 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.290156 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.291905 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.304335 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.337813 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.364597 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.366061 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.374681 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.377934 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.405465 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a04b863-542a-4de0-82c0-f8e12a63c47d-logs\") pod \"nova-metadata-0\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") " pod="openstack/nova-metadata-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.405512 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a04b863-542a-4de0-82c0-f8e12a63c47d-config-data\") pod \"nova-metadata-0\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") " pod="openstack/nova-metadata-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.405581 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a04b863-542a-4de0-82c0-f8e12a63c47d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") " pod="openstack/nova-metadata-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.405614 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvr5x\" (UniqueName: \"kubernetes.io/projected/4a04b863-542a-4de0-82c0-f8e12a63c47d-kube-api-access-bvr5x\") pod \"nova-metadata-0\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") " pod="openstack/nova-metadata-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.507391 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a04b863-542a-4de0-82c0-f8e12a63c47d-config-data\") pod \"nova-metadata-0\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") " pod="openstack/nova-metadata-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.507498 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035233b2-efbc-4c6a-a82d-44c4742eed8d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " pod="openstack/nova-api-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.507568 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a04b863-542a-4de0-82c0-f8e12a63c47d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") " pod="openstack/nova-metadata-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.507608 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvr5x\" (UniqueName: \"kubernetes.io/projected/4a04b863-542a-4de0-82c0-f8e12a63c47d-kube-api-access-bvr5x\") pod \"nova-metadata-0\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") " pod="openstack/nova-metadata-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.507643 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/035233b2-efbc-4c6a-a82d-44c4742eed8d-logs\") pod \"nova-api-0\" (UID: \"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " pod="openstack/nova-api-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.507671 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/035233b2-efbc-4c6a-a82d-44c4742eed8d-config-data\") pod \"nova-api-0\" (UID: \"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " pod="openstack/nova-api-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.507744 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a04b863-542a-4de0-82c0-f8e12a63c47d-logs\") pod \"nova-metadata-0\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") " pod="openstack/nova-metadata-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.508136 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a04b863-542a-4de0-82c0-f8e12a63c47d-logs\") pod \"nova-metadata-0\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") " pod="openstack/nova-metadata-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.508155 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knjvd\" (UniqueName: \"kubernetes.io/projected/035233b2-efbc-4c6a-a82d-44c4742eed8d-kube-api-access-knjvd\") pod \"nova-api-0\" (UID: \"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " pod="openstack/nova-api-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.513440 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a04b863-542a-4de0-82c0-f8e12a63c47d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") " pod="openstack/nova-metadata-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.513542 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a04b863-542a-4de0-82c0-f8e12a63c47d-config-data\") pod \"nova-metadata-0\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") " pod="openstack/nova-metadata-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 
20:01:38.535644 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvr5x\" (UniqueName: \"kubernetes.io/projected/4a04b863-542a-4de0-82c0-f8e12a63c47d-kube-api-access-bvr5x\") pod \"nova-metadata-0\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") " pod="openstack/nova-metadata-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.610036 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knjvd\" (UniqueName: \"kubernetes.io/projected/035233b2-efbc-4c6a-a82d-44c4742eed8d-kube-api-access-knjvd\") pod \"nova-api-0\" (UID: \"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " pod="openstack/nova-api-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.610907 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035233b2-efbc-4c6a-a82d-44c4742eed8d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " pod="openstack/nova-api-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.611396 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/035233b2-efbc-4c6a-a82d-44c4742eed8d-logs\") pod \"nova-api-0\" (UID: \"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " pod="openstack/nova-api-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.611608 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/035233b2-efbc-4c6a-a82d-44c4742eed8d-config-data\") pod \"nova-api-0\" (UID: \"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " pod="openstack/nova-api-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.611921 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/035233b2-efbc-4c6a-a82d-44c4742eed8d-logs\") pod \"nova-api-0\" (UID: 
\"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " pod="openstack/nova-api-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.615569 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035233b2-efbc-4c6a-a82d-44c4742eed8d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " pod="openstack/nova-api-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.618612 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/035233b2-efbc-4c6a-a82d-44c4742eed8d-config-data\") pod \"nova-api-0\" (UID: \"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " pod="openstack/nova-api-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.630570 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knjvd\" (UniqueName: \"kubernetes.io/projected/035233b2-efbc-4c6a-a82d-44c4742eed8d-kube-api-access-knjvd\") pod \"nova-api-0\" (UID: \"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " pod="openstack/nova-api-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.700974 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.713346 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 20:01:38 crc kubenswrapper[4813]: I0219 20:01:38.778627 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.220222 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.332699 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:01:39 crc kubenswrapper[4813]: W0219 20:01:39.336702 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod035233b2_efbc_4c6a_a82d_44c4742eed8d.slice/crio-0521cfc376211f3fd2d332281dbe2e05a1ac277569500737b24d1a442c082490 WatchSource:0}: Error finding container 0521cfc376211f3fd2d332281dbe2e05a1ac277569500737b24d1a442c082490: Status 404 returned error can't find the container with id 0521cfc376211f3fd2d332281dbe2e05a1ac277569500737b24d1a442c082490 Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.514469 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d84f94f-cccd-4607-a769-24bfe4404005" path="/var/lib/kubelet/pods/2d84f94f-cccd-4607-a769-24bfe4404005/volumes" Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.515442 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41dfb00f-c637-43b9-91c2-a630d5f33b84" path="/var/lib/kubelet/pods/41dfb00f-c637-43b9-91c2-a630d5f33b84/volumes" Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.516893 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="923da541-874b-47de-8b7e-443eeece9268" path="/var/lib/kubelet/pods/923da541-874b-47de-8b7e-443eeece9268/volumes" Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.726215 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"4a04b863-542a-4de0-82c0-f8e12a63c47d","Type":"ContainerStarted","Data":"511fe444441c9025dc975559ed8447b43400e4c27812fc496dd0959b8eb5ef90"} Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.726255 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a04b863-542a-4de0-82c0-f8e12a63c47d","Type":"ContainerStarted","Data":"632809a3ca53594c5c70fbfaad235d24131ec52b12853bc8465d47d5f5220d30"} Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.726266 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a04b863-542a-4de0-82c0-f8e12a63c47d","Type":"ContainerStarted","Data":"7fe0cda6ec50a690e5a91cb4c8d2c53299df304134be44acc7af86efac1fab5b"} Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.727676 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"035233b2-efbc-4c6a-a82d-44c4742eed8d","Type":"ContainerStarted","Data":"e7056ded0e8bafde2b6360aac62e8b6b4d1e4ed5cf58d16ad713302e1d113557"} Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.727703 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"035233b2-efbc-4c6a-a82d-44c4742eed8d","Type":"ContainerStarted","Data":"31f65998b0ca5a56b7ce97c479cd10dc0c4e995d8cb6ddcb67d7297807b3885b"} Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.727713 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"035233b2-efbc-4c6a-a82d-44c4742eed8d","Type":"ContainerStarted","Data":"0521cfc376211f3fd2d332281dbe2e05a1ac277569500737b24d1a442c082490"} Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.729080 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"20cd348c-2c8a-4a93-ba80-1b598b70b25f","Type":"ContainerStarted","Data":"e4c15b47d9f9e1714cfaf983d3219a19b900fda24d017274573a61c209190e75"} Feb 19 20:01:39 crc kubenswrapper[4813]: 
I0219 20:01:39.729131 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"20cd348c-2c8a-4a93-ba80-1b598b70b25f","Type":"ContainerStarted","Data":"fb4205248bd32b1f82dd31b3db7ffe89122943010bc5bc10297756279593d3f8"} Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.729590 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.743804 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.743789822 podStartE2EDuration="1.743789822s" podCreationTimestamp="2026-02-19 20:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:01:39.741635395 +0000 UTC m=+5518.967075936" watchObservedRunningTime="2026-02-19 20:01:39.743789822 +0000 UTC m=+5518.969230363" Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.786017 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.7859953179999999 podStartE2EDuration="1.785995318s" podCreationTimestamp="2026-02-19 20:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:01:39.7828489 +0000 UTC m=+5519.008289461" watchObservedRunningTime="2026-02-19 20:01:39.785995318 +0000 UTC m=+5519.011435859" Feb 19 20:01:39 crc kubenswrapper[4813]: I0219 20:01:39.791293 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.791272651 podStartE2EDuration="2.791272651s" podCreationTimestamp="2026-02-19 20:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
20:01:39.769505968 +0000 UTC m=+5518.994946509" watchObservedRunningTime="2026-02-19 20:01:39.791272651 +0000 UTC m=+5519.016713192" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.260922 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.443366 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzclq\" (UniqueName: \"kubernetes.io/projected/7cd7d095-1bd3-4307-b317-81d6150f4bb2-kube-api-access-rzclq\") pod \"7cd7d095-1bd3-4307-b317-81d6150f4bb2\" (UID: \"7cd7d095-1bd3-4307-b317-81d6150f4bb2\") " Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.443420 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd7d095-1bd3-4307-b317-81d6150f4bb2-config-data\") pod \"7cd7d095-1bd3-4307-b317-81d6150f4bb2\" (UID: \"7cd7d095-1bd3-4307-b317-81d6150f4bb2\") " Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.443610 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd7d095-1bd3-4307-b317-81d6150f4bb2-combined-ca-bundle\") pod \"7cd7d095-1bd3-4307-b317-81d6150f4bb2\" (UID: \"7cd7d095-1bd3-4307-b317-81d6150f4bb2\") " Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.451098 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd7d095-1bd3-4307-b317-81d6150f4bb2-kube-api-access-rzclq" (OuterVolumeSpecName: "kube-api-access-rzclq") pod "7cd7d095-1bd3-4307-b317-81d6150f4bb2" (UID: "7cd7d095-1bd3-4307-b317-81d6150f4bb2"). InnerVolumeSpecName "kube-api-access-rzclq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.474098 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd7d095-1bd3-4307-b317-81d6150f4bb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cd7d095-1bd3-4307-b317-81d6150f4bb2" (UID: "7cd7d095-1bd3-4307-b317-81d6150f4bb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.478425 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd7d095-1bd3-4307-b317-81d6150f4bb2-config-data" (OuterVolumeSpecName: "config-data") pod "7cd7d095-1bd3-4307-b317-81d6150f4bb2" (UID: "7cd7d095-1bd3-4307-b317-81d6150f4bb2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.545727 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd7d095-1bd3-4307-b317-81d6150f4bb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.545774 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzclq\" (UniqueName: \"kubernetes.io/projected/7cd7d095-1bd3-4307-b317-81d6150f4bb2-kube-api-access-rzclq\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.545787 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd7d095-1bd3-4307-b317-81d6150f4bb2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.739469 4813 generic.go:334] "Generic (PLEG): container finished" podID="7cd7d095-1bd3-4307-b317-81d6150f4bb2" containerID="5f875800ffd45e80283683e8420f66bdf3acfdc7bba50970ffbd88e70d650994" 
exitCode=0 Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.740253 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7cd7d095-1bd3-4307-b317-81d6150f4bb2","Type":"ContainerDied","Data":"5f875800ffd45e80283683e8420f66bdf3acfdc7bba50970ffbd88e70d650994"} Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.740304 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7cd7d095-1bd3-4307-b317-81d6150f4bb2","Type":"ContainerDied","Data":"bd165d9e9a96b8a55a9fa2cb4d0ed43adee11e97f6d59fc2b6e74dbd9951c124"} Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.740324 4813 scope.go:117] "RemoveContainer" containerID="5f875800ffd45e80283683e8420f66bdf3acfdc7bba50970ffbd88e70d650994" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.740459 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.770672 4813 scope.go:117] "RemoveContainer" containerID="5f875800ffd45e80283683e8420f66bdf3acfdc7bba50970ffbd88e70d650994" Feb 19 20:01:40 crc kubenswrapper[4813]: E0219 20:01:40.771426 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f875800ffd45e80283683e8420f66bdf3acfdc7bba50970ffbd88e70d650994\": container with ID starting with 5f875800ffd45e80283683e8420f66bdf3acfdc7bba50970ffbd88e70d650994 not found: ID does not exist" containerID="5f875800ffd45e80283683e8420f66bdf3acfdc7bba50970ffbd88e70d650994" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.771605 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f875800ffd45e80283683e8420f66bdf3acfdc7bba50970ffbd88e70d650994"} err="failed to get container status \"5f875800ffd45e80283683e8420f66bdf3acfdc7bba50970ffbd88e70d650994\": rpc error: code = NotFound desc 
= could not find container \"5f875800ffd45e80283683e8420f66bdf3acfdc7bba50970ffbd88e70d650994\": container with ID starting with 5f875800ffd45e80283683e8420f66bdf3acfdc7bba50970ffbd88e70d650994 not found: ID does not exist" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.792097 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.810240 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.821925 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 20:01:40 crc kubenswrapper[4813]: E0219 20:01:40.822669 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd7d095-1bd3-4307-b317-81d6150f4bb2" containerName="nova-cell1-conductor-conductor" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.822704 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd7d095-1bd3-4307-b317-81d6150f4bb2" containerName="nova-cell1-conductor-conductor" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.823060 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd7d095-1bd3-4307-b317-81d6150f4bb2" containerName="nova-cell1-conductor-conductor" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.824045 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.829148 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.830829 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.954181 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqscl\" (UniqueName: \"kubernetes.io/projected/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-kube-api-access-rqscl\") pod \"nova-cell1-conductor-0\" (UID: \"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.954280 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:01:40 crc kubenswrapper[4813]: I0219 20:01:40.954521 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.025727 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.056426 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.056654 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqscl\" (UniqueName: \"kubernetes.io/projected/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-kube-api-access-rqscl\") pod \"nova-cell1-conductor-0\" (UID: \"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.056823 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.063856 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.068490 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.084466 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqscl\" (UniqueName: \"kubernetes.io/projected/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-kube-api-access-rqscl\") pod \"nova-cell1-conductor-0\" 
(UID: \"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.150142 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.496325 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cd7d095-1bd3-4307-b317-81d6150f4bb2" path="/var/lib/kubelet/pods/7cd7d095-1bd3-4307-b317-81d6150f4bb2/volumes" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.587472 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.705550 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 20:01:41 crc kubenswrapper[4813]: W0219 20:01:41.713511 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdcf9cfd_6bb7_482e_ab95_f68f52b045ef.slice/crio-fa58099fa6b5f3a0c91cc0b5c6808ea75d0193175ee15d7f80516e3be35c51b2 WatchSource:0}: Error finding container fa58099fa6b5f3a0c91cc0b5c6808ea75d0193175ee15d7f80516e3be35c51b2: Status 404 returned error can't find the container with id fa58099fa6b5f3a0c91cc0b5c6808ea75d0193175ee15d7f80516e3be35c51b2 Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.752542 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef","Type":"ContainerStarted","Data":"fa58099fa6b5f3a0c91cc0b5c6808ea75d0193175ee15d7f80516e3be35c51b2"} Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.754715 4813 generic.go:334] "Generic (PLEG): container finished" podID="5e9ac6a7-48ef-4b7c-b463-adb3efd9675d" containerID="6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f" exitCode=0 Feb 19 20:01:41 crc kubenswrapper[4813]: 
I0219 20:01:41.755597 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.756043 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d","Type":"ContainerDied","Data":"6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f"} Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.756072 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d","Type":"ContainerDied","Data":"7f4f2d7261c75fd12b5299e4556e42ddf0dd909b3157bcc5434d9677d332a494"} Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.756088 4813 scope.go:117] "RemoveContainer" containerID="6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.768990 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-combined-ca-bundle\") pod \"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d\" (UID: \"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d\") " Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.769091 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7mh4\" (UniqueName: \"kubernetes.io/projected/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-kube-api-access-n7mh4\") pod \"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d\" (UID: \"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d\") " Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.769181 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-config-data\") pod \"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d\" (UID: \"5e9ac6a7-48ef-4b7c-b463-adb3efd9675d\") " Feb 19 
20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.773036 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-kube-api-access-n7mh4" (OuterVolumeSpecName: "kube-api-access-n7mh4") pod "5e9ac6a7-48ef-4b7c-b463-adb3efd9675d" (UID: "5e9ac6a7-48ef-4b7c-b463-adb3efd9675d"). InnerVolumeSpecName "kube-api-access-n7mh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.773835 4813 scope.go:117] "RemoveContainer" containerID="6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f" Feb 19 20:01:41 crc kubenswrapper[4813]: E0219 20:01:41.774712 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f\": container with ID starting with 6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f not found: ID does not exist" containerID="6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.774747 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f"} err="failed to get container status \"6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f\": rpc error: code = NotFound desc = could not find container \"6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f\": container with ID starting with 6b2d492f986af0769ab9a01489d292ba42755bddb24d527af91d643eb8dd6b6f not found: ID does not exist" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.799737 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e9ac6a7-48ef-4b7c-b463-adb3efd9675d" 
(UID: "5e9ac6a7-48ef-4b7c-b463-adb3efd9675d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.802773 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-config-data" (OuterVolumeSpecName: "config-data") pod "5e9ac6a7-48ef-4b7c-b463-adb3efd9675d" (UID: "5e9ac6a7-48ef-4b7c-b463-adb3efd9675d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.872704 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.872742 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:41 crc kubenswrapper[4813]: I0219 20:01:41.872753 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7mh4\" (UniqueName: \"kubernetes.io/projected/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d-kube-api-access-n7mh4\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.149255 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.163842 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.189013 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:01:42 crc kubenswrapper[4813]: E0219 20:01:42.192596 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5e9ac6a7-48ef-4b7c-b463-adb3efd9675d" containerName="nova-scheduler-scheduler" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.192636 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e9ac6a7-48ef-4b7c-b463-adb3efd9675d" containerName="nova-scheduler-scheduler" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.205314 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e9ac6a7-48ef-4b7c-b463-adb3efd9675d" containerName="nova-scheduler-scheduler" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.206587 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.210732 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.226454 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.285345 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47c17d2d-dd38-4554-aefc-6a8132743f0d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47c17d2d-dd38-4554-aefc-6a8132743f0d\") " pod="openstack/nova-scheduler-0" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.285465 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p7nn\" (UniqueName: \"kubernetes.io/projected/47c17d2d-dd38-4554-aefc-6a8132743f0d-kube-api-access-5p7nn\") pod \"nova-scheduler-0\" (UID: \"47c17d2d-dd38-4554-aefc-6a8132743f0d\") " pod="openstack/nova-scheduler-0" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.285492 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/47c17d2d-dd38-4554-aefc-6a8132743f0d-config-data\") pod \"nova-scheduler-0\" (UID: \"47c17d2d-dd38-4554-aefc-6a8132743f0d\") " pod="openstack/nova-scheduler-0" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.387714 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47c17d2d-dd38-4554-aefc-6a8132743f0d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47c17d2d-dd38-4554-aefc-6a8132743f0d\") " pod="openstack/nova-scheduler-0" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.387851 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p7nn\" (UniqueName: \"kubernetes.io/projected/47c17d2d-dd38-4554-aefc-6a8132743f0d-kube-api-access-5p7nn\") pod \"nova-scheduler-0\" (UID: \"47c17d2d-dd38-4554-aefc-6a8132743f0d\") " pod="openstack/nova-scheduler-0" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.387881 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47c17d2d-dd38-4554-aefc-6a8132743f0d-config-data\") pod \"nova-scheduler-0\" (UID: \"47c17d2d-dd38-4554-aefc-6a8132743f0d\") " pod="openstack/nova-scheduler-0" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.392738 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47c17d2d-dd38-4554-aefc-6a8132743f0d-config-data\") pod \"nova-scheduler-0\" (UID: \"47c17d2d-dd38-4554-aefc-6a8132743f0d\") " pod="openstack/nova-scheduler-0" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.403010 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47c17d2d-dd38-4554-aefc-6a8132743f0d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47c17d2d-dd38-4554-aefc-6a8132743f0d\") " pod="openstack/nova-scheduler-0" 
Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.408537 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p7nn\" (UniqueName: \"kubernetes.io/projected/47c17d2d-dd38-4554-aefc-6a8132743f0d-kube-api-access-5p7nn\") pod \"nova-scheduler-0\" (UID: \"47c17d2d-dd38-4554-aefc-6a8132743f0d\") " pod="openstack/nova-scheduler-0" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.528024 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.764685 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef","Type":"ContainerStarted","Data":"47002169e518c8ac76b52c010516d0a8761988c12aa436f2c9e4294ec8f69d4a"} Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.765212 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.948659 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.9486316500000003 podStartE2EDuration="2.94863165s" podCreationTimestamp="2026-02-19 20:01:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:01:42.7863678 +0000 UTC m=+5522.011808341" watchObservedRunningTime="2026-02-19 20:01:42.94863165 +0000 UTC m=+5522.174072211" Feb 19 20:01:42 crc kubenswrapper[4813]: W0219 20:01:42.958639 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47c17d2d_dd38_4554_aefc_6a8132743f0d.slice/crio-08a6f1cb802d28eefe4a2835d3da492cb5400165f57a4dec01e32fe1518ae34b WatchSource:0}: Error finding container 
08a6f1cb802d28eefe4a2835d3da492cb5400165f57a4dec01e32fe1518ae34b: Status 404 returned error can't find the container with id 08a6f1cb802d28eefe4a2835d3da492cb5400165f57a4dec01e32fe1518ae34b Feb 19 20:01:42 crc kubenswrapper[4813]: I0219 20:01:42.961279 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:01:43 crc kubenswrapper[4813]: I0219 20:01:43.486425 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e9ac6a7-48ef-4b7c-b463-adb3efd9675d" path="/var/lib/kubelet/pods/5e9ac6a7-48ef-4b7c-b463-adb3efd9675d/volumes" Feb 19 20:01:43 crc kubenswrapper[4813]: I0219 20:01:43.701440 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 20:01:43 crc kubenswrapper[4813]: I0219 20:01:43.701500 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 20:01:43 crc kubenswrapper[4813]: I0219 20:01:43.776009 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47c17d2d-dd38-4554-aefc-6a8132743f0d","Type":"ContainerStarted","Data":"7bfa4f372e4835757393b69e224be9541ef1026a262b108c84eeb2602ec68e1f"} Feb 19 20:01:43 crc kubenswrapper[4813]: I0219 20:01:43.776062 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47c17d2d-dd38-4554-aefc-6a8132743f0d","Type":"ContainerStarted","Data":"08a6f1cb802d28eefe4a2835d3da492cb5400165f57a4dec01e32fe1518ae34b"} Feb 19 20:01:43 crc kubenswrapper[4813]: I0219 20:01:43.799107 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.799085932 podStartE2EDuration="1.799085932s" podCreationTimestamp="2026-02-19 20:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:01:43.790086925 +0000 UTC m=+5523.015527456" 
watchObservedRunningTime="2026-02-19 20:01:43.799085932 +0000 UTC m=+5523.024526473" Feb 19 20:01:46 crc kubenswrapper[4813]: I0219 20:01:46.025035 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:46 crc kubenswrapper[4813]: I0219 20:01:46.038542 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:46 crc kubenswrapper[4813]: I0219 20:01:46.213480 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 20:01:46 crc kubenswrapper[4813]: I0219 20:01:46.822989 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 20:01:47 crc kubenswrapper[4813]: I0219 20:01:47.529065 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 20:01:48 crc kubenswrapper[4813]: I0219 20:01:48.155503 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 20:01:48 crc kubenswrapper[4813]: I0219 20:01:48.701930 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 20:01:48 crc kubenswrapper[4813]: I0219 20:01:48.701984 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 20:01:48 crc kubenswrapper[4813]: I0219 20:01:48.714502 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 20:01:48 crc kubenswrapper[4813]: I0219 20:01:48.714580 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 20:01:49 crc kubenswrapper[4813]: I0219 20:01:49.826147 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="035233b2-efbc-4c6a-a82d-44c4742eed8d" 
containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.80:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:01:49 crc kubenswrapper[4813]: I0219 20:01:49.868094 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4a04b863-542a-4de0-82c0-f8e12a63c47d" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.79:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:01:49 crc kubenswrapper[4813]: I0219 20:01:49.868223 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="035233b2-efbc-4c6a-a82d-44c4742eed8d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.80:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:01:49 crc kubenswrapper[4813]: I0219 20:01:49.869032 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4a04b863-542a-4de0-82c0-f8e12a63c47d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.79:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:01:51 crc kubenswrapper[4813]: I0219 20:01:51.866031 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 20:01:51 crc kubenswrapper[4813]: I0219 20:01:51.868031 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 20:01:51 crc kubenswrapper[4813]: I0219 20:01:51.870447 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 20:01:51 crc kubenswrapper[4813]: I0219 20:01:51.881000 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 20:01:51 crc kubenswrapper[4813]: I0219 20:01:51.970916 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kj5f\" (UniqueName: \"kubernetes.io/projected/c04ffaf3-3436-4ef0-ba69-d77770b5b580-kube-api-access-5kj5f\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:51 crc kubenswrapper[4813]: I0219 20:01:51.971017 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-scripts\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:51 crc kubenswrapper[4813]: I0219 20:01:51.971754 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c04ffaf3-3436-4ef0-ba69-d77770b5b580-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:51 crc kubenswrapper[4813]: I0219 20:01:51.971838 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-config-data\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:51 crc kubenswrapper[4813]: I0219 20:01:51.972072 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:51 crc kubenswrapper[4813]: I0219 20:01:51.972119 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.074136 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.074364 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.074545 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kj5f\" (UniqueName: \"kubernetes.io/projected/c04ffaf3-3436-4ef0-ba69-d77770b5b580-kube-api-access-5kj5f\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.074661 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-scripts\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.074738 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c04ffaf3-3436-4ef0-ba69-d77770b5b580-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.074816 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-config-data\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.074874 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c04ffaf3-3436-4ef0-ba69-d77770b5b580-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.080946 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.081829 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-config-data\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " 
pod="openstack/cinder-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.090844 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-scripts\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.091161 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.094482 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kj5f\" (UniqueName: \"kubernetes.io/projected/c04ffaf3-3436-4ef0-ba69-d77770b5b580-kube-api-access-5kj5f\") pod \"cinder-scheduler-0\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " pod="openstack/cinder-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.190637 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.528497 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.563441 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 20:01:52 crc kubenswrapper[4813]: W0219 20:01:52.628796 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc04ffaf3_3436_4ef0_ba69_d77770b5b580.slice/crio-f30e5dfa7c32251f8c7ba9cb6a9e18e7a9616ffad4f04030103084b77b0e6919 WatchSource:0}: Error finding container f30e5dfa7c32251f8c7ba9cb6a9e18e7a9616ffad4f04030103084b77b0e6919: Status 404 returned error can't find the container with id f30e5dfa7c32251f8c7ba9cb6a9e18e7a9616ffad4f04030103084b77b0e6919 Feb 19 20:01:52 crc kubenswrapper[4813]: I0219 20:01:52.628834 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 20:01:53 crc kubenswrapper[4813]: I0219 20:01:53.037944 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c04ffaf3-3436-4ef0-ba69-d77770b5b580","Type":"ContainerStarted","Data":"f30e5dfa7c32251f8c7ba9cb6a9e18e7a9616ffad4f04030103084b77b0e6919"} Feb 19 20:01:53 crc kubenswrapper[4813]: I0219 20:01:53.219149 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 20:01:54 crc kubenswrapper[4813]: I0219 20:01:54.010826 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 20:01:54 crc kubenswrapper[4813]: I0219 20:01:54.011716 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c82d10be-9073-449b-a21e-1ff8b08cb71e" containerName="cinder-api-log" 
containerID="cri-o://2f0a6965a5b63a14c8169b127c09f0a0c4c5502405553bb36fe98e9b2d777263" gracePeriod=30 Feb 19 20:01:54 crc kubenswrapper[4813]: I0219 20:01:54.012492 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c82d10be-9073-449b-a21e-1ff8b08cb71e" containerName="cinder-api" containerID="cri-o://f08e3fa15b28667c62fd7a48a9c492b010c0d87465e00c5ab9c91539c747fa56" gracePeriod=30 Feb 19 20:01:54 crc kubenswrapper[4813]: I0219 20:01:54.052134 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c04ffaf3-3436-4ef0-ba69-d77770b5b580","Type":"ContainerStarted","Data":"b61bcb3d7a081863ccf6acfe6104d6092a64038900b811d1824de42e0d3e0e63"} Feb 19 20:01:54 crc kubenswrapper[4813]: I0219 20:01:54.054378 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c04ffaf3-3436-4ef0-ba69-d77770b5b580","Type":"ContainerStarted","Data":"75a058bcf5c5e7e3901870fa0159171640d2abfa37e3c38f056e1c949c19754a"} Feb 19 20:01:54 crc kubenswrapper[4813]: I0219 20:01:54.077344 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.077320094 podStartE2EDuration="3.077320094s" podCreationTimestamp="2026-02-19 20:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:01:54.07006416 +0000 UTC m=+5533.295504721" watchObservedRunningTime="2026-02-19 20:01:54.077320094 +0000 UTC m=+5533.302760635" Feb 19 20:01:54 crc kubenswrapper[4813]: I0219 20:01:54.871005 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 19 20:01:54 crc kubenswrapper[4813]: I0219 20:01:54.873204 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:54 crc kubenswrapper[4813]: I0219 20:01:54.875043 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Feb 19 20:01:54 crc kubenswrapper[4813]: I0219 20:01:54.897142 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.026672 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p25nk\" (UniqueName: \"kubernetes.io/projected/4543bfcb-ce40-4284-8e91-5955bd0ada4f-kube-api-access-p25nk\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.027012 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.027189 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4543bfcb-ce40-4284-8e91-5955bd0ada4f-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.027389 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-run\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc 
kubenswrapper[4813]: I0219 20:01:55.027535 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4543bfcb-ce40-4284-8e91-5955bd0ada4f-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.027702 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.028273 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.028347 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4543bfcb-ce40-4284-8e91-5955bd0ada4f-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.028373 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4543bfcb-ce40-4284-8e91-5955bd0ada4f-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 
20:01:55.028515 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.028582 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.028638 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.028700 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-dev\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.028716 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4543bfcb-ce40-4284-8e91-5955bd0ada4f-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.028733 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-sys\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.028774 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.062118 4813 generic.go:334] "Generic (PLEG): container finished" podID="c82d10be-9073-449b-a21e-1ff8b08cb71e" containerID="2f0a6965a5b63a14c8169b127c09f0a0c4c5502405553bb36fe98e9b2d777263" exitCode=143 Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.062729 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c82d10be-9073-449b-a21e-1ff8b08cb71e","Type":"ContainerDied","Data":"2f0a6965a5b63a14c8169b127c09f0a0c4c5502405553bb36fe98e9b2d777263"} Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.130705 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.130994 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p25nk\" (UniqueName: \"kubernetes.io/projected/4543bfcb-ce40-4284-8e91-5955bd0ada4f-kube-api-access-p25nk\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " 
pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.131079 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4543bfcb-ce40-4284-8e91-5955bd0ada4f-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.131153 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.131237 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-run\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.131321 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4543bfcb-ce40-4284-8e91-5955bd0ada4f-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.131390 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.131475 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.131560 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4543bfcb-ce40-4284-8e91-5955bd0ada4f-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.131630 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4543bfcb-ce40-4284-8e91-5955bd0ada4f-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.131720 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.131790 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.131866 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.131943 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-dev\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.132030 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4543bfcb-ce40-4284-8e91-5955bd0ada4f-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.132118 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-sys\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.132259 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-sys\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.132604 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " 
pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.133671 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.133829 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-run\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.134365 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.134505 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.134625 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.135196 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.135221 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-dev\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.135378 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/4543bfcb-ce40-4284-8e91-5955bd0ada4f-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.139263 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4543bfcb-ce40-4284-8e91-5955bd0ada4f-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.139815 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4543bfcb-ce40-4284-8e91-5955bd0ada4f-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.140649 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4543bfcb-ce40-4284-8e91-5955bd0ada4f-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " 
pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.143508 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4543bfcb-ce40-4284-8e91-5955bd0ada4f-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.152624 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p25nk\" (UniqueName: \"kubernetes.io/projected/4543bfcb-ce40-4284-8e91-5955bd0ada4f-kube-api-access-p25nk\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.153708 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4543bfcb-ce40-4284-8e91-5955bd0ada4f-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"4543bfcb-ce40-4284-8e91-5955bd0ada4f\") " pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.197234 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.576236 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.577854 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.581587 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.599770 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.745840 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-lib-modules\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.745913 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.746030 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.746085 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-ceph\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.746154 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-etc-nvme\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.746210 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-config-data\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.746237 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-scripts\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.746302 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-config-data-custom\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.746359 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.746455 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-run\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.746533 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-sys\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.746626 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.746713 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.746856 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8bv7\" (UniqueName: \"kubernetes.io/projected/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-kube-api-access-x8bv7\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.746889 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-dev\") pod \"cinder-backup-0\" (UID: 
\"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.746914 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.814178 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.844823 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.848969 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-dev\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.849288 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.849324 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-lib-modules\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.849357 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.849382 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.849396 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-ceph\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.849419 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-etc-nvme\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.849441 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-config-data\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.849459 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-scripts\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc 
kubenswrapper[4813]: I0219 20:01:55.849481 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-config-data-custom\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.849502 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.849538 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-run\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.849570 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-sys\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.849590 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.849608 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.849657 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8bv7\" (UniqueName: \"kubernetes.io/projected/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-kube-api-access-x8bv7\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.849970 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-dev\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.852263 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.852328 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-lib-modules\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.852375 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc 
kubenswrapper[4813]: I0219 20:01:55.852407 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-run\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.855064 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-sys\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.855064 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.855124 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.855168 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-etc-nvme\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.855551 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-etc-machine-id\") pod \"cinder-backup-0\" (UID: 
\"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.861048 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-config-data-custom\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.861140 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-config-data\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.862521 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.865406 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-ceph\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.869422 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-scripts\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.876396 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8bv7\" (UniqueName: 
\"kubernetes.io/projected/80ce1c4a-b9d4-4da3-a900-55a7dddd6070-kube-api-access-x8bv7\") pod \"cinder-backup-0\" (UID: \"80ce1c4a-b9d4-4da3-a900-55a7dddd6070\") " pod="openstack/cinder-backup-0" Feb 19 20:01:55 crc kubenswrapper[4813]: I0219 20:01:55.895406 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 19 20:01:56 crc kubenswrapper[4813]: I0219 20:01:56.096696 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"4543bfcb-ce40-4284-8e91-5955bd0ada4f","Type":"ContainerStarted","Data":"01b77c888ea8ab5da292b583f4563f24388bf7254714dda259b5172f5edc9b25"} Feb 19 20:01:56 crc kubenswrapper[4813]: I0219 20:01:56.437665 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.108065 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"4543bfcb-ce40-4284-8e91-5955bd0ada4f","Type":"ContainerStarted","Data":"00264dddcd8196f9298a9a00364581b26d151708d244c1bf4f761159f3cf3f5c"} Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.109754 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"4543bfcb-ce40-4284-8e91-5955bd0ada4f","Type":"ContainerStarted","Data":"5b2b748b5caa940110c28c27610c453e549ec0dfa2ec7bb0a75e55ed13a8efc0"} Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.109782 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"80ce1c4a-b9d4-4da3-a900-55a7dddd6070","Type":"ContainerStarted","Data":"ad48baa684d35fade2b95fdbc1ebfa9c463b966c0d408398d06357b9cf9915b8"} Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.142125 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.414889569 podStartE2EDuration="3.142099119s" 
podCreationTimestamp="2026-02-19 20:01:54 +0000 UTC" firstStartedPulling="2026-02-19 20:01:55.844489041 +0000 UTC m=+5535.069929592" lastFinishedPulling="2026-02-19 20:01:56.571698601 +0000 UTC m=+5535.797139142" observedRunningTime="2026-02-19 20:01:57.141640304 +0000 UTC m=+5536.367080845" watchObservedRunningTime="2026-02-19 20:01:57.142099119 +0000 UTC m=+5536.367539660" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.190858 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.301351 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c82d10be-9073-449b-a21e-1ff8b08cb71e" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.1.76:8776/healthcheck\": dial tcp 10.217.1.76:8776: connect: connection refused" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.618267 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.681417 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-config-data-custom\") pod \"c82d10be-9073-449b-a21e-1ff8b08cb71e\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.683005 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c82d10be-9073-449b-a21e-1ff8b08cb71e-etc-machine-id\") pod \"c82d10be-9073-449b-a21e-1ff8b08cb71e\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.683041 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gdt7\" (UniqueName: \"kubernetes.io/projected/c82d10be-9073-449b-a21e-1ff8b08cb71e-kube-api-access-4gdt7\") pod \"c82d10be-9073-449b-a21e-1ff8b08cb71e\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.683069 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-scripts\") pod \"c82d10be-9073-449b-a21e-1ff8b08cb71e\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.683094 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-combined-ca-bundle\") pod \"c82d10be-9073-449b-a21e-1ff8b08cb71e\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.683116 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-config-data\") pod \"c82d10be-9073-449b-a21e-1ff8b08cb71e\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.683147 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c82d10be-9073-449b-a21e-1ff8b08cb71e-logs\") pod \"c82d10be-9073-449b-a21e-1ff8b08cb71e\" (UID: \"c82d10be-9073-449b-a21e-1ff8b08cb71e\") " Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.684657 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c82d10be-9073-449b-a21e-1ff8b08cb71e-logs" (OuterVolumeSpecName: "logs") pod "c82d10be-9073-449b-a21e-1ff8b08cb71e" (UID: "c82d10be-9073-449b-a21e-1ff8b08cb71e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.684875 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c82d10be-9073-449b-a21e-1ff8b08cb71e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c82d10be-9073-449b-a21e-1ff8b08cb71e" (UID: "c82d10be-9073-449b-a21e-1ff8b08cb71e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.686078 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c82d10be-9073-449b-a21e-1ff8b08cb71e" (UID: "c82d10be-9073-449b-a21e-1ff8b08cb71e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.688654 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c82d10be-9073-449b-a21e-1ff8b08cb71e-kube-api-access-4gdt7" (OuterVolumeSpecName: "kube-api-access-4gdt7") pod "c82d10be-9073-449b-a21e-1ff8b08cb71e" (UID: "c82d10be-9073-449b-a21e-1ff8b08cb71e"). InnerVolumeSpecName "kube-api-access-4gdt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.703811 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-scripts" (OuterVolumeSpecName: "scripts") pod "c82d10be-9073-449b-a21e-1ff8b08cb71e" (UID: "c82d10be-9073-449b-a21e-1ff8b08cb71e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.776285 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c82d10be-9073-449b-a21e-1ff8b08cb71e" (UID: "c82d10be-9073-449b-a21e-1ff8b08cb71e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.783929 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-config-data" (OuterVolumeSpecName: "config-data") pod "c82d10be-9073-449b-a21e-1ff8b08cb71e" (UID: "c82d10be-9073-449b-a21e-1ff8b08cb71e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.785096 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.785117 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c82d10be-9073-449b-a21e-1ff8b08cb71e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.785126 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gdt7\" (UniqueName: \"kubernetes.io/projected/c82d10be-9073-449b-a21e-1ff8b08cb71e-kube-api-access-4gdt7\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.785135 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.785144 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.785152 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c82d10be-9073-449b-a21e-1ff8b08cb71e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:57 crc kubenswrapper[4813]: I0219 20:01:57.785160 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c82d10be-9073-449b-a21e-1ff8b08cb71e-logs\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.124932 4813 generic.go:334] "Generic 
(PLEG): container finished" podID="c82d10be-9073-449b-a21e-1ff8b08cb71e" containerID="f08e3fa15b28667c62fd7a48a9c492b010c0d87465e00c5ab9c91539c747fa56" exitCode=0 Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.125058 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c82d10be-9073-449b-a21e-1ff8b08cb71e","Type":"ContainerDied","Data":"f08e3fa15b28667c62fd7a48a9c492b010c0d87465e00c5ab9c91539c747fa56"} Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.125096 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c82d10be-9073-449b-a21e-1ff8b08cb71e","Type":"ContainerDied","Data":"e94e466069b9e5cf1069bb2956a67812d5bc01cb3f7b7a4be39c03d287844597"} Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.125116 4813 scope.go:117] "RemoveContainer" containerID="f08e3fa15b28667c62fd7a48a9c492b010c0d87465e00c5ab9c91539c747fa56" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.125259 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.152766 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"80ce1c4a-b9d4-4da3-a900-55a7dddd6070","Type":"ContainerStarted","Data":"f6ae178b3a3ef2588de4dcccb09a04bd4daf3c435b78c1273a7257cf64017025"} Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.152835 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"80ce1c4a-b9d4-4da3-a900-55a7dddd6070","Type":"ContainerStarted","Data":"e6b354083c2bac3ec8a4536c010d82833449322a4cd3f87f2749d691a19fe8f0"} Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.205571 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.476947869 podStartE2EDuration="3.205544802s" podCreationTimestamp="2026-02-19 20:01:55 +0000 UTC" firstStartedPulling="2026-02-19 20:01:56.459330434 +0000 UTC m=+5535.684770975" lastFinishedPulling="2026-02-19 20:01:57.187927347 +0000 UTC m=+5536.413367908" observedRunningTime="2026-02-19 20:01:58.188155885 +0000 UTC m=+5537.413596436" watchObservedRunningTime="2026-02-19 20:01:58.205544802 +0000 UTC m=+5537.430985343" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.225454 4813 scope.go:117] "RemoveContainer" containerID="2f0a6965a5b63a14c8169b127c09f0a0c4c5502405553bb36fe98e9b2d777263" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.242474 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.252796 4813 scope.go:117] "RemoveContainer" containerID="f08e3fa15b28667c62fd7a48a9c492b010c0d87465e00c5ab9c91539c747fa56" Feb 19 20:01:58 crc kubenswrapper[4813]: E0219 20:01:58.253894 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f08e3fa15b28667c62fd7a48a9c492b010c0d87465e00c5ab9c91539c747fa56\": container with ID starting with f08e3fa15b28667c62fd7a48a9c492b010c0d87465e00c5ab9c91539c747fa56 not found: ID does not exist" containerID="f08e3fa15b28667c62fd7a48a9c492b010c0d87465e00c5ab9c91539c747fa56" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.253926 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f08e3fa15b28667c62fd7a48a9c492b010c0d87465e00c5ab9c91539c747fa56"} err="failed to get container status \"f08e3fa15b28667c62fd7a48a9c492b010c0d87465e00c5ab9c91539c747fa56\": rpc error: code = NotFound desc = could not find container \"f08e3fa15b28667c62fd7a48a9c492b010c0d87465e00c5ab9c91539c747fa56\": container with ID starting with f08e3fa15b28667c62fd7a48a9c492b010c0d87465e00c5ab9c91539c747fa56 not found: ID does not exist" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.253945 4813 scope.go:117] "RemoveContainer" containerID="2f0a6965a5b63a14c8169b127c09f0a0c4c5502405553bb36fe98e9b2d777263" Feb 19 20:01:58 crc kubenswrapper[4813]: E0219 20:01:58.254264 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f0a6965a5b63a14c8169b127c09f0a0c4c5502405553bb36fe98e9b2d777263\": container with ID starting with 2f0a6965a5b63a14c8169b127c09f0a0c4c5502405553bb36fe98e9b2d777263 not found: ID does not exist" containerID="2f0a6965a5b63a14c8169b127c09f0a0c4c5502405553bb36fe98e9b2d777263" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.254303 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f0a6965a5b63a14c8169b127c09f0a0c4c5502405553bb36fe98e9b2d777263"} err="failed to get container status \"2f0a6965a5b63a14c8169b127c09f0a0c4c5502405553bb36fe98e9b2d777263\": rpc error: code = NotFound desc = could not find container \"2f0a6965a5b63a14c8169b127c09f0a0c4c5502405553bb36fe98e9b2d777263\": container with ID 
starting with 2f0a6965a5b63a14c8169b127c09f0a0c4c5502405553bb36fe98e9b2d777263 not found: ID does not exist" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.259392 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.268830 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 20:01:58 crc kubenswrapper[4813]: E0219 20:01:58.269254 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82d10be-9073-449b-a21e-1ff8b08cb71e" containerName="cinder-api-log" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.269268 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82d10be-9073-449b-a21e-1ff8b08cb71e" containerName="cinder-api-log" Feb 19 20:01:58 crc kubenswrapper[4813]: E0219 20:01:58.269296 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c82d10be-9073-449b-a21e-1ff8b08cb71e" containerName="cinder-api" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.269302 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82d10be-9073-449b-a21e-1ff8b08cb71e" containerName="cinder-api" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.269463 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82d10be-9073-449b-a21e-1ff8b08cb71e" containerName="cinder-api" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.269477 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82d10be-9073-449b-a21e-1ff8b08cb71e" containerName="cinder-api-log" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.270458 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.273663 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.300590 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.397360 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c947e6fd-22fc-4fb5-bc41-33896ad1c161-config-data\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.397405 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c947e6fd-22fc-4fb5-bc41-33896ad1c161-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.397772 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c947e6fd-22fc-4fb5-bc41-33896ad1c161-logs\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.398069 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c947e6fd-22fc-4fb5-bc41-33896ad1c161-config-data-custom\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.398113 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c947e6fd-22fc-4fb5-bc41-33896ad1c161-scripts\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.398178 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmm5v\" (UniqueName: \"kubernetes.io/projected/c947e6fd-22fc-4fb5-bc41-33896ad1c161-kube-api-access-wmm5v\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.398219 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c947e6fd-22fc-4fb5-bc41-33896ad1c161-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.500197 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c947e6fd-22fc-4fb5-bc41-33896ad1c161-logs\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.500325 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c947e6fd-22fc-4fb5-bc41-33896ad1c161-config-data-custom\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.500355 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c947e6fd-22fc-4fb5-bc41-33896ad1c161-scripts\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " 
pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.500388 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmm5v\" (UniqueName: \"kubernetes.io/projected/c947e6fd-22fc-4fb5-bc41-33896ad1c161-kube-api-access-wmm5v\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.500419 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c947e6fd-22fc-4fb5-bc41-33896ad1c161-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.500472 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c947e6fd-22fc-4fb5-bc41-33896ad1c161-config-data\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.500501 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c947e6fd-22fc-4fb5-bc41-33896ad1c161-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.500526 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c947e6fd-22fc-4fb5-bc41-33896ad1c161-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.500672 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c947e6fd-22fc-4fb5-bc41-33896ad1c161-logs\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.508844 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c947e6fd-22fc-4fb5-bc41-33896ad1c161-config-data-custom\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.508908 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c947e6fd-22fc-4fb5-bc41-33896ad1c161-config-data\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.509400 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c947e6fd-22fc-4fb5-bc41-33896ad1c161-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.509729 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c947e6fd-22fc-4fb5-bc41-33896ad1c161-scripts\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.518107 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmm5v\" (UniqueName: \"kubernetes.io/projected/c947e6fd-22fc-4fb5-bc41-33896ad1c161-kube-api-access-wmm5v\") pod \"cinder-api-0\" (UID: \"c947e6fd-22fc-4fb5-bc41-33896ad1c161\") " pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.589183 4813 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.705678 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.706987 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.708492 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.722966 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.724203 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.724280 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 20:01:58 crc kubenswrapper[4813]: I0219 20:01:58.737079 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 20:01:59 crc kubenswrapper[4813]: I0219 20:01:59.128182 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 20:01:59 crc kubenswrapper[4813]: I0219 20:01:59.181814 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c947e6fd-22fc-4fb5-bc41-33896ad1c161","Type":"ContainerStarted","Data":"c9f7683d326fd93f40847c54b42d27a599208397af3b5a5051f6d2897168b3eb"} Feb 19 20:01:59 crc kubenswrapper[4813]: I0219 20:01:59.186472 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 20:01:59 crc kubenswrapper[4813]: I0219 20:01:59.187735 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Feb 19 20:01:59 crc kubenswrapper[4813]: I0219 20:01:59.197719 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 20:01:59 crc kubenswrapper[4813]: I0219 20:01:59.489316 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c82d10be-9073-449b-a21e-1ff8b08cb71e" path="/var/lib/kubelet/pods/c82d10be-9073-449b-a21e-1ff8b08cb71e/volumes" Feb 19 20:02:00 crc kubenswrapper[4813]: I0219 20:02:00.197972 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Feb 19 20:02:00 crc kubenswrapper[4813]: I0219 20:02:00.209589 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c947e6fd-22fc-4fb5-bc41-33896ad1c161","Type":"ContainerStarted","Data":"4845256fc4aaa21ec9d6545dad92289b82de1a21cc10cacbdb12390944223636"} Feb 19 20:02:00 crc kubenswrapper[4813]: I0219 20:02:00.329390 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:02:00 crc kubenswrapper[4813]: I0219 20:02:00.329458 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:02:00 crc kubenswrapper[4813]: I0219 20:02:00.896079 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 19 20:02:01 crc kubenswrapper[4813]: I0219 20:02:01.218811 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"c947e6fd-22fc-4fb5-bc41-33896ad1c161","Type":"ContainerStarted","Data":"d0e066b96aee88b1dc58406ba87b636002251c5f325bbdcd3ce16e2f087d6834"} Feb 19 20:02:02 crc kubenswrapper[4813]: I0219 20:02:02.227551 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 20:02:02 crc kubenswrapper[4813]: I0219 20:02:02.404827 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 20:02:02 crc kubenswrapper[4813]: I0219 20:02:02.435156 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.435124327 podStartE2EDuration="4.435124327s" podCreationTimestamp="2026-02-19 20:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:02:01.240445253 +0000 UTC m=+5540.465885804" watchObservedRunningTime="2026-02-19 20:02:02.435124327 +0000 UTC m=+5541.660564888" Feb 19 20:02:02 crc kubenswrapper[4813]: I0219 20:02:02.473243 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 20:02:03 crc kubenswrapper[4813]: I0219 20:02:03.236343 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c04ffaf3-3436-4ef0-ba69-d77770b5b580" containerName="cinder-scheduler" containerID="cri-o://75a058bcf5c5e7e3901870fa0159171640d2abfa37e3c38f056e1c949c19754a" gracePeriod=30 Feb 19 20:02:03 crc kubenswrapper[4813]: I0219 20:02:03.236383 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c04ffaf3-3436-4ef0-ba69-d77770b5b580" containerName="probe" containerID="cri-o://b61bcb3d7a081863ccf6acfe6104d6092a64038900b811d1824de42e0d3e0e63" gracePeriod=30 Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.248816 4813 generic.go:334] "Generic (PLEG): 
container finished" podID="c04ffaf3-3436-4ef0-ba69-d77770b5b580" containerID="b61bcb3d7a081863ccf6acfe6104d6092a64038900b811d1824de42e0d3e0e63" exitCode=0 Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.249138 4813 generic.go:334] "Generic (PLEG): container finished" podID="c04ffaf3-3436-4ef0-ba69-d77770b5b580" containerID="75a058bcf5c5e7e3901870fa0159171640d2abfa37e3c38f056e1c949c19754a" exitCode=0 Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.249165 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c04ffaf3-3436-4ef0-ba69-d77770b5b580","Type":"ContainerDied","Data":"b61bcb3d7a081863ccf6acfe6104d6092a64038900b811d1824de42e0d3e0e63"} Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.249192 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c04ffaf3-3436-4ef0-ba69-d77770b5b580","Type":"ContainerDied","Data":"75a058bcf5c5e7e3901870fa0159171640d2abfa37e3c38f056e1c949c19754a"} Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.249202 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c04ffaf3-3436-4ef0-ba69-d77770b5b580","Type":"ContainerDied","Data":"f30e5dfa7c32251f8c7ba9cb6a9e18e7a9616ffad4f04030103084b77b0e6919"} Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.249214 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f30e5dfa7c32251f8c7ba9cb6a9e18e7a9616ffad4f04030103084b77b0e6919" Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.275132 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.405693 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kj5f\" (UniqueName: \"kubernetes.io/projected/c04ffaf3-3436-4ef0-ba69-d77770b5b580-kube-api-access-5kj5f\") pod \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.406648 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c04ffaf3-3436-4ef0-ba69-d77770b5b580-etc-machine-id\") pod \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.406704 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-combined-ca-bundle\") pod \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.406731 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c04ffaf3-3436-4ef0-ba69-d77770b5b580-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c04ffaf3-3436-4ef0-ba69-d77770b5b580" (UID: "c04ffaf3-3436-4ef0-ba69-d77770b5b580"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.406772 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-config-data-custom\") pod \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.406910 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-config-data\") pod \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.407075 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-scripts\") pod \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\" (UID: \"c04ffaf3-3436-4ef0-ba69-d77770b5b580\") " Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.407622 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c04ffaf3-3436-4ef0-ba69-d77770b5b580-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.412140 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c04ffaf3-3436-4ef0-ba69-d77770b5b580" (UID: "c04ffaf3-3436-4ef0-ba69-d77770b5b580"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.414677 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-scripts" (OuterVolumeSpecName: "scripts") pod "c04ffaf3-3436-4ef0-ba69-d77770b5b580" (UID: "c04ffaf3-3436-4ef0-ba69-d77770b5b580"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.419598 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04ffaf3-3436-4ef0-ba69-d77770b5b580-kube-api-access-5kj5f" (OuterVolumeSpecName: "kube-api-access-5kj5f") pod "c04ffaf3-3436-4ef0-ba69-d77770b5b580" (UID: "c04ffaf3-3436-4ef0-ba69-d77770b5b580"). InnerVolumeSpecName "kube-api-access-5kj5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.464554 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c04ffaf3-3436-4ef0-ba69-d77770b5b580" (UID: "c04ffaf3-3436-4ef0-ba69-d77770b5b580"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.508914 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-config-data" (OuterVolumeSpecName: "config-data") pod "c04ffaf3-3436-4ef0-ba69-d77770b5b580" (UID: "c04ffaf3-3436-4ef0-ba69-d77770b5b580"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.509923 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.510077 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.510153 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kj5f\" (UniqueName: \"kubernetes.io/projected/c04ffaf3-3436-4ef0-ba69-d77770b5b580-kube-api-access-5kj5f\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.510238 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:04 crc kubenswrapper[4813]: I0219 20:02:04.510313 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c04ffaf3-3436-4ef0-ba69-d77770b5b580-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.258482 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.298682 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.306864 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.331602 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 20:02:05 crc kubenswrapper[4813]: E0219 20:02:05.332175 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04ffaf3-3436-4ef0-ba69-d77770b5b580" containerName="cinder-scheduler" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.332203 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04ffaf3-3436-4ef0-ba69-d77770b5b580" containerName="cinder-scheduler" Feb 19 20:02:05 crc kubenswrapper[4813]: E0219 20:02:05.332246 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04ffaf3-3436-4ef0-ba69-d77770b5b580" containerName="probe" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.332255 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04ffaf3-3436-4ef0-ba69-d77770b5b580" containerName="probe" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.332529 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04ffaf3-3436-4ef0-ba69-d77770b5b580" containerName="probe" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.332577 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04ffaf3-3436-4ef0-ba69-d77770b5b580" containerName="cinder-scheduler" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.334199 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.336051 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.359979 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.426399 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/436773d7-aaef-4ce6-a7d4-987708819652-scripts\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.426539 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436773d7-aaef-4ce6-a7d4-987708819652-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.426805 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/436773d7-aaef-4ce6-a7d4-987708819652-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.426896 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/436773d7-aaef-4ce6-a7d4-987708819652-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.427066 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/436773d7-aaef-4ce6-a7d4-987708819652-config-data\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.427217 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgvhr\" (UniqueName: \"kubernetes.io/projected/436773d7-aaef-4ce6-a7d4-987708819652-kube-api-access-kgvhr\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.442476 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.486579 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04ffaf3-3436-4ef0-ba69-d77770b5b580" path="/var/lib/kubelet/pods/c04ffaf3-3436-4ef0-ba69-d77770b5b580/volumes" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.529821 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgvhr\" (UniqueName: \"kubernetes.io/projected/436773d7-aaef-4ce6-a7d4-987708819652-kube-api-access-kgvhr\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.529937 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/436773d7-aaef-4ce6-a7d4-987708819652-scripts\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.530024 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436773d7-aaef-4ce6-a7d4-987708819652-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.530144 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/436773d7-aaef-4ce6-a7d4-987708819652-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.530193 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/436773d7-aaef-4ce6-a7d4-987708819652-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.530232 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/436773d7-aaef-4ce6-a7d4-987708819652-config-data\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.530654 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/436773d7-aaef-4ce6-a7d4-987708819652-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.535448 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/436773d7-aaef-4ce6-a7d4-987708819652-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.536643 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/436773d7-aaef-4ce6-a7d4-987708819652-config-data\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.539245 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436773d7-aaef-4ce6-a7d4-987708819652-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.544502 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/436773d7-aaef-4ce6-a7d4-987708819652-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.547376 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgvhr\" (UniqueName: \"kubernetes.io/projected/436773d7-aaef-4ce6-a7d4-987708819652-kube-api-access-kgvhr\") pod \"cinder-scheduler-0\" (UID: \"436773d7-aaef-4ce6-a7d4-987708819652\") " pod="openstack/cinder-scheduler-0" Feb 19 20:02:05 crc kubenswrapper[4813]: I0219 20:02:05.652501 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 20:02:06 crc kubenswrapper[4813]: I0219 20:02:06.121111 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 19 20:02:06 crc kubenswrapper[4813]: I0219 20:02:06.129164 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 20:02:06 crc kubenswrapper[4813]: I0219 20:02:06.319139 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"436773d7-aaef-4ce6-a7d4-987708819652","Type":"ContainerStarted","Data":"15a70d32e8106ab4a7536c81440a5708e5620a3908a54e42ddc86e1895fd4ac1"} Feb 19 20:02:07 crc kubenswrapper[4813]: I0219 20:02:07.332292 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"436773d7-aaef-4ce6-a7d4-987708819652","Type":"ContainerStarted","Data":"6060a74b5270479e18c5f979cc08a525e23eeac0e667c67312139ad5924c37d9"} Feb 19 20:02:08 crc kubenswrapper[4813]: I0219 20:02:08.344309 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"436773d7-aaef-4ce6-a7d4-987708819652","Type":"ContainerStarted","Data":"9a58316599f38a70760fbeae6afdac8192bc0179913b3225c0100a91ae287eec"} Feb 19 20:02:08 crc kubenswrapper[4813]: I0219 20:02:08.376795 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.376771933 podStartE2EDuration="3.376771933s" podCreationTimestamp="2026-02-19 20:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:02:08.367443635 +0000 UTC m=+5547.592884176" watchObservedRunningTime="2026-02-19 20:02:08.376771933 +0000 UTC m=+5547.602212504" Feb 19 20:02:10 crc kubenswrapper[4813]: I0219 20:02:10.605642 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/cinder-api-0" Feb 19 20:02:10 crc kubenswrapper[4813]: I0219 20:02:10.654284 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 20:02:15 crc kubenswrapper[4813]: I0219 20:02:15.854551 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 20:02:24 crc kubenswrapper[4813]: I0219 20:02:24.920755 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ksgt6"] Feb 19 20:02:24 crc kubenswrapper[4813]: I0219 20:02:24.927833 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:24 crc kubenswrapper[4813]: I0219 20:02:24.939239 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksgt6"] Feb 19 20:02:25 crc kubenswrapper[4813]: I0219 20:02:25.027808 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d0d1066-a33d-4267-bdee-d91e9a89adc9-catalog-content\") pod \"redhat-marketplace-ksgt6\" (UID: \"8d0d1066-a33d-4267-bdee-d91e9a89adc9\") " pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:25 crc kubenswrapper[4813]: I0219 20:02:25.028429 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d0d1066-a33d-4267-bdee-d91e9a89adc9-utilities\") pod \"redhat-marketplace-ksgt6\" (UID: \"8d0d1066-a33d-4267-bdee-d91e9a89adc9\") " pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:25 crc kubenswrapper[4813]: I0219 20:02:25.028638 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95ppl\" (UniqueName: 
\"kubernetes.io/projected/8d0d1066-a33d-4267-bdee-d91e9a89adc9-kube-api-access-95ppl\") pod \"redhat-marketplace-ksgt6\" (UID: \"8d0d1066-a33d-4267-bdee-d91e9a89adc9\") " pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:25 crc kubenswrapper[4813]: I0219 20:02:25.130687 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d0d1066-a33d-4267-bdee-d91e9a89adc9-catalog-content\") pod \"redhat-marketplace-ksgt6\" (UID: \"8d0d1066-a33d-4267-bdee-d91e9a89adc9\") " pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:25 crc kubenswrapper[4813]: I0219 20:02:25.130825 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d0d1066-a33d-4267-bdee-d91e9a89adc9-utilities\") pod \"redhat-marketplace-ksgt6\" (UID: \"8d0d1066-a33d-4267-bdee-d91e9a89adc9\") " pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:25 crc kubenswrapper[4813]: I0219 20:02:25.130909 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95ppl\" (UniqueName: \"kubernetes.io/projected/8d0d1066-a33d-4267-bdee-d91e9a89adc9-kube-api-access-95ppl\") pod \"redhat-marketplace-ksgt6\" (UID: \"8d0d1066-a33d-4267-bdee-d91e9a89adc9\") " pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:25 crc kubenswrapper[4813]: I0219 20:02:25.131484 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d0d1066-a33d-4267-bdee-d91e9a89adc9-catalog-content\") pod \"redhat-marketplace-ksgt6\" (UID: \"8d0d1066-a33d-4267-bdee-d91e9a89adc9\") " pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:25 crc kubenswrapper[4813]: I0219 20:02:25.132120 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8d0d1066-a33d-4267-bdee-d91e9a89adc9-utilities\") pod \"redhat-marketplace-ksgt6\" (UID: \"8d0d1066-a33d-4267-bdee-d91e9a89adc9\") " pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:25 crc kubenswrapper[4813]: I0219 20:02:25.157096 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95ppl\" (UniqueName: \"kubernetes.io/projected/8d0d1066-a33d-4267-bdee-d91e9a89adc9-kube-api-access-95ppl\") pod \"redhat-marketplace-ksgt6\" (UID: \"8d0d1066-a33d-4267-bdee-d91e9a89adc9\") " pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:25 crc kubenswrapper[4813]: I0219 20:02:25.248404 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:25 crc kubenswrapper[4813]: I0219 20:02:25.793662 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksgt6"] Feb 19 20:02:25 crc kubenswrapper[4813]: W0219 20:02:25.798383 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d0d1066_a33d_4267_bdee_d91e9a89adc9.slice/crio-cf2b21adc5eae78799d0b67d0dbb0bf17fb0ea6cfe48bb767a3abda2a1e6d5e8 WatchSource:0}: Error finding container cf2b21adc5eae78799d0b67d0dbb0bf17fb0ea6cfe48bb767a3abda2a1e6d5e8: Status 404 returned error can't find the container with id cf2b21adc5eae78799d0b67d0dbb0bf17fb0ea6cfe48bb767a3abda2a1e6d5e8 Feb 19 20:02:26 crc kubenswrapper[4813]: I0219 20:02:26.514294 4813 generic.go:334] "Generic (PLEG): container finished" podID="8d0d1066-a33d-4267-bdee-d91e9a89adc9" containerID="efc26ece7d72a800b18324971a5ec11403ba1163513f2386072017b1791d2053" exitCode=0 Feb 19 20:02:26 crc kubenswrapper[4813]: I0219 20:02:26.514336 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksgt6" 
event={"ID":"8d0d1066-a33d-4267-bdee-d91e9a89adc9","Type":"ContainerDied","Data":"efc26ece7d72a800b18324971a5ec11403ba1163513f2386072017b1791d2053"} Feb 19 20:02:26 crc kubenswrapper[4813]: I0219 20:02:26.514364 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksgt6" event={"ID":"8d0d1066-a33d-4267-bdee-d91e9a89adc9","Type":"ContainerStarted","Data":"cf2b21adc5eae78799d0b67d0dbb0bf17fb0ea6cfe48bb767a3abda2a1e6d5e8"} Feb 19 20:02:28 crc kubenswrapper[4813]: I0219 20:02:28.536869 4813 generic.go:334] "Generic (PLEG): container finished" podID="8d0d1066-a33d-4267-bdee-d91e9a89adc9" containerID="9f29df73b8e8a6968b3fef3b8fd314f4f7221f9d389e79c5af821de247da070c" exitCode=0 Feb 19 20:02:28 crc kubenswrapper[4813]: I0219 20:02:28.537514 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksgt6" event={"ID":"8d0d1066-a33d-4267-bdee-d91e9a89adc9","Type":"ContainerDied","Data":"9f29df73b8e8a6968b3fef3b8fd314f4f7221f9d389e79c5af821de247da070c"} Feb 19 20:02:29 crc kubenswrapper[4813]: I0219 20:02:29.549084 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksgt6" event={"ID":"8d0d1066-a33d-4267-bdee-d91e9a89adc9","Type":"ContainerStarted","Data":"056997a3e396e8ca3cec7020b694a5cd8031037cc15b9339377106a4b9baba7b"} Feb 19 20:02:29 crc kubenswrapper[4813]: I0219 20:02:29.568511 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ksgt6" podStartSLOduration=3.037852045 podStartE2EDuration="5.568492633s" podCreationTimestamp="2026-02-19 20:02:24 +0000 UTC" firstStartedPulling="2026-02-19 20:02:26.516445312 +0000 UTC m=+5565.741885853" lastFinishedPulling="2026-02-19 20:02:29.0470859 +0000 UTC m=+5568.272526441" observedRunningTime="2026-02-19 20:02:29.563484877 +0000 UTC m=+5568.788925418" watchObservedRunningTime="2026-02-19 20:02:29.568492633 +0000 UTC 
m=+5568.793933174" Feb 19 20:02:30 crc kubenswrapper[4813]: I0219 20:02:30.329904 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:02:30 crc kubenswrapper[4813]: I0219 20:02:30.329983 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:02:30 crc kubenswrapper[4813]: I0219 20:02:30.330027 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 20:02:30 crc kubenswrapper[4813]: I0219 20:02:30.330689 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5ad4b652219a5a2c769519cd60ad08e92b1309893f1bb32935c2a23f7bc230a"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:02:30 crc kubenswrapper[4813]: I0219 20:02:30.330754 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://c5ad4b652219a5a2c769519cd60ad08e92b1309893f1bb32935c2a23f7bc230a" gracePeriod=600 Feb 19 20:02:30 crc kubenswrapper[4813]: I0219 20:02:30.564272 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" 
containerID="c5ad4b652219a5a2c769519cd60ad08e92b1309893f1bb32935c2a23f7bc230a" exitCode=0 Feb 19 20:02:30 crc kubenswrapper[4813]: I0219 20:02:30.564356 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"c5ad4b652219a5a2c769519cd60ad08e92b1309893f1bb32935c2a23f7bc230a"} Feb 19 20:02:30 crc kubenswrapper[4813]: I0219 20:02:30.564915 4813 scope.go:117] "RemoveContainer" containerID="6f703dfd90f9e11930a40b965ab018f3c46a606e13eaba250b413520a86bea4f" Feb 19 20:02:32 crc kubenswrapper[4813]: I0219 20:02:31.575901 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241"} Feb 19 20:02:34 crc kubenswrapper[4813]: I0219 20:02:34.902430 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jf8jb"] Feb 19 20:02:34 crc kubenswrapper[4813]: I0219 20:02:34.905003 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:34 crc kubenswrapper[4813]: I0219 20:02:34.916133 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jf8jb"] Feb 19 20:02:35 crc kubenswrapper[4813]: I0219 20:02:35.003762 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwk4v\" (UniqueName: \"kubernetes.io/projected/612cb9ab-30aa-4f44-b364-5449be79a8e2-kube-api-access-fwk4v\") pod \"redhat-operators-jf8jb\" (UID: \"612cb9ab-30aa-4f44-b364-5449be79a8e2\") " pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:35 crc kubenswrapper[4813]: I0219 20:02:35.003879 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/612cb9ab-30aa-4f44-b364-5449be79a8e2-catalog-content\") pod \"redhat-operators-jf8jb\" (UID: \"612cb9ab-30aa-4f44-b364-5449be79a8e2\") " pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:35 crc kubenswrapper[4813]: I0219 20:02:35.003912 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/612cb9ab-30aa-4f44-b364-5449be79a8e2-utilities\") pod \"redhat-operators-jf8jb\" (UID: \"612cb9ab-30aa-4f44-b364-5449be79a8e2\") " pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:35 crc kubenswrapper[4813]: I0219 20:02:35.106177 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/612cb9ab-30aa-4f44-b364-5449be79a8e2-catalog-content\") pod \"redhat-operators-jf8jb\" (UID: \"612cb9ab-30aa-4f44-b364-5449be79a8e2\") " pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:35 crc kubenswrapper[4813]: I0219 20:02:35.106251 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/612cb9ab-30aa-4f44-b364-5449be79a8e2-utilities\") pod \"redhat-operators-jf8jb\" (UID: \"612cb9ab-30aa-4f44-b364-5449be79a8e2\") " pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:35 crc kubenswrapper[4813]: I0219 20:02:35.106388 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwk4v\" (UniqueName: \"kubernetes.io/projected/612cb9ab-30aa-4f44-b364-5449be79a8e2-kube-api-access-fwk4v\") pod \"redhat-operators-jf8jb\" (UID: \"612cb9ab-30aa-4f44-b364-5449be79a8e2\") " pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:35 crc kubenswrapper[4813]: I0219 20:02:35.106836 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/612cb9ab-30aa-4f44-b364-5449be79a8e2-catalog-content\") pod \"redhat-operators-jf8jb\" (UID: \"612cb9ab-30aa-4f44-b364-5449be79a8e2\") " pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:35 crc kubenswrapper[4813]: I0219 20:02:35.106898 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/612cb9ab-30aa-4f44-b364-5449be79a8e2-utilities\") pod \"redhat-operators-jf8jb\" (UID: \"612cb9ab-30aa-4f44-b364-5449be79a8e2\") " pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:35 crc kubenswrapper[4813]: I0219 20:02:35.131658 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwk4v\" (UniqueName: \"kubernetes.io/projected/612cb9ab-30aa-4f44-b364-5449be79a8e2-kube-api-access-fwk4v\") pod \"redhat-operators-jf8jb\" (UID: \"612cb9ab-30aa-4f44-b364-5449be79a8e2\") " pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:35 crc kubenswrapper[4813]: I0219 20:02:35.235514 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:35 crc kubenswrapper[4813]: I0219 20:02:35.248987 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:35 crc kubenswrapper[4813]: I0219 20:02:35.249032 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:35 crc kubenswrapper[4813]: I0219 20:02:35.303429 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:35 crc kubenswrapper[4813]: I0219 20:02:35.680965 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:35 crc kubenswrapper[4813]: I0219 20:02:35.690705 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jf8jb"] Feb 19 20:02:36 crc kubenswrapper[4813]: I0219 20:02:36.638659 4813 generic.go:334] "Generic (PLEG): container finished" podID="612cb9ab-30aa-4f44-b364-5449be79a8e2" containerID="623628eae6821b97748343604db6097db42ed6b0a3c0af7616f6cc86c9f5a9c7" exitCode=0 Feb 19 20:02:36 crc kubenswrapper[4813]: I0219 20:02:36.638730 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jf8jb" event={"ID":"612cb9ab-30aa-4f44-b364-5449be79a8e2","Type":"ContainerDied","Data":"623628eae6821b97748343604db6097db42ed6b0a3c0af7616f6cc86c9f5a9c7"} Feb 19 20:02:36 crc kubenswrapper[4813]: I0219 20:02:36.640048 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jf8jb" event={"ID":"612cb9ab-30aa-4f44-b364-5449be79a8e2","Type":"ContainerStarted","Data":"5223e1238cec235d9bd299bfc96b28a953d677d65efee9626542880b06efeb98"} Feb 19 20:02:37 crc kubenswrapper[4813]: I0219 20:02:37.649782 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jf8jb" event={"ID":"612cb9ab-30aa-4f44-b364-5449be79a8e2","Type":"ContainerStarted","Data":"160b090f555071c6ecd53ea1c088765b84d7f37dd6ccd4f9fc8461b5f23e8ccd"} Feb 19 20:02:39 crc kubenswrapper[4813]: I0219 20:02:39.496311 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksgt6"] Feb 19 20:02:39 crc kubenswrapper[4813]: I0219 20:02:39.496888 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ksgt6" podUID="8d0d1066-a33d-4267-bdee-d91e9a89adc9" containerName="registry-server" containerID="cri-o://056997a3e396e8ca3cec7020b694a5cd8031037cc15b9339377106a4b9baba7b" gracePeriod=2 Feb 19 20:02:39 crc kubenswrapper[4813]: I0219 20:02:39.669801 4813 generic.go:334] "Generic (PLEG): container finished" podID="612cb9ab-30aa-4f44-b364-5449be79a8e2" containerID="160b090f555071c6ecd53ea1c088765b84d7f37dd6ccd4f9fc8461b5f23e8ccd" exitCode=0 Feb 19 20:02:39 crc kubenswrapper[4813]: I0219 20:02:39.669850 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jf8jb" event={"ID":"612cb9ab-30aa-4f44-b364-5449be79a8e2","Type":"ContainerDied","Data":"160b090f555071c6ecd53ea1c088765b84d7f37dd6ccd4f9fc8461b5f23e8ccd"} Feb 19 20:02:40 crc kubenswrapper[4813]: I0219 20:02:40.680495 4813 generic.go:334] "Generic (PLEG): container finished" podID="8d0d1066-a33d-4267-bdee-d91e9a89adc9" containerID="056997a3e396e8ca3cec7020b694a5cd8031037cc15b9339377106a4b9baba7b" exitCode=0 Feb 19 20:02:40 crc kubenswrapper[4813]: I0219 20:02:40.680591 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksgt6" event={"ID":"8d0d1066-a33d-4267-bdee-d91e9a89adc9","Type":"ContainerDied","Data":"056997a3e396e8ca3cec7020b694a5cd8031037cc15b9339377106a4b9baba7b"} Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.053447 4813 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.143333 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d0d1066-a33d-4267-bdee-d91e9a89adc9-catalog-content\") pod \"8d0d1066-a33d-4267-bdee-d91e9a89adc9\" (UID: \"8d0d1066-a33d-4267-bdee-d91e9a89adc9\") " Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.144104 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95ppl\" (UniqueName: \"kubernetes.io/projected/8d0d1066-a33d-4267-bdee-d91e9a89adc9-kube-api-access-95ppl\") pod \"8d0d1066-a33d-4267-bdee-d91e9a89adc9\" (UID: \"8d0d1066-a33d-4267-bdee-d91e9a89adc9\") " Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.144304 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d0d1066-a33d-4267-bdee-d91e9a89adc9-utilities\") pod \"8d0d1066-a33d-4267-bdee-d91e9a89adc9\" (UID: \"8d0d1066-a33d-4267-bdee-d91e9a89adc9\") " Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.146485 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d0d1066-a33d-4267-bdee-d91e9a89adc9-utilities" (OuterVolumeSpecName: "utilities") pod "8d0d1066-a33d-4267-bdee-d91e9a89adc9" (UID: "8d0d1066-a33d-4267-bdee-d91e9a89adc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.151893 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d0d1066-a33d-4267-bdee-d91e9a89adc9-kube-api-access-95ppl" (OuterVolumeSpecName: "kube-api-access-95ppl") pod "8d0d1066-a33d-4267-bdee-d91e9a89adc9" (UID: "8d0d1066-a33d-4267-bdee-d91e9a89adc9"). 
InnerVolumeSpecName "kube-api-access-95ppl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.164840 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d0d1066-a33d-4267-bdee-d91e9a89adc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d0d1066-a33d-4267-bdee-d91e9a89adc9" (UID: "8d0d1066-a33d-4267-bdee-d91e9a89adc9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.247007 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d0d1066-a33d-4267-bdee-d91e9a89adc9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.247051 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95ppl\" (UniqueName: \"kubernetes.io/projected/8d0d1066-a33d-4267-bdee-d91e9a89adc9-kube-api-access-95ppl\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.247065 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d0d1066-a33d-4267-bdee-d91e9a89adc9-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.693310 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksgt6" event={"ID":"8d0d1066-a33d-4267-bdee-d91e9a89adc9","Type":"ContainerDied","Data":"cf2b21adc5eae78799d0b67d0dbb0bf17fb0ea6cfe48bb767a3abda2a1e6d5e8"} Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.693361 4813 scope.go:117] "RemoveContainer" containerID="056997a3e396e8ca3cec7020b694a5cd8031037cc15b9339377106a4b9baba7b" Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.693368 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksgt6" Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.697730 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jf8jb" event={"ID":"612cb9ab-30aa-4f44-b364-5449be79a8e2","Type":"ContainerStarted","Data":"c0719951c54c2c575af64a2f9a80e625f35e99480061168f0eba4526cd86eaf9"} Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.719149 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksgt6"] Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.727609 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksgt6"] Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.739068 4813 scope.go:117] "RemoveContainer" containerID="9f29df73b8e8a6968b3fef3b8fd314f4f7221f9d389e79c5af821de247da070c" Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.742605 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jf8jb" podStartSLOduration=3.8367305849999997 podStartE2EDuration="7.742591893s" podCreationTimestamp="2026-02-19 20:02:34 +0000 UTC" firstStartedPulling="2026-02-19 20:02:36.640713659 +0000 UTC m=+5575.866154190" lastFinishedPulling="2026-02-19 20:02:40.546574957 +0000 UTC m=+5579.772015498" observedRunningTime="2026-02-19 20:02:41.732281754 +0000 UTC m=+5580.957722305" watchObservedRunningTime="2026-02-19 20:02:41.742591893 +0000 UTC m=+5580.968032444" Feb 19 20:02:41 crc kubenswrapper[4813]: I0219 20:02:41.762369 4813 scope.go:117] "RemoveContainer" containerID="efc26ece7d72a800b18324971a5ec11403ba1163513f2386072017b1791d2053" Feb 19 20:02:43 crc kubenswrapper[4813]: I0219 20:02:43.485287 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d0d1066-a33d-4267-bdee-d91e9a89adc9" path="/var/lib/kubelet/pods/8d0d1066-a33d-4267-bdee-d91e9a89adc9/volumes" Feb 19 
20:02:45 crc kubenswrapper[4813]: I0219 20:02:45.236293 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:45 crc kubenswrapper[4813]: I0219 20:02:45.236353 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:46 crc kubenswrapper[4813]: I0219 20:02:46.281439 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jf8jb" podUID="612cb9ab-30aa-4f44-b364-5449be79a8e2" containerName="registry-server" probeResult="failure" output=< Feb 19 20:02:46 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Feb 19 20:02:46 crc kubenswrapper[4813]: > Feb 19 20:02:55 crc kubenswrapper[4813]: I0219 20:02:55.282120 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:55 crc kubenswrapper[4813]: I0219 20:02:55.334664 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:56 crc kubenswrapper[4813]: I0219 20:02:56.123113 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jf8jb"] Feb 19 20:02:56 crc kubenswrapper[4813]: I0219 20:02:56.853364 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jf8jb" podUID="612cb9ab-30aa-4f44-b364-5449be79a8e2" containerName="registry-server" containerID="cri-o://c0719951c54c2c575af64a2f9a80e625f35e99480061168f0eba4526cd86eaf9" gracePeriod=2 Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.353774 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.471881 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/612cb9ab-30aa-4f44-b364-5449be79a8e2-catalog-content\") pod \"612cb9ab-30aa-4f44-b364-5449be79a8e2\" (UID: \"612cb9ab-30aa-4f44-b364-5449be79a8e2\") " Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.471968 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwk4v\" (UniqueName: \"kubernetes.io/projected/612cb9ab-30aa-4f44-b364-5449be79a8e2-kube-api-access-fwk4v\") pod \"612cb9ab-30aa-4f44-b364-5449be79a8e2\" (UID: \"612cb9ab-30aa-4f44-b364-5449be79a8e2\") " Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.471991 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/612cb9ab-30aa-4f44-b364-5449be79a8e2-utilities\") pod \"612cb9ab-30aa-4f44-b364-5449be79a8e2\" (UID: \"612cb9ab-30aa-4f44-b364-5449be79a8e2\") " Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.472847 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/612cb9ab-30aa-4f44-b364-5449be79a8e2-utilities" (OuterVolumeSpecName: "utilities") pod "612cb9ab-30aa-4f44-b364-5449be79a8e2" (UID: "612cb9ab-30aa-4f44-b364-5449be79a8e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.477990 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612cb9ab-30aa-4f44-b364-5449be79a8e2-kube-api-access-fwk4v" (OuterVolumeSpecName: "kube-api-access-fwk4v") pod "612cb9ab-30aa-4f44-b364-5449be79a8e2" (UID: "612cb9ab-30aa-4f44-b364-5449be79a8e2"). InnerVolumeSpecName "kube-api-access-fwk4v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.575053 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwk4v\" (UniqueName: \"kubernetes.io/projected/612cb9ab-30aa-4f44-b364-5449be79a8e2-kube-api-access-fwk4v\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.575116 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/612cb9ab-30aa-4f44-b364-5449be79a8e2-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.606357 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/612cb9ab-30aa-4f44-b364-5449be79a8e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "612cb9ab-30aa-4f44-b364-5449be79a8e2" (UID: "612cb9ab-30aa-4f44-b364-5449be79a8e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.676663 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/612cb9ab-30aa-4f44-b364-5449be79a8e2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.863109 4813 generic.go:334] "Generic (PLEG): container finished" podID="612cb9ab-30aa-4f44-b364-5449be79a8e2" containerID="c0719951c54c2c575af64a2f9a80e625f35e99480061168f0eba4526cd86eaf9" exitCode=0 Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.863169 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jf8jb" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.863198 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jf8jb" event={"ID":"612cb9ab-30aa-4f44-b364-5449be79a8e2","Type":"ContainerDied","Data":"c0719951c54c2c575af64a2f9a80e625f35e99480061168f0eba4526cd86eaf9"} Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.863492 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jf8jb" event={"ID":"612cb9ab-30aa-4f44-b364-5449be79a8e2","Type":"ContainerDied","Data":"5223e1238cec235d9bd299bfc96b28a953d677d65efee9626542880b06efeb98"} Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.863514 4813 scope.go:117] "RemoveContainer" containerID="c0719951c54c2c575af64a2f9a80e625f35e99480061168f0eba4526cd86eaf9" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.897315 4813 scope.go:117] "RemoveContainer" containerID="160b090f555071c6ecd53ea1c088765b84d7f37dd6ccd4f9fc8461b5f23e8ccd" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.897768 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jf8jb"] Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.911169 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jf8jb"] Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.920057 4813 scope.go:117] "RemoveContainer" containerID="623628eae6821b97748343604db6097db42ed6b0a3c0af7616f6cc86c9f5a9c7" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.952604 4813 scope.go:117] "RemoveContainer" containerID="c0719951c54c2c575af64a2f9a80e625f35e99480061168f0eba4526cd86eaf9" Feb 19 20:02:57 crc kubenswrapper[4813]: E0219 20:02:57.953232 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c0719951c54c2c575af64a2f9a80e625f35e99480061168f0eba4526cd86eaf9\": container with ID starting with c0719951c54c2c575af64a2f9a80e625f35e99480061168f0eba4526cd86eaf9 not found: ID does not exist" containerID="c0719951c54c2c575af64a2f9a80e625f35e99480061168f0eba4526cd86eaf9" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.953263 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0719951c54c2c575af64a2f9a80e625f35e99480061168f0eba4526cd86eaf9"} err="failed to get container status \"c0719951c54c2c575af64a2f9a80e625f35e99480061168f0eba4526cd86eaf9\": rpc error: code = NotFound desc = could not find container \"c0719951c54c2c575af64a2f9a80e625f35e99480061168f0eba4526cd86eaf9\": container with ID starting with c0719951c54c2c575af64a2f9a80e625f35e99480061168f0eba4526cd86eaf9 not found: ID does not exist" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.953288 4813 scope.go:117] "RemoveContainer" containerID="160b090f555071c6ecd53ea1c088765b84d7f37dd6ccd4f9fc8461b5f23e8ccd" Feb 19 20:02:57 crc kubenswrapper[4813]: E0219 20:02:57.953605 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"160b090f555071c6ecd53ea1c088765b84d7f37dd6ccd4f9fc8461b5f23e8ccd\": container with ID starting with 160b090f555071c6ecd53ea1c088765b84d7f37dd6ccd4f9fc8461b5f23e8ccd not found: ID does not exist" containerID="160b090f555071c6ecd53ea1c088765b84d7f37dd6ccd4f9fc8461b5f23e8ccd" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.953629 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"160b090f555071c6ecd53ea1c088765b84d7f37dd6ccd4f9fc8461b5f23e8ccd"} err="failed to get container status \"160b090f555071c6ecd53ea1c088765b84d7f37dd6ccd4f9fc8461b5f23e8ccd\": rpc error: code = NotFound desc = could not find container \"160b090f555071c6ecd53ea1c088765b84d7f37dd6ccd4f9fc8461b5f23e8ccd\": container with ID 
starting with 160b090f555071c6ecd53ea1c088765b84d7f37dd6ccd4f9fc8461b5f23e8ccd not found: ID does not exist" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.953650 4813 scope.go:117] "RemoveContainer" containerID="623628eae6821b97748343604db6097db42ed6b0a3c0af7616f6cc86c9f5a9c7" Feb 19 20:02:57 crc kubenswrapper[4813]: E0219 20:02:57.953919 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"623628eae6821b97748343604db6097db42ed6b0a3c0af7616f6cc86c9f5a9c7\": container with ID starting with 623628eae6821b97748343604db6097db42ed6b0a3c0af7616f6cc86c9f5a9c7 not found: ID does not exist" containerID="623628eae6821b97748343604db6097db42ed6b0a3c0af7616f6cc86c9f5a9c7" Feb 19 20:02:57 crc kubenswrapper[4813]: I0219 20:02:57.953963 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"623628eae6821b97748343604db6097db42ed6b0a3c0af7616f6cc86c9f5a9c7"} err="failed to get container status \"623628eae6821b97748343604db6097db42ed6b0a3c0af7616f6cc86c9f5a9c7\": rpc error: code = NotFound desc = could not find container \"623628eae6821b97748343604db6097db42ed6b0a3c0af7616f6cc86c9f5a9c7\": container with ID starting with 623628eae6821b97748343604db6097db42ed6b0a3c0af7616f6cc86c9f5a9c7 not found: ID does not exist" Feb 19 20:02:59 crc kubenswrapper[4813]: I0219 20:02:59.482594 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="612cb9ab-30aa-4f44-b364-5449be79a8e2" path="/var/lib/kubelet/pods/612cb9ab-30aa-4f44-b364-5449be79a8e2/volumes" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.503995 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-frg8l"] Feb 19 20:03:54 crc kubenswrapper[4813]: E0219 20:03:54.505140 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d0d1066-a33d-4267-bdee-d91e9a89adc9" containerName="registry-server" Feb 19 20:03:54 crc kubenswrapper[4813]: 
I0219 20:03:54.505159 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d0d1066-a33d-4267-bdee-d91e9a89adc9" containerName="registry-server" Feb 19 20:03:54 crc kubenswrapper[4813]: E0219 20:03:54.505174 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612cb9ab-30aa-4f44-b364-5449be79a8e2" containerName="registry-server" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.505181 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="612cb9ab-30aa-4f44-b364-5449be79a8e2" containerName="registry-server" Feb 19 20:03:54 crc kubenswrapper[4813]: E0219 20:03:54.505214 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d0d1066-a33d-4267-bdee-d91e9a89adc9" containerName="extract-utilities" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.505224 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d0d1066-a33d-4267-bdee-d91e9a89adc9" containerName="extract-utilities" Feb 19 20:03:54 crc kubenswrapper[4813]: E0219 20:03:54.505239 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612cb9ab-30aa-4f44-b364-5449be79a8e2" containerName="extract-utilities" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.505247 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="612cb9ab-30aa-4f44-b364-5449be79a8e2" containerName="extract-utilities" Feb 19 20:03:54 crc kubenswrapper[4813]: E0219 20:03:54.505258 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612cb9ab-30aa-4f44-b364-5449be79a8e2" containerName="extract-content" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.505266 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="612cb9ab-30aa-4f44-b364-5449be79a8e2" containerName="extract-content" Feb 19 20:03:54 crc kubenswrapper[4813]: E0219 20:03:54.505278 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d0d1066-a33d-4267-bdee-d91e9a89adc9" containerName="extract-content" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 
20:03:54.505286 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d0d1066-a33d-4267-bdee-d91e9a89adc9" containerName="extract-content" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.505537 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="612cb9ab-30aa-4f44-b364-5449be79a8e2" containerName="registry-server" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.505563 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d0d1066-a33d-4267-bdee-d91e9a89adc9" containerName="registry-server" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.506448 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.509883 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.510145 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6dh2l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.532002 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-6htkx"] Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.534071 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.543096 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-frg8l"] Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.555341 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6htkx"] Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.577300 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs59p\" (UniqueName: \"kubernetes.io/projected/fdd55e37-6da1-4dfb-809c-0074790b1ffc-kube-api-access-bs59p\") pod \"ovn-controller-frg8l\" (UID: \"fdd55e37-6da1-4dfb-809c-0074790b1ffc\") " pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.577395 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4624eabd-f427-4e53-a4e3-a49a2c7036b0-etc-ovs\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.577456 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fdd55e37-6da1-4dfb-809c-0074790b1ffc-var-log-ovn\") pod \"ovn-controller-frg8l\" (UID: \"fdd55e37-6da1-4dfb-809c-0074790b1ffc\") " pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.577508 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fdd55e37-6da1-4dfb-809c-0074790b1ffc-var-run\") pod \"ovn-controller-frg8l\" (UID: \"fdd55e37-6da1-4dfb-809c-0074790b1ffc\") " pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 
20:03:54.577531 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fdd55e37-6da1-4dfb-809c-0074790b1ffc-var-run-ovn\") pod \"ovn-controller-frg8l\" (UID: \"fdd55e37-6da1-4dfb-809c-0074790b1ffc\") " pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.577563 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4624eabd-f427-4e53-a4e3-a49a2c7036b0-var-lib\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.577629 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvvv\" (UniqueName: \"kubernetes.io/projected/4624eabd-f427-4e53-a4e3-a49a2c7036b0-kube-api-access-chvvv\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.577784 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4624eabd-f427-4e53-a4e3-a49a2c7036b0-var-log\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.577900 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4624eabd-f427-4e53-a4e3-a49a2c7036b0-scripts\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.577935 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdd55e37-6da1-4dfb-809c-0074790b1ffc-scripts\") pod \"ovn-controller-frg8l\" (UID: \"fdd55e37-6da1-4dfb-809c-0074790b1ffc\") " pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.578016 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4624eabd-f427-4e53-a4e3-a49a2c7036b0-var-run\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.679530 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4624eabd-f427-4e53-a4e3-a49a2c7036b0-var-run\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.679623 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs59p\" (UniqueName: \"kubernetes.io/projected/fdd55e37-6da1-4dfb-809c-0074790b1ffc-kube-api-access-bs59p\") pod \"ovn-controller-frg8l\" (UID: \"fdd55e37-6da1-4dfb-809c-0074790b1ffc\") " pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.679654 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4624eabd-f427-4e53-a4e3-a49a2c7036b0-etc-ovs\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.679692 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/fdd55e37-6da1-4dfb-809c-0074790b1ffc-var-log-ovn\") pod \"ovn-controller-frg8l\" (UID: \"fdd55e37-6da1-4dfb-809c-0074790b1ffc\") " pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.679717 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fdd55e37-6da1-4dfb-809c-0074790b1ffc-var-run\") pod \"ovn-controller-frg8l\" (UID: \"fdd55e37-6da1-4dfb-809c-0074790b1ffc\") " pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.679741 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fdd55e37-6da1-4dfb-809c-0074790b1ffc-var-run-ovn\") pod \"ovn-controller-frg8l\" (UID: \"fdd55e37-6da1-4dfb-809c-0074790b1ffc\") " pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.679763 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4624eabd-f427-4e53-a4e3-a49a2c7036b0-var-lib\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.679799 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chvvv\" (UniqueName: \"kubernetes.io/projected/4624eabd-f427-4e53-a4e3-a49a2c7036b0-kube-api-access-chvvv\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.679844 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4624eabd-f427-4e53-a4e3-a49a2c7036b0-var-log\") pod \"ovn-controller-ovs-6htkx\" (UID: 
\"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.679914 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4624eabd-f427-4e53-a4e3-a49a2c7036b0-scripts\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.679931 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdd55e37-6da1-4dfb-809c-0074790b1ffc-scripts\") pod \"ovn-controller-frg8l\" (UID: \"fdd55e37-6da1-4dfb-809c-0074790b1ffc\") " pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.681834 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdd55e37-6da1-4dfb-809c-0074790b1ffc-scripts\") pod \"ovn-controller-frg8l\" (UID: \"fdd55e37-6da1-4dfb-809c-0074790b1ffc\") " pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.682190 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4624eabd-f427-4e53-a4e3-a49a2c7036b0-var-run\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.682497 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4624eabd-f427-4e53-a4e3-a49a2c7036b0-etc-ovs\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.682627 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fdd55e37-6da1-4dfb-809c-0074790b1ffc-var-run\") pod \"ovn-controller-frg8l\" (UID: \"fdd55e37-6da1-4dfb-809c-0074790b1ffc\") " pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.682685 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4624eabd-f427-4e53-a4e3-a49a2c7036b0-var-log\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.682685 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4624eabd-f427-4e53-a4e3-a49a2c7036b0-var-lib\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.682748 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fdd55e37-6da1-4dfb-809c-0074790b1ffc-var-run-ovn\") pod \"ovn-controller-frg8l\" (UID: \"fdd55e37-6da1-4dfb-809c-0074790b1ffc\") " pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.682767 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fdd55e37-6da1-4dfb-809c-0074790b1ffc-var-log-ovn\") pod \"ovn-controller-frg8l\" (UID: \"fdd55e37-6da1-4dfb-809c-0074790b1ffc\") " pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.684714 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4624eabd-f427-4e53-a4e3-a49a2c7036b0-scripts\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " 
pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.702332 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chvvv\" (UniqueName: \"kubernetes.io/projected/4624eabd-f427-4e53-a4e3-a49a2c7036b0-kube-api-access-chvvv\") pod \"ovn-controller-ovs-6htkx\" (UID: \"4624eabd-f427-4e53-a4e3-a49a2c7036b0\") " pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.705555 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs59p\" (UniqueName: \"kubernetes.io/projected/fdd55e37-6da1-4dfb-809c-0074790b1ffc-kube-api-access-bs59p\") pod \"ovn-controller-frg8l\" (UID: \"fdd55e37-6da1-4dfb-809c-0074790b1ffc\") " pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.839150 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-frg8l" Feb 19 20:03:54 crc kubenswrapper[4813]: I0219 20:03:54.857942 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:55 crc kubenswrapper[4813]: I0219 20:03:55.398814 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-frg8l"] Feb 19 20:03:55 crc kubenswrapper[4813]: I0219 20:03:55.787520 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6htkx"] Feb 19 20:03:55 crc kubenswrapper[4813]: W0219 20:03:55.792964 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4624eabd_f427_4e53_a4e3_a49a2c7036b0.slice/crio-d4f37f7c666b2eda0fd89c5904aabc0366e342652f69177c1f108735a01dc763 WatchSource:0}: Error finding container d4f37f7c666b2eda0fd89c5904aabc0366e342652f69177c1f108735a01dc763: Status 404 returned error can't find the container with id d4f37f7c666b2eda0fd89c5904aabc0366e342652f69177c1f108735a01dc763 Feb 19 20:03:56 crc kubenswrapper[4813]: I0219 20:03:56.379072 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6htkx" event={"ID":"4624eabd-f427-4e53-a4e3-a49a2c7036b0","Type":"ContainerStarted","Data":"7a24291f35c675a961764d0fd518179db208764725add07914d8349417173310"} Feb 19 20:03:56 crc kubenswrapper[4813]: I0219 20:03:56.379433 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6htkx" event={"ID":"4624eabd-f427-4e53-a4e3-a49a2c7036b0","Type":"ContainerStarted","Data":"d4f37f7c666b2eda0fd89c5904aabc0366e342652f69177c1f108735a01dc763"} Feb 19 20:03:56 crc kubenswrapper[4813]: I0219 20:03:56.382203 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frg8l" event={"ID":"fdd55e37-6da1-4dfb-809c-0074790b1ffc","Type":"ContainerStarted","Data":"aebce6dbec8e34537574814044d8a46d47187b8d4c298073ad0460cf3e208884"} Feb 19 20:03:56 crc kubenswrapper[4813]: I0219 20:03:56.382250 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-frg8l" event={"ID":"fdd55e37-6da1-4dfb-809c-0074790b1ffc","Type":"ContainerStarted","Data":"cca491d494024423026bce2d9c177ea0ae9da5c5bd5f02dc95df154e99fc8e24"} Feb 19 20:03:56 crc kubenswrapper[4813]: I0219 20:03:56.382456 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-frg8l" Feb 19 20:03:56 crc kubenswrapper[4813]: I0219 20:03:56.427472 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-frg8l" podStartSLOduration=2.427452363 podStartE2EDuration="2.427452363s" podCreationTimestamp="2026-02-19 20:03:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:03:56.416326919 +0000 UTC m=+5655.641767460" watchObservedRunningTime="2026-02-19 20:03:56.427452363 +0000 UTC m=+5655.652892904" Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.108599 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-rlzqr"] Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.109975 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-rlzqr" Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.112941 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.119335 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rlzqr"] Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.268699 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b-ovn-rundir\") pod \"ovn-controller-metrics-rlzqr\" (UID: \"1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b\") " pod="openstack/ovn-controller-metrics-rlzqr" Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.269386 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b-ovs-rundir\") pod \"ovn-controller-metrics-rlzqr\" (UID: \"1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b\") " pod="openstack/ovn-controller-metrics-rlzqr" Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.269467 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b-config\") pod \"ovn-controller-metrics-rlzqr\" (UID: \"1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b\") " pod="openstack/ovn-controller-metrics-rlzqr" Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.269607 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grxjb\" (UniqueName: \"kubernetes.io/projected/1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b-kube-api-access-grxjb\") pod \"ovn-controller-metrics-rlzqr\" (UID: \"1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b\") " 
pod="openstack/ovn-controller-metrics-rlzqr" Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.371366 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b-ovs-rundir\") pod \"ovn-controller-metrics-rlzqr\" (UID: \"1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b\") " pod="openstack/ovn-controller-metrics-rlzqr" Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.371418 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b-config\") pod \"ovn-controller-metrics-rlzqr\" (UID: \"1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b\") " pod="openstack/ovn-controller-metrics-rlzqr" Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.371471 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grxjb\" (UniqueName: \"kubernetes.io/projected/1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b-kube-api-access-grxjb\") pod \"ovn-controller-metrics-rlzqr\" (UID: \"1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b\") " pod="openstack/ovn-controller-metrics-rlzqr" Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.371527 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b-ovn-rundir\") pod \"ovn-controller-metrics-rlzqr\" (UID: \"1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b\") " pod="openstack/ovn-controller-metrics-rlzqr" Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.371864 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b-ovn-rundir\") pod \"ovn-controller-metrics-rlzqr\" (UID: \"1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b\") " pod="openstack/ovn-controller-metrics-rlzqr" Feb 19 20:03:57 crc kubenswrapper[4813]: 
I0219 20:03:57.372078 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b-ovs-rundir\") pod \"ovn-controller-metrics-rlzqr\" (UID: \"1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b\") " pod="openstack/ovn-controller-metrics-rlzqr" Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.372707 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b-config\") pod \"ovn-controller-metrics-rlzqr\" (UID: \"1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b\") " pod="openstack/ovn-controller-metrics-rlzqr" Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.392111 4813 generic.go:334] "Generic (PLEG): container finished" podID="4624eabd-f427-4e53-a4e3-a49a2c7036b0" containerID="7a24291f35c675a961764d0fd518179db208764725add07914d8349417173310" exitCode=0 Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.392601 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6htkx" event={"ID":"4624eabd-f427-4e53-a4e3-a49a2c7036b0","Type":"ContainerDied","Data":"7a24291f35c675a961764d0fd518179db208764725add07914d8349417173310"} Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.393199 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grxjb\" (UniqueName: \"kubernetes.io/projected/1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b-kube-api-access-grxjb\") pod \"ovn-controller-metrics-rlzqr\" (UID: \"1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b\") " pod="openstack/ovn-controller-metrics-rlzqr" Feb 19 20:03:57 crc kubenswrapper[4813]: I0219 20:03:57.434814 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-rlzqr" Feb 19 20:03:58 crc kubenswrapper[4813]: I0219 20:03:58.049746 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-n4xk5"] Feb 19 20:03:58 crc kubenswrapper[4813]: I0219 20:03:58.061785 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1b68-account-create-update-gslmf"] Feb 19 20:03:58 crc kubenswrapper[4813]: I0219 20:03:58.070934 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-n4xk5"] Feb 19 20:03:58 crc kubenswrapper[4813]: I0219 20:03:58.080910 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1b68-account-create-update-gslmf"] Feb 19 20:03:58 crc kubenswrapper[4813]: W0219 20:03:58.088898 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d21cee5_7ae5_4af4_98dc_b09a8fe0f77b.slice/crio-c229fc271a7e9e0ec4f15ca043dccc207dee3119cdb5ac9d4ab74f0b472e5c5f WatchSource:0}: Error finding container c229fc271a7e9e0ec4f15ca043dccc207dee3119cdb5ac9d4ab74f0b472e5c5f: Status 404 returned error can't find the container with id c229fc271a7e9e0ec4f15ca043dccc207dee3119cdb5ac9d4ab74f0b472e5c5f Feb 19 20:03:58 crc kubenswrapper[4813]: I0219 20:03:58.090557 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rlzqr"] Feb 19 20:03:58 crc kubenswrapper[4813]: I0219 20:03:58.402640 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6htkx" event={"ID":"4624eabd-f427-4e53-a4e3-a49a2c7036b0","Type":"ContainerStarted","Data":"1afe456080ef753b3ee22057f9445c4f7810d3207b646a06def3862738c38dec"} Feb 19 20:03:58 crc kubenswrapper[4813]: I0219 20:03:58.402680 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6htkx" 
event={"ID":"4624eabd-f427-4e53-a4e3-a49a2c7036b0","Type":"ContainerStarted","Data":"6affd088c55a3535ca30688eb4145598308fb13bc3b8a3613f0917e2384f7f45"} Feb 19 20:03:58 crc kubenswrapper[4813]: I0219 20:03:58.402814 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:58 crc kubenswrapper[4813]: I0219 20:03:58.402840 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:03:58 crc kubenswrapper[4813]: I0219 20:03:58.406585 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rlzqr" event={"ID":"1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b","Type":"ContainerStarted","Data":"37df27b407cc1ebc926e993adb68485a24f1843879aaaad967583fc9d45f10a3"} Feb 19 20:03:58 crc kubenswrapper[4813]: I0219 20:03:58.406615 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rlzqr" event={"ID":"1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b","Type":"ContainerStarted","Data":"c229fc271a7e9e0ec4f15ca043dccc207dee3119cdb5ac9d4ab74f0b472e5c5f"} Feb 19 20:03:58 crc kubenswrapper[4813]: I0219 20:03:58.450022 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-rlzqr" podStartSLOduration=1.450002101 podStartE2EDuration="1.450002101s" podCreationTimestamp="2026-02-19 20:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:03:58.443264223 +0000 UTC m=+5657.668704764" watchObservedRunningTime="2026-02-19 20:03:58.450002101 +0000 UTC m=+5657.675442642" Feb 19 20:03:58 crc kubenswrapper[4813]: I0219 20:03:58.451470 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-6htkx" podStartSLOduration=4.451464067 podStartE2EDuration="4.451464067s" podCreationTimestamp="2026-02-19 20:03:54 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:03:58.431440317 +0000 UTC m=+5657.656880878" watchObservedRunningTime="2026-02-19 20:03:58.451464067 +0000 UTC m=+5657.676904608" Feb 19 20:03:59 crc kubenswrapper[4813]: I0219 20:03:59.492771 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48470170-5b98-4a7d-a359-b14d60bbf229" path="/var/lib/kubelet/pods/48470170-5b98-4a7d-a359-b14d60bbf229/volumes" Feb 19 20:03:59 crc kubenswrapper[4813]: I0219 20:03:59.494077 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="815a91b3-eddd-434c-b30f-6b3a84c91efd" path="/var/lib/kubelet/pods/815a91b3-eddd-434c-b30f-6b3a84c91efd/volumes" Feb 19 20:04:06 crc kubenswrapper[4813]: I0219 20:04:06.029716 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-459mq"] Feb 19 20:04:06 crc kubenswrapper[4813]: I0219 20:04:06.039441 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-459mq"] Feb 19 20:04:07 crc kubenswrapper[4813]: I0219 20:04:07.481238 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8395bad4-6541-4a62-bd8a-f0185b9ccef5" path="/var/lib/kubelet/pods/8395bad4-6541-4a62-bd8a-f0185b9ccef5/volumes" Feb 19 20:04:16 crc kubenswrapper[4813]: I0219 20:04:16.204043 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-qjvd2"] Feb 19 20:04:16 crc kubenswrapper[4813]: I0219 20:04:16.206429 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-qjvd2" Feb 19 20:04:16 crc kubenswrapper[4813]: I0219 20:04:16.213605 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-qjvd2"] Feb 19 20:04:16 crc kubenswrapper[4813]: I0219 20:04:16.343924 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-282bh\" (UniqueName: \"kubernetes.io/projected/55d401e7-1ccb-4163-9366-cab53d918c33-kube-api-access-282bh\") pod \"octavia-db-create-qjvd2\" (UID: \"55d401e7-1ccb-4163-9366-cab53d918c33\") " pod="openstack/octavia-db-create-qjvd2" Feb 19 20:04:16 crc kubenswrapper[4813]: I0219 20:04:16.344721 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55d401e7-1ccb-4163-9366-cab53d918c33-operator-scripts\") pod \"octavia-db-create-qjvd2\" (UID: \"55d401e7-1ccb-4163-9366-cab53d918c33\") " pod="openstack/octavia-db-create-qjvd2" Feb 19 20:04:16 crc kubenswrapper[4813]: I0219 20:04:16.446423 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55d401e7-1ccb-4163-9366-cab53d918c33-operator-scripts\") pod \"octavia-db-create-qjvd2\" (UID: \"55d401e7-1ccb-4163-9366-cab53d918c33\") " pod="openstack/octavia-db-create-qjvd2" Feb 19 20:04:16 crc kubenswrapper[4813]: I0219 20:04:16.446523 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-282bh\" (UniqueName: \"kubernetes.io/projected/55d401e7-1ccb-4163-9366-cab53d918c33-kube-api-access-282bh\") pod \"octavia-db-create-qjvd2\" (UID: \"55d401e7-1ccb-4163-9366-cab53d918c33\") " pod="openstack/octavia-db-create-qjvd2" Feb 19 20:04:16 crc kubenswrapper[4813]: I0219 20:04:16.447453 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/55d401e7-1ccb-4163-9366-cab53d918c33-operator-scripts\") pod \"octavia-db-create-qjvd2\" (UID: \"55d401e7-1ccb-4163-9366-cab53d918c33\") " pod="openstack/octavia-db-create-qjvd2" Feb 19 20:04:16 crc kubenswrapper[4813]: I0219 20:04:16.467649 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-282bh\" (UniqueName: \"kubernetes.io/projected/55d401e7-1ccb-4163-9366-cab53d918c33-kube-api-access-282bh\") pod \"octavia-db-create-qjvd2\" (UID: \"55d401e7-1ccb-4163-9366-cab53d918c33\") " pod="openstack/octavia-db-create-qjvd2" Feb 19 20:04:16 crc kubenswrapper[4813]: I0219 20:04:16.541568 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-qjvd2" Feb 19 20:04:17 crc kubenswrapper[4813]: I0219 20:04:17.017524 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-qjvd2"] Feb 19 20:04:17 crc kubenswrapper[4813]: I0219 20:04:17.501024 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-b182-account-create-update-mz22g"] Feb 19 20:04:17 crc kubenswrapper[4813]: I0219 20:04:17.502855 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-b182-account-create-update-mz22g" Feb 19 20:04:17 crc kubenswrapper[4813]: I0219 20:04:17.504989 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Feb 19 20:04:17 crc kubenswrapper[4813]: I0219 20:04:17.509023 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-b182-account-create-update-mz22g"] Feb 19 20:04:17 crc kubenswrapper[4813]: I0219 20:04:17.565675 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jzgg\" (UniqueName: \"kubernetes.io/projected/a44125bc-fce5-47ed-a37f-0b7f73470d95-kube-api-access-4jzgg\") pod \"octavia-b182-account-create-update-mz22g\" (UID: \"a44125bc-fce5-47ed-a37f-0b7f73470d95\") " pod="openstack/octavia-b182-account-create-update-mz22g" Feb 19 20:04:17 crc kubenswrapper[4813]: I0219 20:04:17.565750 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a44125bc-fce5-47ed-a37f-0b7f73470d95-operator-scripts\") pod \"octavia-b182-account-create-update-mz22g\" (UID: \"a44125bc-fce5-47ed-a37f-0b7f73470d95\") " pod="openstack/octavia-b182-account-create-update-mz22g" Feb 19 20:04:17 crc kubenswrapper[4813]: I0219 20:04:17.595658 4813 generic.go:334] "Generic (PLEG): container finished" podID="55d401e7-1ccb-4163-9366-cab53d918c33" containerID="4876e5288eac2af5e778f5aa255d8ffad1d9147536ac1ca3357edc4910d1159b" exitCode=0 Feb 19 20:04:17 crc kubenswrapper[4813]: I0219 20:04:17.595698 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-qjvd2" event={"ID":"55d401e7-1ccb-4163-9366-cab53d918c33","Type":"ContainerDied","Data":"4876e5288eac2af5e778f5aa255d8ffad1d9147536ac1ca3357edc4910d1159b"} Feb 19 20:04:17 crc kubenswrapper[4813]: I0219 20:04:17.595723 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/octavia-db-create-qjvd2" event={"ID":"55d401e7-1ccb-4163-9366-cab53d918c33","Type":"ContainerStarted","Data":"16d711302ecb30b277abda7949223fed4c63eff47742aa7baf1b25b78f7ba798"} Feb 19 20:04:17 crc kubenswrapper[4813]: I0219 20:04:17.667420 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a44125bc-fce5-47ed-a37f-0b7f73470d95-operator-scripts\") pod \"octavia-b182-account-create-update-mz22g\" (UID: \"a44125bc-fce5-47ed-a37f-0b7f73470d95\") " pod="openstack/octavia-b182-account-create-update-mz22g" Feb 19 20:04:17 crc kubenswrapper[4813]: I0219 20:04:17.667631 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jzgg\" (UniqueName: \"kubernetes.io/projected/a44125bc-fce5-47ed-a37f-0b7f73470d95-kube-api-access-4jzgg\") pod \"octavia-b182-account-create-update-mz22g\" (UID: \"a44125bc-fce5-47ed-a37f-0b7f73470d95\") " pod="openstack/octavia-b182-account-create-update-mz22g" Feb 19 20:04:17 crc kubenswrapper[4813]: I0219 20:04:17.668852 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a44125bc-fce5-47ed-a37f-0b7f73470d95-operator-scripts\") pod \"octavia-b182-account-create-update-mz22g\" (UID: \"a44125bc-fce5-47ed-a37f-0b7f73470d95\") " pod="openstack/octavia-b182-account-create-update-mz22g" Feb 19 20:04:17 crc kubenswrapper[4813]: I0219 20:04:17.687300 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jzgg\" (UniqueName: \"kubernetes.io/projected/a44125bc-fce5-47ed-a37f-0b7f73470d95-kube-api-access-4jzgg\") pod \"octavia-b182-account-create-update-mz22g\" (UID: \"a44125bc-fce5-47ed-a37f-0b7f73470d95\") " pod="openstack/octavia-b182-account-create-update-mz22g" Feb 19 20:04:17 crc kubenswrapper[4813]: I0219 20:04:17.828438 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-b182-account-create-update-mz22g" Feb 19 20:04:18 crc kubenswrapper[4813]: I0219 20:04:18.492735 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-b182-account-create-update-mz22g"] Feb 19 20:04:18 crc kubenswrapper[4813]: I0219 20:04:18.605065 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-b182-account-create-update-mz22g" event={"ID":"a44125bc-fce5-47ed-a37f-0b7f73470d95","Type":"ContainerStarted","Data":"c2aeddf56dc20ea04e0218a1d976ed2588f545ab0462a000618d18cfd93a315a"} Feb 19 20:04:18 crc kubenswrapper[4813]: I0219 20:04:18.908395 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-qjvd2" Feb 19 20:04:19 crc kubenswrapper[4813]: I0219 20:04:19.013750 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-282bh\" (UniqueName: \"kubernetes.io/projected/55d401e7-1ccb-4163-9366-cab53d918c33-kube-api-access-282bh\") pod \"55d401e7-1ccb-4163-9366-cab53d918c33\" (UID: \"55d401e7-1ccb-4163-9366-cab53d918c33\") " Feb 19 20:04:19 crc kubenswrapper[4813]: I0219 20:04:19.013825 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55d401e7-1ccb-4163-9366-cab53d918c33-operator-scripts\") pod \"55d401e7-1ccb-4163-9366-cab53d918c33\" (UID: \"55d401e7-1ccb-4163-9366-cab53d918c33\") " Feb 19 20:04:19 crc kubenswrapper[4813]: I0219 20:04:19.014363 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55d401e7-1ccb-4163-9366-cab53d918c33-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55d401e7-1ccb-4163-9366-cab53d918c33" (UID: "55d401e7-1ccb-4163-9366-cab53d918c33"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:04:19 crc kubenswrapper[4813]: I0219 20:04:19.022939 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d401e7-1ccb-4163-9366-cab53d918c33-kube-api-access-282bh" (OuterVolumeSpecName: "kube-api-access-282bh") pod "55d401e7-1ccb-4163-9366-cab53d918c33" (UID: "55d401e7-1ccb-4163-9366-cab53d918c33"). InnerVolumeSpecName "kube-api-access-282bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:04:19 crc kubenswrapper[4813]: I0219 20:04:19.115729 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-282bh\" (UniqueName: \"kubernetes.io/projected/55d401e7-1ccb-4163-9366-cab53d918c33-kube-api-access-282bh\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:19 crc kubenswrapper[4813]: I0219 20:04:19.116121 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55d401e7-1ccb-4163-9366-cab53d918c33-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:19 crc kubenswrapper[4813]: I0219 20:04:19.618494 4813 generic.go:334] "Generic (PLEG): container finished" podID="a44125bc-fce5-47ed-a37f-0b7f73470d95" containerID="f782edb18308bdd54775a5fb3aba2e804a17413ed6d02074a42c09c72d1a9c47" exitCode=0 Feb 19 20:04:19 crc kubenswrapper[4813]: I0219 20:04:19.618573 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-b182-account-create-update-mz22g" event={"ID":"a44125bc-fce5-47ed-a37f-0b7f73470d95","Type":"ContainerDied","Data":"f782edb18308bdd54775a5fb3aba2e804a17413ed6d02074a42c09c72d1a9c47"} Feb 19 20:04:19 crc kubenswrapper[4813]: I0219 20:04:19.620664 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-qjvd2" event={"ID":"55d401e7-1ccb-4163-9366-cab53d918c33","Type":"ContainerDied","Data":"16d711302ecb30b277abda7949223fed4c63eff47742aa7baf1b25b78f7ba798"} Feb 19 20:04:19 crc 
kubenswrapper[4813]: I0219 20:04:19.620738 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d711302ecb30b277abda7949223fed4c63eff47742aa7baf1b25b78f7ba798" Feb 19 20:04:19 crc kubenswrapper[4813]: I0219 20:04:19.620839 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-qjvd2" Feb 19 20:04:20 crc kubenswrapper[4813]: I0219 20:04:20.047127 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-p4js4"] Feb 19 20:04:20 crc kubenswrapper[4813]: I0219 20:04:20.055717 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-p4js4"] Feb 19 20:04:20 crc kubenswrapper[4813]: I0219 20:04:20.991687 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-b182-account-create-update-mz22g" Feb 19 20:04:21 crc kubenswrapper[4813]: I0219 20:04:21.052441 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a44125bc-fce5-47ed-a37f-0b7f73470d95-operator-scripts\") pod \"a44125bc-fce5-47ed-a37f-0b7f73470d95\" (UID: \"a44125bc-fce5-47ed-a37f-0b7f73470d95\") " Feb 19 20:04:21 crc kubenswrapper[4813]: I0219 20:04:21.052538 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jzgg\" (UniqueName: \"kubernetes.io/projected/a44125bc-fce5-47ed-a37f-0b7f73470d95-kube-api-access-4jzgg\") pod \"a44125bc-fce5-47ed-a37f-0b7f73470d95\" (UID: \"a44125bc-fce5-47ed-a37f-0b7f73470d95\") " Feb 19 20:04:21 crc kubenswrapper[4813]: I0219 20:04:21.053004 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a44125bc-fce5-47ed-a37f-0b7f73470d95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a44125bc-fce5-47ed-a37f-0b7f73470d95" (UID: "a44125bc-fce5-47ed-a37f-0b7f73470d95"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:04:21 crc kubenswrapper[4813]: I0219 20:04:21.053512 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a44125bc-fce5-47ed-a37f-0b7f73470d95-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:21 crc kubenswrapper[4813]: I0219 20:04:21.058447 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a44125bc-fce5-47ed-a37f-0b7f73470d95-kube-api-access-4jzgg" (OuterVolumeSpecName: "kube-api-access-4jzgg") pod "a44125bc-fce5-47ed-a37f-0b7f73470d95" (UID: "a44125bc-fce5-47ed-a37f-0b7f73470d95"). InnerVolumeSpecName "kube-api-access-4jzgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:04:21 crc kubenswrapper[4813]: I0219 20:04:21.154708 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jzgg\" (UniqueName: \"kubernetes.io/projected/a44125bc-fce5-47ed-a37f-0b7f73470d95-kube-api-access-4jzgg\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:21 crc kubenswrapper[4813]: I0219 20:04:21.499338 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1db69cd-0899-4f7b-ac3f-fa9b75471765" path="/var/lib/kubelet/pods/f1db69cd-0899-4f7b-ac3f-fa9b75471765/volumes" Feb 19 20:04:21 crc kubenswrapper[4813]: I0219 20:04:21.640619 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-b182-account-create-update-mz22g" event={"ID":"a44125bc-fce5-47ed-a37f-0b7f73470d95","Type":"ContainerDied","Data":"c2aeddf56dc20ea04e0218a1d976ed2588f545ab0462a000618d18cfd93a315a"} Feb 19 20:04:21 crc kubenswrapper[4813]: I0219 20:04:21.640850 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-b182-account-create-update-mz22g" Feb 19 20:04:21 crc kubenswrapper[4813]: I0219 20:04:21.640858 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2aeddf56dc20ea04e0218a1d976ed2588f545ab0462a000618d18cfd93a315a" Feb 19 20:04:22 crc kubenswrapper[4813]: I0219 20:04:22.710606 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-bhnwp"] Feb 19 20:04:22 crc kubenswrapper[4813]: E0219 20:04:22.711005 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d401e7-1ccb-4163-9366-cab53d918c33" containerName="mariadb-database-create" Feb 19 20:04:22 crc kubenswrapper[4813]: I0219 20:04:22.711017 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d401e7-1ccb-4163-9366-cab53d918c33" containerName="mariadb-database-create" Feb 19 20:04:22 crc kubenswrapper[4813]: E0219 20:04:22.711045 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44125bc-fce5-47ed-a37f-0b7f73470d95" containerName="mariadb-account-create-update" Feb 19 20:04:22 crc kubenswrapper[4813]: I0219 20:04:22.711051 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44125bc-fce5-47ed-a37f-0b7f73470d95" containerName="mariadb-account-create-update" Feb 19 20:04:22 crc kubenswrapper[4813]: I0219 20:04:22.711275 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a44125bc-fce5-47ed-a37f-0b7f73470d95" containerName="mariadb-account-create-update" Feb 19 20:04:22 crc kubenswrapper[4813]: I0219 20:04:22.711284 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="55d401e7-1ccb-4163-9366-cab53d918c33" containerName="mariadb-database-create" Feb 19 20:04:22 crc kubenswrapper[4813]: I0219 20:04:22.711872 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-bhnwp" Feb 19 20:04:22 crc kubenswrapper[4813]: I0219 20:04:22.720934 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-bhnwp"] Feb 19 20:04:22 crc kubenswrapper[4813]: I0219 20:04:22.785747 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d83b25e4-4a7e-4e6f-94da-25793b440419-operator-scripts\") pod \"octavia-persistence-db-create-bhnwp\" (UID: \"d83b25e4-4a7e-4e6f-94da-25793b440419\") " pod="openstack/octavia-persistence-db-create-bhnwp" Feb 19 20:04:22 crc kubenswrapper[4813]: I0219 20:04:22.785802 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9znz\" (UniqueName: \"kubernetes.io/projected/d83b25e4-4a7e-4e6f-94da-25793b440419-kube-api-access-l9znz\") pod \"octavia-persistence-db-create-bhnwp\" (UID: \"d83b25e4-4a7e-4e6f-94da-25793b440419\") " pod="openstack/octavia-persistence-db-create-bhnwp" Feb 19 20:04:22 crc kubenswrapper[4813]: I0219 20:04:22.887981 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d83b25e4-4a7e-4e6f-94da-25793b440419-operator-scripts\") pod \"octavia-persistence-db-create-bhnwp\" (UID: \"d83b25e4-4a7e-4e6f-94da-25793b440419\") " pod="openstack/octavia-persistence-db-create-bhnwp" Feb 19 20:04:22 crc kubenswrapper[4813]: I0219 20:04:22.888041 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9znz\" (UniqueName: \"kubernetes.io/projected/d83b25e4-4a7e-4e6f-94da-25793b440419-kube-api-access-l9znz\") pod \"octavia-persistence-db-create-bhnwp\" (UID: \"d83b25e4-4a7e-4e6f-94da-25793b440419\") " pod="openstack/octavia-persistence-db-create-bhnwp" Feb 19 20:04:22 crc kubenswrapper[4813]: I0219 20:04:22.888970 
4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d83b25e4-4a7e-4e6f-94da-25793b440419-operator-scripts\") pod \"octavia-persistence-db-create-bhnwp\" (UID: \"d83b25e4-4a7e-4e6f-94da-25793b440419\") " pod="openstack/octavia-persistence-db-create-bhnwp" Feb 19 20:04:22 crc kubenswrapper[4813]: I0219 20:04:22.905754 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9znz\" (UniqueName: \"kubernetes.io/projected/d83b25e4-4a7e-4e6f-94da-25793b440419-kube-api-access-l9znz\") pod \"octavia-persistence-db-create-bhnwp\" (UID: \"d83b25e4-4a7e-4e6f-94da-25793b440419\") " pod="openstack/octavia-persistence-db-create-bhnwp" Feb 19 20:04:23 crc kubenswrapper[4813]: I0219 20:04:23.036844 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-bhnwp" Feb 19 20:04:23 crc kubenswrapper[4813]: I0219 20:04:23.226708 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-5107-account-create-update-qv6n2"] Feb 19 20:04:23 crc kubenswrapper[4813]: I0219 20:04:23.228498 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-5107-account-create-update-qv6n2" Feb 19 20:04:23 crc kubenswrapper[4813]: I0219 20:04:23.233251 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Feb 19 20:04:23 crc kubenswrapper[4813]: I0219 20:04:23.242117 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-5107-account-create-update-qv6n2"] Feb 19 20:04:23 crc kubenswrapper[4813]: I0219 20:04:23.301567 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs9rt\" (UniqueName: \"kubernetes.io/projected/14b1ae18-f817-4a70-b54e-9571ed349e07-kube-api-access-qs9rt\") pod \"octavia-5107-account-create-update-qv6n2\" (UID: \"14b1ae18-f817-4a70-b54e-9571ed349e07\") " pod="openstack/octavia-5107-account-create-update-qv6n2" Feb 19 20:04:23 crc kubenswrapper[4813]: I0219 20:04:23.301653 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14b1ae18-f817-4a70-b54e-9571ed349e07-operator-scripts\") pod \"octavia-5107-account-create-update-qv6n2\" (UID: \"14b1ae18-f817-4a70-b54e-9571ed349e07\") " pod="openstack/octavia-5107-account-create-update-qv6n2" Feb 19 20:04:23 crc kubenswrapper[4813]: I0219 20:04:23.403559 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14b1ae18-f817-4a70-b54e-9571ed349e07-operator-scripts\") pod \"octavia-5107-account-create-update-qv6n2\" (UID: \"14b1ae18-f817-4a70-b54e-9571ed349e07\") " pod="openstack/octavia-5107-account-create-update-qv6n2" Feb 19 20:04:23 crc kubenswrapper[4813]: I0219 20:04:23.403761 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs9rt\" (UniqueName: 
\"kubernetes.io/projected/14b1ae18-f817-4a70-b54e-9571ed349e07-kube-api-access-qs9rt\") pod \"octavia-5107-account-create-update-qv6n2\" (UID: \"14b1ae18-f817-4a70-b54e-9571ed349e07\") " pod="openstack/octavia-5107-account-create-update-qv6n2" Feb 19 20:04:23 crc kubenswrapper[4813]: I0219 20:04:23.404881 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14b1ae18-f817-4a70-b54e-9571ed349e07-operator-scripts\") pod \"octavia-5107-account-create-update-qv6n2\" (UID: \"14b1ae18-f817-4a70-b54e-9571ed349e07\") " pod="openstack/octavia-5107-account-create-update-qv6n2" Feb 19 20:04:23 crc kubenswrapper[4813]: I0219 20:04:23.426105 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs9rt\" (UniqueName: \"kubernetes.io/projected/14b1ae18-f817-4a70-b54e-9571ed349e07-kube-api-access-qs9rt\") pod \"octavia-5107-account-create-update-qv6n2\" (UID: \"14b1ae18-f817-4a70-b54e-9571ed349e07\") " pod="openstack/octavia-5107-account-create-update-qv6n2" Feb 19 20:04:23 crc kubenswrapper[4813]: I0219 20:04:23.523471 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-bhnwp"] Feb 19 20:04:23 crc kubenswrapper[4813]: W0219 20:04:23.527286 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd83b25e4_4a7e_4e6f_94da_25793b440419.slice/crio-027ddaa790272ef59a066c9ed6d8c60bd5d14424fd13b52877e7fc66a1b1b621 WatchSource:0}: Error finding container 027ddaa790272ef59a066c9ed6d8c60bd5d14424fd13b52877e7fc66a1b1b621: Status 404 returned error can't find the container with id 027ddaa790272ef59a066c9ed6d8c60bd5d14424fd13b52877e7fc66a1b1b621 Feb 19 20:04:23 crc kubenswrapper[4813]: I0219 20:04:23.558375 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-5107-account-create-update-qv6n2" Feb 19 20:04:23 crc kubenswrapper[4813]: I0219 20:04:23.663372 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-bhnwp" event={"ID":"d83b25e4-4a7e-4e6f-94da-25793b440419","Type":"ContainerStarted","Data":"027ddaa790272ef59a066c9ed6d8c60bd5d14424fd13b52877e7fc66a1b1b621"} Feb 19 20:04:24 crc kubenswrapper[4813]: W0219 20:04:24.048940 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14b1ae18_f817_4a70_b54e_9571ed349e07.slice/crio-8bf431d0edda7428f12351ca709bf0494f906fa1985ac0fabde23670994ba177 WatchSource:0}: Error finding container 8bf431d0edda7428f12351ca709bf0494f906fa1985ac0fabde23670994ba177: Status 404 returned error can't find the container with id 8bf431d0edda7428f12351ca709bf0494f906fa1985ac0fabde23670994ba177 Feb 19 20:04:24 crc kubenswrapper[4813]: I0219 20:04:24.056909 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-5107-account-create-update-qv6n2"] Feb 19 20:04:24 crc kubenswrapper[4813]: I0219 20:04:24.673489 4813 generic.go:334] "Generic (PLEG): container finished" podID="14b1ae18-f817-4a70-b54e-9571ed349e07" containerID="a87f0ba8eeb39bf7450f5036a23b5f8cece1e0c3a0e8cec3387c864de42b758b" exitCode=0 Feb 19 20:04:24 crc kubenswrapper[4813]: I0219 20:04:24.673604 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-5107-account-create-update-qv6n2" event={"ID":"14b1ae18-f817-4a70-b54e-9571ed349e07","Type":"ContainerDied","Data":"a87f0ba8eeb39bf7450f5036a23b5f8cece1e0c3a0e8cec3387c864de42b758b"} Feb 19 20:04:24 crc kubenswrapper[4813]: I0219 20:04:24.673915 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-5107-account-create-update-qv6n2" 
event={"ID":"14b1ae18-f817-4a70-b54e-9571ed349e07","Type":"ContainerStarted","Data":"8bf431d0edda7428f12351ca709bf0494f906fa1985ac0fabde23670994ba177"} Feb 19 20:04:24 crc kubenswrapper[4813]: I0219 20:04:24.676480 4813 generic.go:334] "Generic (PLEG): container finished" podID="d83b25e4-4a7e-4e6f-94da-25793b440419" containerID="9ee6ea12376792de511cb17ed67c16a30248607780b9ba0d8947b0921bba6870" exitCode=0 Feb 19 20:04:24 crc kubenswrapper[4813]: I0219 20:04:24.676506 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-bhnwp" event={"ID":"d83b25e4-4a7e-4e6f-94da-25793b440419","Type":"ContainerDied","Data":"9ee6ea12376792de511cb17ed67c16a30248607780b9ba0d8947b0921bba6870"} Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.086481 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-bhnwp" Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.091275 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-5107-account-create-update-qv6n2" Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.155459 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14b1ae18-f817-4a70-b54e-9571ed349e07-operator-scripts\") pod \"14b1ae18-f817-4a70-b54e-9571ed349e07\" (UID: \"14b1ae18-f817-4a70-b54e-9571ed349e07\") " Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.155834 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs9rt\" (UniqueName: \"kubernetes.io/projected/14b1ae18-f817-4a70-b54e-9571ed349e07-kube-api-access-qs9rt\") pod \"14b1ae18-f817-4a70-b54e-9571ed349e07\" (UID: \"14b1ae18-f817-4a70-b54e-9571ed349e07\") " Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.155889 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14b1ae18-f817-4a70-b54e-9571ed349e07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14b1ae18-f817-4a70-b54e-9571ed349e07" (UID: "14b1ae18-f817-4a70-b54e-9571ed349e07"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.155928 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d83b25e4-4a7e-4e6f-94da-25793b440419-operator-scripts\") pod \"d83b25e4-4a7e-4e6f-94da-25793b440419\" (UID: \"d83b25e4-4a7e-4e6f-94da-25793b440419\") " Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.156004 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9znz\" (UniqueName: \"kubernetes.io/projected/d83b25e4-4a7e-4e6f-94da-25793b440419-kube-api-access-l9znz\") pod \"d83b25e4-4a7e-4e6f-94da-25793b440419\" (UID: \"d83b25e4-4a7e-4e6f-94da-25793b440419\") " Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.156524 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14b1ae18-f817-4a70-b54e-9571ed349e07-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.156663 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83b25e4-4a7e-4e6f-94da-25793b440419-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d83b25e4-4a7e-4e6f-94da-25793b440419" (UID: "d83b25e4-4a7e-4e6f-94da-25793b440419"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.163459 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83b25e4-4a7e-4e6f-94da-25793b440419-kube-api-access-l9znz" (OuterVolumeSpecName: "kube-api-access-l9znz") pod "d83b25e4-4a7e-4e6f-94da-25793b440419" (UID: "d83b25e4-4a7e-4e6f-94da-25793b440419"). InnerVolumeSpecName "kube-api-access-l9znz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.178406 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b1ae18-f817-4a70-b54e-9571ed349e07-kube-api-access-qs9rt" (OuterVolumeSpecName: "kube-api-access-qs9rt") pod "14b1ae18-f817-4a70-b54e-9571ed349e07" (UID: "14b1ae18-f817-4a70-b54e-9571ed349e07"). InnerVolumeSpecName "kube-api-access-qs9rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.258664 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs9rt\" (UniqueName: \"kubernetes.io/projected/14b1ae18-f817-4a70-b54e-9571ed349e07-kube-api-access-qs9rt\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.258701 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d83b25e4-4a7e-4e6f-94da-25793b440419-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.258713 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9znz\" (UniqueName: \"kubernetes.io/projected/d83b25e4-4a7e-4e6f-94da-25793b440419-kube-api-access-l9znz\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.692907 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-bhnwp" event={"ID":"d83b25e4-4a7e-4e6f-94da-25793b440419","Type":"ContainerDied","Data":"027ddaa790272ef59a066c9ed6d8c60bd5d14424fd13b52877e7fc66a1b1b621"} Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.692946 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="027ddaa790272ef59a066c9ed6d8c60bd5d14424fd13b52877e7fc66a1b1b621" Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.692940 4813 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-bhnwp" Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.694333 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-5107-account-create-update-qv6n2" event={"ID":"14b1ae18-f817-4a70-b54e-9571ed349e07","Type":"ContainerDied","Data":"8bf431d0edda7428f12351ca709bf0494f906fa1985ac0fabde23670994ba177"} Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.694352 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bf431d0edda7428f12351ca709bf0494f906fa1985ac0fabde23670994ba177" Feb 19 20:04:26 crc kubenswrapper[4813]: I0219 20:04:26.694395 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-5107-account-create-update-qv6n2" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.312080 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-7fb974d8b5-2rxtm"] Feb 19 20:04:29 crc kubenswrapper[4813]: E0219 20:04:29.312852 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83b25e4-4a7e-4e6f-94da-25793b440419" containerName="mariadb-database-create" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.312869 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83b25e4-4a7e-4e6f-94da-25793b440419" containerName="mariadb-database-create" Feb 19 20:04:29 crc kubenswrapper[4813]: E0219 20:04:29.312918 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b1ae18-f817-4a70-b54e-9571ed349e07" containerName="mariadb-account-create-update" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.312928 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b1ae18-f817-4a70-b54e-9571ed349e07" containerName="mariadb-account-create-update" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.313171 4813 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="14b1ae18-f817-4a70-b54e-9571ed349e07" containerName="mariadb-account-create-update" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.313206 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83b25e4-4a7e-4e6f-94da-25793b440419" containerName="mariadb-database-create" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.314995 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.321633 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.321680 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.322045 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-6dnk5" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.335987 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-7fb974d8b5-2rxtm"] Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.415844 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29-scripts\") pod \"octavia-api-7fb974d8b5-2rxtm\" (UID: \"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29\") " pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.415933 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29-octavia-run\") pod \"octavia-api-7fb974d8b5-2rxtm\" (UID: \"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29\") " pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 
20:04:29.416072 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29-combined-ca-bundle\") pod \"octavia-api-7fb974d8b5-2rxtm\" (UID: \"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29\") " pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.416122 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29-config-data\") pod \"octavia-api-7fb974d8b5-2rxtm\" (UID: \"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29\") " pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.416152 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29-config-data-merged\") pod \"octavia-api-7fb974d8b5-2rxtm\" (UID: \"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29\") " pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.517671 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29-combined-ca-bundle\") pod \"octavia-api-7fb974d8b5-2rxtm\" (UID: \"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29\") " pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.517772 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29-config-data\") pod \"octavia-api-7fb974d8b5-2rxtm\" (UID: \"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29\") " pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: 
I0219 20:04:29.517808 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29-config-data-merged\") pod \"octavia-api-7fb974d8b5-2rxtm\" (UID: \"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29\") " pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.517863 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29-scripts\") pod \"octavia-api-7fb974d8b5-2rxtm\" (UID: \"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29\") " pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.518020 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29-octavia-run\") pod \"octavia-api-7fb974d8b5-2rxtm\" (UID: \"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29\") " pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.518819 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29-config-data-merged\") pod \"octavia-api-7fb974d8b5-2rxtm\" (UID: \"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29\") " pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.520845 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29-octavia-run\") pod \"octavia-api-7fb974d8b5-2rxtm\" (UID: \"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29\") " pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.524603 4813 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29-scripts\") pod \"octavia-api-7fb974d8b5-2rxtm\" (UID: \"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29\") " pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.524642 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29-combined-ca-bundle\") pod \"octavia-api-7fb974d8b5-2rxtm\" (UID: \"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29\") " pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.526690 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29-config-data\") pod \"octavia-api-7fb974d8b5-2rxtm\" (UID: \"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29\") " pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.651292 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.911563 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-frg8l" podUID="fdd55e37-6da1-4dfb-809c-0074790b1ffc" containerName="ovn-controller" probeResult="failure" output=< Feb 19 20:04:29 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 20:04:29 crc kubenswrapper[4813]: > Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.928694 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:04:29 crc kubenswrapper[4813]: I0219 20:04:29.978362 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6htkx" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.086549 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-frg8l-config-w9xjw"] Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.088161 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.090697 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.102619 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-frg8l-config-w9xjw"] Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.151682 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-run-ovn\") pod \"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.151980 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdj7g\" (UniqueName: \"kubernetes.io/projected/c083577f-0765-4288-94a7-7e30bbf65619-kube-api-access-hdj7g\") pod \"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.152230 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c083577f-0765-4288-94a7-7e30bbf65619-additional-scripts\") pod \"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.152263 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-log-ovn\") pod 
\"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.152560 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c083577f-0765-4288-94a7-7e30bbf65619-scripts\") pod \"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.152638 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-run\") pod \"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.259016 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c083577f-0765-4288-94a7-7e30bbf65619-scripts\") pod \"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.259102 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-run\") pod \"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.259127 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-run-ovn\") pod 
\"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.259208 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdj7g\" (UniqueName: \"kubernetes.io/projected/c083577f-0765-4288-94a7-7e30bbf65619-kube-api-access-hdj7g\") pod \"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.259275 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c083577f-0765-4288-94a7-7e30bbf65619-additional-scripts\") pod \"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.259295 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-log-ovn\") pod \"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.259631 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-log-ovn\") pod \"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.261918 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c083577f-0765-4288-94a7-7e30bbf65619-scripts\") pod 
\"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.262058 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-run\") pod \"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.262117 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-run-ovn\") pod \"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.262480 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c083577f-0765-4288-94a7-7e30bbf65619-additional-scripts\") pod \"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.283353 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdj7g\" (UniqueName: \"kubernetes.io/projected/c083577f-0765-4288-94a7-7e30bbf65619-kube-api-access-hdj7g\") pod \"ovn-controller-frg8l-config-w9xjw\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.284401 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-7fb974d8b5-2rxtm"] Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.329722 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.329773 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.406335 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.735161 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7fb974d8b5-2rxtm" event={"ID":"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29","Type":"ContainerStarted","Data":"0eac7e88e06bdf2f47fd3f9340d6f73c1a60606272452a92643bc0befa6175c7"} Feb 19 20:04:30 crc kubenswrapper[4813]: I0219 20:04:30.916982 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-frg8l-config-w9xjw"] Feb 19 20:04:30 crc kubenswrapper[4813]: W0219 20:04:30.934235 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc083577f_0765_4288_94a7_7e30bbf65619.slice/crio-d64460dc6658ef63ec557aad65130a25da995022ec2b8a8f0e579ab9245306b4 WatchSource:0}: Error finding container d64460dc6658ef63ec557aad65130a25da995022ec2b8a8f0e579ab9245306b4: Status 404 returned error can't find the container with id d64460dc6658ef63ec557aad65130a25da995022ec2b8a8f0e579ab9245306b4 Feb 19 20:04:31 crc kubenswrapper[4813]: I0219 20:04:31.745402 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frg8l-config-w9xjw" 
event={"ID":"c083577f-0765-4288-94a7-7e30bbf65619","Type":"ContainerStarted","Data":"7afb80387c7b3f66bae5dec261f74209b4f4c2ed4d1056202f64e21a7b083774"} Feb 19 20:04:31 crc kubenswrapper[4813]: I0219 20:04:31.746123 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frg8l-config-w9xjw" event={"ID":"c083577f-0765-4288-94a7-7e30bbf65619","Type":"ContainerStarted","Data":"d64460dc6658ef63ec557aad65130a25da995022ec2b8a8f0e579ab9245306b4"} Feb 19 20:04:32 crc kubenswrapper[4813]: I0219 20:04:32.759062 4813 generic.go:334] "Generic (PLEG): container finished" podID="c083577f-0765-4288-94a7-7e30bbf65619" containerID="7afb80387c7b3f66bae5dec261f74209b4f4c2ed4d1056202f64e21a7b083774" exitCode=0 Feb 19 20:04:32 crc kubenswrapper[4813]: I0219 20:04:32.759119 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frg8l-config-w9xjw" event={"ID":"c083577f-0765-4288-94a7-7e30bbf65619","Type":"ContainerDied","Data":"7afb80387c7b3f66bae5dec261f74209b4f4c2ed4d1056202f64e21a7b083774"} Feb 19 20:04:34 crc kubenswrapper[4813]: I0219 20:04:34.923029 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-frg8l" Feb 19 20:04:39 crc kubenswrapper[4813]: I0219 20:04:39.952181 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.087169 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c083577f-0765-4288-94a7-7e30bbf65619-additional-scripts\") pod \"c083577f-0765-4288-94a7-7e30bbf65619\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.087287 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-run-ovn\") pod \"c083577f-0765-4288-94a7-7e30bbf65619\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.087341 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c083577f-0765-4288-94a7-7e30bbf65619-scripts\") pod \"c083577f-0765-4288-94a7-7e30bbf65619\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.087440 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdj7g\" (UniqueName: \"kubernetes.io/projected/c083577f-0765-4288-94a7-7e30bbf65619-kube-api-access-hdj7g\") pod \"c083577f-0765-4288-94a7-7e30bbf65619\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.087467 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-run\") pod \"c083577f-0765-4288-94a7-7e30bbf65619\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.087523 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-log-ovn\") pod \"c083577f-0765-4288-94a7-7e30bbf65619\" (UID: \"c083577f-0765-4288-94a7-7e30bbf65619\") " Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.087904 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-run" (OuterVolumeSpecName: "var-run") pod "c083577f-0765-4288-94a7-7e30bbf65619" (UID: "c083577f-0765-4288-94a7-7e30bbf65619"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.087944 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c083577f-0765-4288-94a7-7e30bbf65619" (UID: "c083577f-0765-4288-94a7-7e30bbf65619"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.087988 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c083577f-0765-4288-94a7-7e30bbf65619" (UID: "c083577f-0765-4288-94a7-7e30bbf65619"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.088783 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c083577f-0765-4288-94a7-7e30bbf65619-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c083577f-0765-4288-94a7-7e30bbf65619" (UID: "c083577f-0765-4288-94a7-7e30bbf65619"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.089228 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c083577f-0765-4288-94a7-7e30bbf65619-scripts" (OuterVolumeSpecName: "scripts") pod "c083577f-0765-4288-94a7-7e30bbf65619" (UID: "c083577f-0765-4288-94a7-7e30bbf65619"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.093585 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c083577f-0765-4288-94a7-7e30bbf65619-kube-api-access-hdj7g" (OuterVolumeSpecName: "kube-api-access-hdj7g") pod "c083577f-0765-4288-94a7-7e30bbf65619" (UID: "c083577f-0765-4288-94a7-7e30bbf65619"). InnerVolumeSpecName "kube-api-access-hdj7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.189564 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c083577f-0765-4288-94a7-7e30bbf65619-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.189621 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdj7g\" (UniqueName: \"kubernetes.io/projected/c083577f-0765-4288-94a7-7e30bbf65619-kube-api-access-hdj7g\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.189634 4813 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.189643 4813 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 
20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.189652 4813 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c083577f-0765-4288-94a7-7e30bbf65619-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.189661 4813 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c083577f-0765-4288-94a7-7e30bbf65619-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.842608 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-frg8l-config-w9xjw" Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.842602 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frg8l-config-w9xjw" event={"ID":"c083577f-0765-4288-94a7-7e30bbf65619","Type":"ContainerDied","Data":"d64460dc6658ef63ec557aad65130a25da995022ec2b8a8f0e579ab9245306b4"} Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.843228 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d64460dc6658ef63ec557aad65130a25da995022ec2b8a8f0e579ab9245306b4" Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.844042 4813 generic.go:334] "Generic (PLEG): container finished" podID="ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29" containerID="f449b034b969013139a869121f351a2404ee7cfcd933fdb6a45150e779dc8b2b" exitCode=0 Feb 19 20:04:40 crc kubenswrapper[4813]: I0219 20:04:40.844071 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7fb974d8b5-2rxtm" event={"ID":"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29","Type":"ContainerDied","Data":"f449b034b969013139a869121f351a2404ee7cfcd933fdb6a45150e779dc8b2b"} Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.049425 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-frg8l-config-w9xjw"] 
Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.064442 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-frg8l-config-w9xjw"] Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.173601 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-frg8l-config-fks9h"] Feb 19 20:04:41 crc kubenswrapper[4813]: E0219 20:04:41.174096 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c083577f-0765-4288-94a7-7e30bbf65619" containerName="ovn-config" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.174120 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c083577f-0765-4288-94a7-7e30bbf65619" containerName="ovn-config" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.174360 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c083577f-0765-4288-94a7-7e30bbf65619" containerName="ovn-config" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.175358 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.177981 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.207193 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-frg8l-config-fks9h"] Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.311322 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-run\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.311395 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dkbv\" (UniqueName: \"kubernetes.io/projected/29a775ce-f95b-419a-ad88-b3a9a132012c-kube-api-access-2dkbv\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.311841 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-run-ovn\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.311886 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/29a775ce-f95b-419a-ad88-b3a9a132012c-additional-scripts\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: 
\"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.311918 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-log-ovn\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.312062 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29a775ce-f95b-419a-ad88-b3a9a132012c-scripts\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.429880 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-run\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.429977 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dkbv\" (UniqueName: \"kubernetes.io/projected/29a775ce-f95b-419a-ad88-b3a9a132012c-kube-api-access-2dkbv\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.430160 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-run-ovn\") pod 
\"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.430188 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/29a775ce-f95b-419a-ad88-b3a9a132012c-additional-scripts\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.430213 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-log-ovn\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.430240 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29a775ce-f95b-419a-ad88-b3a9a132012c-scripts\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.430654 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-run\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.430739 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-run-ovn\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: 
\"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.430746 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-log-ovn\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.431215 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/29a775ce-f95b-419a-ad88-b3a9a132012c-additional-scripts\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.432769 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29a775ce-f95b-419a-ad88-b3a9a132012c-scripts\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.455413 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dkbv\" (UniqueName: \"kubernetes.io/projected/29a775ce-f95b-419a-ad88-b3a9a132012c-kube-api-access-2dkbv\") pod \"ovn-controller-frg8l-config-fks9h\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.484149 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c083577f-0765-4288-94a7-7e30bbf65619" path="/var/lib/kubelet/pods/c083577f-0765-4288-94a7-7e30bbf65619/volumes" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.604132 4813 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.856912 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7fb974d8b5-2rxtm" event={"ID":"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29","Type":"ContainerStarted","Data":"af5b2fbac21863427cbba79960a71a1f6c471c6330819b36c5bf1733e650936f"} Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.857612 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.857633 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7fb974d8b5-2rxtm" event={"ID":"ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29","Type":"ContainerStarted","Data":"16e348424a2595cb7bf515f7e88338c1fa852823ecf008080f08d654d545f5d0"} Feb 19 20:04:41 crc kubenswrapper[4813]: I0219 20:04:41.874123 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-7fb974d8b5-2rxtm" podStartSLOduration=3.182374492 podStartE2EDuration="12.874105937s" podCreationTimestamp="2026-02-19 20:04:29 +0000 UTC" firstStartedPulling="2026-02-19 20:04:30.288867887 +0000 UTC m=+5689.514308428" lastFinishedPulling="2026-02-19 20:04:39.980599332 +0000 UTC m=+5699.206039873" observedRunningTime="2026-02-19 20:04:41.87385991 +0000 UTC m=+5701.099300461" watchObservedRunningTime="2026-02-19 20:04:41.874105937 +0000 UTC m=+5701.099546478" Feb 19 20:04:42 crc kubenswrapper[4813]: I0219 20:04:42.105668 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-frg8l-config-fks9h"] Feb 19 20:04:42 crc kubenswrapper[4813]: I0219 20:04:42.866342 4813 generic.go:334] "Generic (PLEG): container finished" podID="29a775ce-f95b-419a-ad88-b3a9a132012c" containerID="e2aa76984ade9e60fc90274fd883b225a87c0c20c0556bfa465274c66c399a76" exitCode=0 Feb 19 20:04:42 crc 
kubenswrapper[4813]: I0219 20:04:42.866496 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frg8l-config-fks9h" event={"ID":"29a775ce-f95b-419a-ad88-b3a9a132012c","Type":"ContainerDied","Data":"e2aa76984ade9e60fc90274fd883b225a87c0c20c0556bfa465274c66c399a76"} Feb 19 20:04:42 crc kubenswrapper[4813]: I0219 20:04:42.866989 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frg8l-config-fks9h" event={"ID":"29a775ce-f95b-419a-ad88-b3a9a132012c","Type":"ContainerStarted","Data":"fe7f25c3df3690f835866cd5b6a055b4470f4e0d40075ea8472ac5d20817382a"} Feb 19 20:04:42 crc kubenswrapper[4813]: I0219 20:04:42.867234 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.243637 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.422932 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-run-ovn\") pod \"29a775ce-f95b-419a-ad88-b3a9a132012c\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.423045 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-log-ovn\") pod \"29a775ce-f95b-419a-ad88-b3a9a132012c\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.423047 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "29a775ce-f95b-419a-ad88-b3a9a132012c" (UID: 
"29a775ce-f95b-419a-ad88-b3a9a132012c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.423172 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "29a775ce-f95b-419a-ad88-b3a9a132012c" (UID: "29a775ce-f95b-419a-ad88-b3a9a132012c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.423180 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/29a775ce-f95b-419a-ad88-b3a9a132012c-additional-scripts\") pod \"29a775ce-f95b-419a-ad88-b3a9a132012c\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.423262 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dkbv\" (UniqueName: \"kubernetes.io/projected/29a775ce-f95b-419a-ad88-b3a9a132012c-kube-api-access-2dkbv\") pod \"29a775ce-f95b-419a-ad88-b3a9a132012c\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.423378 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-run\") pod \"29a775ce-f95b-419a-ad88-b3a9a132012c\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.423625 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29a775ce-f95b-419a-ad88-b3a9a132012c-scripts\") pod \"29a775ce-f95b-419a-ad88-b3a9a132012c\" (UID: \"29a775ce-f95b-419a-ad88-b3a9a132012c\") " Feb 19 20:04:44 crc 
kubenswrapper[4813]: I0219 20:04:44.423910 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29a775ce-f95b-419a-ad88-b3a9a132012c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "29a775ce-f95b-419a-ad88-b3a9a132012c" (UID: "29a775ce-f95b-419a-ad88-b3a9a132012c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.424465 4813 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.424497 4813 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.424514 4813 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/29a775ce-f95b-419a-ad88-b3a9a132012c-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.424494 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-run" (OuterVolumeSpecName: "var-run") pod "29a775ce-f95b-419a-ad88-b3a9a132012c" (UID: "29a775ce-f95b-419a-ad88-b3a9a132012c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.425771 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29a775ce-f95b-419a-ad88-b3a9a132012c-scripts" (OuterVolumeSpecName: "scripts") pod "29a775ce-f95b-419a-ad88-b3a9a132012c" (UID: "29a775ce-f95b-419a-ad88-b3a9a132012c"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.429881 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a775ce-f95b-419a-ad88-b3a9a132012c-kube-api-access-2dkbv" (OuterVolumeSpecName: "kube-api-access-2dkbv") pod "29a775ce-f95b-419a-ad88-b3a9a132012c" (UID: "29a775ce-f95b-419a-ad88-b3a9a132012c"). InnerVolumeSpecName "kube-api-access-2dkbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.531148 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dkbv\" (UniqueName: \"kubernetes.io/projected/29a775ce-f95b-419a-ad88-b3a9a132012c-kube-api-access-2dkbv\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.531205 4813 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/29a775ce-f95b-419a-ad88-b3a9a132012c-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.531216 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29a775ce-f95b-419a-ad88-b3a9a132012c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.888462 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-frg8l-config-fks9h" event={"ID":"29a775ce-f95b-419a-ad88-b3a9a132012c","Type":"ContainerDied","Data":"fe7f25c3df3690f835866cd5b6a055b4470f4e0d40075ea8472ac5d20817382a"} Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.888848 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7f25c3df3690f835866cd5b6a055b4470f4e0d40075ea8472ac5d20817382a" Feb 19 20:04:44 crc kubenswrapper[4813]: I0219 20:04:44.888538 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-frg8l-config-fks9h" Feb 19 20:04:45 crc kubenswrapper[4813]: I0219 20:04:45.325045 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-frg8l-config-fks9h"] Feb 19 20:04:45 crc kubenswrapper[4813]: I0219 20:04:45.336063 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-frg8l-config-fks9h"] Feb 19 20:04:45 crc kubenswrapper[4813]: I0219 20:04:45.484344 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a775ce-f95b-419a-ad88-b3a9a132012c" path="/var/lib/kubelet/pods/29a775ce-f95b-419a-ad88-b3a9a132012c/volumes" Feb 19 20:04:52 crc kubenswrapper[4813]: I0219 20:04:52.987126 4813 scope.go:117] "RemoveContainer" containerID="649cd16b233c4eca4393209d12c0e7988532812fafb39a3ef659a9c174df8a7c" Feb 19 20:04:53 crc kubenswrapper[4813]: I0219 20:04:53.018863 4813 scope.go:117] "RemoveContainer" containerID="ee915a7dc90b651974b775d56205a91aedde0e1ff436c0eccb0ee630dcd03fc2" Feb 19 20:04:53 crc kubenswrapper[4813]: I0219 20:04:53.110291 4813 scope.go:117] "RemoveContainer" containerID="3fdd14a3a0a78a2a46335d4ccb93ca0f6a434cad6a06adb36c1eabfd45db7cbe" Feb 19 20:04:53 crc kubenswrapper[4813]: I0219 20:04:53.133490 4813 scope.go:117] "RemoveContainer" containerID="a36c1c3af030f72abab22b0fb1f0485bc42c127f8c420510293f922eec434aa4" Feb 19 20:04:58 crc kubenswrapper[4813]: I0219 20:04:58.829560 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-j897l"] Feb 19 20:04:58 crc kubenswrapper[4813]: E0219 20:04:58.830575 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a775ce-f95b-419a-ad88-b3a9a132012c" containerName="ovn-config" Feb 19 20:04:58 crc kubenswrapper[4813]: I0219 20:04:58.830686 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a775ce-f95b-419a-ad88-b3a9a132012c" containerName="ovn-config" Feb 19 20:04:58 crc kubenswrapper[4813]: I0219 20:04:58.830932 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="29a775ce-f95b-419a-ad88-b3a9a132012c" containerName="ovn-config" Feb 19 20:04:58 crc kubenswrapper[4813]: I0219 20:04:58.832184 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-j897l" Feb 19 20:04:58 crc kubenswrapper[4813]: I0219 20:04:58.835912 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Feb 19 20:04:58 crc kubenswrapper[4813]: I0219 20:04:58.836706 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Feb 19 20:04:58 crc kubenswrapper[4813]: I0219 20:04:58.837148 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Feb 19 20:04:58 crc kubenswrapper[4813]: I0219 20:04:58.838313 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-j897l"] Feb 19 20:04:58 crc kubenswrapper[4813]: I0219 20:04:58.912498 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/41435d95-b06f-4563-b3d2-d5770f2d8116-config-data-merged\") pod \"octavia-rsyslog-j897l\" (UID: \"41435d95-b06f-4563-b3d2-d5770f2d8116\") " pod="openstack/octavia-rsyslog-j897l" Feb 19 20:04:58 crc kubenswrapper[4813]: I0219 20:04:58.912587 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41435d95-b06f-4563-b3d2-d5770f2d8116-scripts\") pod \"octavia-rsyslog-j897l\" (UID: \"41435d95-b06f-4563-b3d2-d5770f2d8116\") " pod="openstack/octavia-rsyslog-j897l" Feb 19 20:04:58 crc kubenswrapper[4813]: I0219 20:04:58.912614 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/41435d95-b06f-4563-b3d2-d5770f2d8116-config-data\") pod \"octavia-rsyslog-j897l\" (UID: \"41435d95-b06f-4563-b3d2-d5770f2d8116\") " pod="openstack/octavia-rsyslog-j897l" Feb 19 20:04:58 crc kubenswrapper[4813]: I0219 20:04:58.912657 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/41435d95-b06f-4563-b3d2-d5770f2d8116-hm-ports\") pod \"octavia-rsyslog-j897l\" (UID: \"41435d95-b06f-4563-b3d2-d5770f2d8116\") " pod="openstack/octavia-rsyslog-j897l" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.014503 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/41435d95-b06f-4563-b3d2-d5770f2d8116-config-data-merged\") pod \"octavia-rsyslog-j897l\" (UID: \"41435d95-b06f-4563-b3d2-d5770f2d8116\") " pod="openstack/octavia-rsyslog-j897l" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.014565 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41435d95-b06f-4563-b3d2-d5770f2d8116-scripts\") pod \"octavia-rsyslog-j897l\" (UID: \"41435d95-b06f-4563-b3d2-d5770f2d8116\") " pod="openstack/octavia-rsyslog-j897l" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.014601 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41435d95-b06f-4563-b3d2-d5770f2d8116-config-data\") pod \"octavia-rsyslog-j897l\" (UID: \"41435d95-b06f-4563-b3d2-d5770f2d8116\") " pod="openstack/octavia-rsyslog-j897l" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.014648 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/41435d95-b06f-4563-b3d2-d5770f2d8116-hm-ports\") pod \"octavia-rsyslog-j897l\" (UID: 
\"41435d95-b06f-4563-b3d2-d5770f2d8116\") " pod="openstack/octavia-rsyslog-j897l" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.015330 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/41435d95-b06f-4563-b3d2-d5770f2d8116-config-data-merged\") pod \"octavia-rsyslog-j897l\" (UID: \"41435d95-b06f-4563-b3d2-d5770f2d8116\") " pod="openstack/octavia-rsyslog-j897l" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.015894 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/41435d95-b06f-4563-b3d2-d5770f2d8116-hm-ports\") pod \"octavia-rsyslog-j897l\" (UID: \"41435d95-b06f-4563-b3d2-d5770f2d8116\") " pod="openstack/octavia-rsyslog-j897l" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.020780 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41435d95-b06f-4563-b3d2-d5770f2d8116-scripts\") pod \"octavia-rsyslog-j897l\" (UID: \"41435d95-b06f-4563-b3d2-d5770f2d8116\") " pod="openstack/octavia-rsyslog-j897l" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.020792 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41435d95-b06f-4563-b3d2-d5770f2d8116-config-data\") pod \"octavia-rsyslog-j897l\" (UID: \"41435d95-b06f-4563-b3d2-d5770f2d8116\") " pod="openstack/octavia-rsyslog-j897l" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.160842 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-j897l" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.579656 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-4kvjs"] Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.581647 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.586081 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.589295 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-4kvjs"] Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.735814 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6-httpd-config\") pod \"octavia-image-upload-8d4564f8f-4kvjs\" (UID: \"82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6\") " pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.736068 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6-amphora-image\") pod \"octavia-image-upload-8d4564f8f-4kvjs\" (UID: \"82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6\") " pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.840263 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6-amphora-image\") pod \"octavia-image-upload-8d4564f8f-4kvjs\" (UID: \"82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6\") " pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.840681 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6-httpd-config\") pod \"octavia-image-upload-8d4564f8f-4kvjs\" (UID: \"82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6\") 
" pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.841261 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6-amphora-image\") pod \"octavia-image-upload-8d4564f8f-4kvjs\" (UID: \"82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6\") " pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.849140 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-j897l"] Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.851265 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6-httpd-config\") pod \"octavia-image-upload-8d4564f8f-4kvjs\" (UID: \"82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6\") " pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.913534 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" Feb 19 20:04:59 crc kubenswrapper[4813]: I0219 20:04:59.930552 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-j897l"] Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.023908 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-j897l" event={"ID":"41435d95-b06f-4563-b3d2-d5770f2d8116","Type":"ContainerStarted","Data":"f8eaec21a79ee005cef666bc801c6cdf682ff950ca3535440dcebc2608893fe7"} Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.330136 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.330540 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.613555 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-4kvjs"] Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.684788 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-pshkj"] Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.689182 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.694816 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.698670 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-pshkj"] Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.864210 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/28dca374-6303-4bb8-b64b-a08a87a702cb-config-data-merged\") pod \"octavia-db-sync-pshkj\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.864295 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-scripts\") pod \"octavia-db-sync-pshkj\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.864317 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-combined-ca-bundle\") pod \"octavia-db-sync-pshkj\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.864389 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-config-data\") pod \"octavia-db-sync-pshkj\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 
20:05:00.966677 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-scripts\") pod \"octavia-db-sync-pshkj\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.966724 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-combined-ca-bundle\") pod \"octavia-db-sync-pshkj\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.966810 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-config-data\") pod \"octavia-db-sync-pshkj\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.966924 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/28dca374-6303-4bb8-b64b-a08a87a702cb-config-data-merged\") pod \"octavia-db-sync-pshkj\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.967451 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/28dca374-6303-4bb8-b64b-a08a87a702cb-config-data-merged\") pod \"octavia-db-sync-pshkj\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.976608 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-combined-ca-bundle\") pod \"octavia-db-sync-pshkj\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.977871 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-config-data\") pod \"octavia-db-sync-pshkj\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:00 crc kubenswrapper[4813]: I0219 20:05:00.991388 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-scripts\") pod \"octavia-db-sync-pshkj\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:01 crc kubenswrapper[4813]: I0219 20:05:01.011859 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:01 crc kubenswrapper[4813]: I0219 20:05:01.040112 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" event={"ID":"82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6","Type":"ContainerStarted","Data":"f4bfec41de8295aab28819b0de01050c3bb6e19cc7058a57a5132b89d24e1540"} Feb 19 20:05:01 crc kubenswrapper[4813]: I0219 20:05:01.559829 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-pshkj"] Feb 19 20:05:02 crc kubenswrapper[4813]: W0219 20:05:02.175059 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28dca374_6303_4bb8_b64b_a08a87a702cb.slice/crio-0384a1ce0f33d61302135576aecc11ef72d05ed69c9d19053d1baa474b81bf45 WatchSource:0}: Error finding container 0384a1ce0f33d61302135576aecc11ef72d05ed69c9d19053d1baa474b81bf45: Status 404 returned error can't find the container with id 0384a1ce0f33d61302135576aecc11ef72d05ed69c9d19053d1baa474b81bf45 Feb 19 20:05:03 crc kubenswrapper[4813]: I0219 20:05:03.073200 4813 generic.go:334] "Generic (PLEG): container finished" podID="28dca374-6303-4bb8-b64b-a08a87a702cb" containerID="0a2411818244f676d33ce81f243f7ebd9f52c06f70f0dbb05aebe98a47ee7e6d" exitCode=0 Feb 19 20:05:03 crc kubenswrapper[4813]: I0219 20:05:03.073240 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-pshkj" event={"ID":"28dca374-6303-4bb8-b64b-a08a87a702cb","Type":"ContainerDied","Data":"0a2411818244f676d33ce81f243f7ebd9f52c06f70f0dbb05aebe98a47ee7e6d"} Feb 19 20:05:03 crc kubenswrapper[4813]: I0219 20:05:03.073465 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-pshkj" event={"ID":"28dca374-6303-4bb8-b64b-a08a87a702cb","Type":"ContainerStarted","Data":"0384a1ce0f33d61302135576aecc11ef72d05ed69c9d19053d1baa474b81bf45"} Feb 19 20:05:04 crc kubenswrapper[4813]: I0219 
20:05:04.088737 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-pshkj" event={"ID":"28dca374-6303-4bb8-b64b-a08a87a702cb","Type":"ContainerStarted","Data":"dc36ee8828ab0898b13efae2215462cd657982bedf308c7263eb1dd5322167ff"} Feb 19 20:05:04 crc kubenswrapper[4813]: I0219 20:05:04.095473 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-j897l" event={"ID":"41435d95-b06f-4563-b3d2-d5770f2d8116","Type":"ContainerStarted","Data":"db68b9a1b68b3464d031b0d442cf3b0e586f01bc4bc596dfe0ced7d4bd621995"} Feb 19 20:05:04 crc kubenswrapper[4813]: I0219 20:05:04.129939 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-pshkj" podStartSLOduration=4.12992052 podStartE2EDuration="4.12992052s" podCreationTimestamp="2026-02-19 20:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:05:04.12764318 +0000 UTC m=+5723.353083721" watchObservedRunningTime="2026-02-19 20:05:04.12992052 +0000 UTC m=+5723.355361061" Feb 19 20:05:05 crc kubenswrapper[4813]: I0219 20:05:05.224128 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:05:05 crc kubenswrapper[4813]: I0219 20:05:05.224409 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-7fb974d8b5-2rxtm" Feb 19 20:05:06 crc kubenswrapper[4813]: I0219 20:05:06.133408 4813 generic.go:334] "Generic (PLEG): container finished" podID="41435d95-b06f-4563-b3d2-d5770f2d8116" containerID="db68b9a1b68b3464d031b0d442cf3b0e586f01bc4bc596dfe0ced7d4bd621995" exitCode=0 Feb 19 20:05:06 crc kubenswrapper[4813]: I0219 20:05:06.134088 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-j897l" 
event={"ID":"41435d95-b06f-4563-b3d2-d5770f2d8116","Type":"ContainerDied","Data":"db68b9a1b68b3464d031b0d442cf3b0e586f01bc4bc596dfe0ced7d4bd621995"} Feb 19 20:05:06 crc kubenswrapper[4813]: I0219 20:05:06.140874 4813 generic.go:334] "Generic (PLEG): container finished" podID="28dca374-6303-4bb8-b64b-a08a87a702cb" containerID="dc36ee8828ab0898b13efae2215462cd657982bedf308c7263eb1dd5322167ff" exitCode=0 Feb 19 20:05:06 crc kubenswrapper[4813]: I0219 20:05:06.141250 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-pshkj" event={"ID":"28dca374-6303-4bb8-b64b-a08a87a702cb","Type":"ContainerDied","Data":"dc36ee8828ab0898b13efae2215462cd657982bedf308c7263eb1dd5322167ff"} Feb 19 20:05:10 crc kubenswrapper[4813]: I0219 20:05:10.763675 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:10 crc kubenswrapper[4813]: I0219 20:05:10.862750 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-combined-ca-bundle\") pod \"28dca374-6303-4bb8-b64b-a08a87a702cb\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " Feb 19 20:05:10 crc kubenswrapper[4813]: I0219 20:05:10.862866 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/28dca374-6303-4bb8-b64b-a08a87a702cb-config-data-merged\") pod \"28dca374-6303-4bb8-b64b-a08a87a702cb\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " Feb 19 20:05:10 crc kubenswrapper[4813]: I0219 20:05:10.862947 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-scripts\") pod \"28dca374-6303-4bb8-b64b-a08a87a702cb\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " Feb 19 20:05:10 crc kubenswrapper[4813]: I0219 
20:05:10.863104 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-config-data\") pod \"28dca374-6303-4bb8-b64b-a08a87a702cb\" (UID: \"28dca374-6303-4bb8-b64b-a08a87a702cb\") " Feb 19 20:05:10 crc kubenswrapper[4813]: I0219 20:05:10.871413 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-config-data" (OuterVolumeSpecName: "config-data") pod "28dca374-6303-4bb8-b64b-a08a87a702cb" (UID: "28dca374-6303-4bb8-b64b-a08a87a702cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:05:10 crc kubenswrapper[4813]: I0219 20:05:10.872104 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-scripts" (OuterVolumeSpecName: "scripts") pod "28dca374-6303-4bb8-b64b-a08a87a702cb" (UID: "28dca374-6303-4bb8-b64b-a08a87a702cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:05:10 crc kubenswrapper[4813]: I0219 20:05:10.894782 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28dca374-6303-4bb8-b64b-a08a87a702cb-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "28dca374-6303-4bb8-b64b-a08a87a702cb" (UID: "28dca374-6303-4bb8-b64b-a08a87a702cb"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:05:10 crc kubenswrapper[4813]: I0219 20:05:10.913105 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28dca374-6303-4bb8-b64b-a08a87a702cb" (UID: "28dca374-6303-4bb8-b64b-a08a87a702cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:05:10 crc kubenswrapper[4813]: I0219 20:05:10.972788 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:05:10 crc kubenswrapper[4813]: I0219 20:05:10.973059 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:05:10 crc kubenswrapper[4813]: I0219 20:05:10.973071 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/28dca374-6303-4bb8-b64b-a08a87a702cb-config-data-merged\") on node \"crc\" DevicePath \"\"" Feb 19 20:05:10 crc kubenswrapper[4813]: I0219 20:05:10.973081 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28dca374-6303-4bb8-b64b-a08a87a702cb-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:05:11 crc kubenswrapper[4813]: I0219 20:05:11.197315 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-pshkj" event={"ID":"28dca374-6303-4bb8-b64b-a08a87a702cb","Type":"ContainerDied","Data":"0384a1ce0f33d61302135576aecc11ef72d05ed69c9d19053d1baa474b81bf45"} Feb 19 20:05:11 crc kubenswrapper[4813]: I0219 20:05:11.197375 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-pshkj" Feb 19 20:05:11 crc kubenswrapper[4813]: I0219 20:05:11.197380 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0384a1ce0f33d61302135576aecc11ef72d05ed69c9d19053d1baa474b81bf45" Feb 19 20:05:13 crc kubenswrapper[4813]: I0219 20:05:13.220633 4813 generic.go:334] "Generic (PLEG): container finished" podID="82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6" containerID="1de6c452c45a334abb81176d47fd563291a3497e003a296dc511bacc768d95d8" exitCode=0 Feb 19 20:05:13 crc kubenswrapper[4813]: I0219 20:05:13.220906 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" event={"ID":"82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6","Type":"ContainerDied","Data":"1de6c452c45a334abb81176d47fd563291a3497e003a296dc511bacc768d95d8"} Feb 19 20:05:13 crc kubenswrapper[4813]: I0219 20:05:13.224241 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-j897l" event={"ID":"41435d95-b06f-4563-b3d2-d5770f2d8116","Type":"ContainerStarted","Data":"2fd0988abfbb02ad4df903fd6410bf9ae3e6232fdb15bd307d65b7fdfb89eba1"} Feb 19 20:05:13 crc kubenswrapper[4813]: I0219 20:05:13.224417 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-j897l" Feb 19 20:05:13 crc kubenswrapper[4813]: I0219 20:05:13.271266 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-j897l" podStartSLOduration=3.146356847 podStartE2EDuration="15.271250385s" podCreationTimestamp="2026-02-19 20:04:58 +0000 UTC" firstStartedPulling="2026-02-19 20:04:59.863104723 +0000 UTC m=+5719.088545264" lastFinishedPulling="2026-02-19 20:05:11.987998261 +0000 UTC m=+5731.213438802" observedRunningTime="2026-02-19 20:05:13.261594887 +0000 UTC m=+5732.487035428" watchObservedRunningTime="2026-02-19 20:05:13.271250385 +0000 UTC m=+5732.496690926" Feb 19 20:05:14 crc kubenswrapper[4813]: 
I0219 20:05:14.237330 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" event={"ID":"82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6","Type":"ContainerStarted","Data":"9cf4d63a913d7cab3db0c8206dd662290298e6784255e2f210442318ad8c093a"} Feb 19 20:05:14 crc kubenswrapper[4813]: I0219 20:05:14.255011 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" podStartSLOduration=3.797397041 podStartE2EDuration="15.254986003s" podCreationTimestamp="2026-02-19 20:04:59 +0000 UTC" firstStartedPulling="2026-02-19 20:05:00.620746015 +0000 UTC m=+5719.846186566" lastFinishedPulling="2026-02-19 20:05:12.078334987 +0000 UTC m=+5731.303775528" observedRunningTime="2026-02-19 20:05:14.25359175 +0000 UTC m=+5733.479032311" watchObservedRunningTime="2026-02-19 20:05:14.254986003 +0000 UTC m=+5733.480426544" Feb 19 20:05:29 crc kubenswrapper[4813]: I0219 20:05:29.207791 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-j897l" Feb 19 20:05:30 crc kubenswrapper[4813]: I0219 20:05:30.329749 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:05:30 crc kubenswrapper[4813]: I0219 20:05:30.330841 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:05:30 crc kubenswrapper[4813]: I0219 20:05:30.331038 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 20:05:30 crc kubenswrapper[4813]: I0219 20:05:30.332086 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:05:30 crc kubenswrapper[4813]: I0219 20:05:30.332322 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" gracePeriod=600 Feb 19 20:05:30 crc kubenswrapper[4813]: E0219 20:05:30.463203 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:05:31 crc kubenswrapper[4813]: I0219 20:05:31.407559 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" exitCode=0 Feb 19 20:05:31 crc kubenswrapper[4813]: I0219 20:05:31.407627 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241"} Feb 19 20:05:31 crc 
kubenswrapper[4813]: I0219 20:05:31.407982 4813 scope.go:117] "RemoveContainer" containerID="c5ad4b652219a5a2c769519cd60ad08e92b1309893f1bb32935c2a23f7bc230a" Feb 19 20:05:31 crc kubenswrapper[4813]: I0219 20:05:31.408625 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:05:31 crc kubenswrapper[4813]: E0219 20:05:31.409039 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:05:34 crc kubenswrapper[4813]: I0219 20:05:34.249586 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-4kvjs"] Feb 19 20:05:34 crc kubenswrapper[4813]: I0219 20:05:34.250336 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" podUID="82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6" containerName="octavia-amphora-httpd" containerID="cri-o://9cf4d63a913d7cab3db0c8206dd662290298e6784255e2f210442318ad8c093a" gracePeriod=30 Feb 19 20:05:34 crc kubenswrapper[4813]: I0219 20:05:34.437316 4813 generic.go:334] "Generic (PLEG): container finished" podID="82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6" containerID="9cf4d63a913d7cab3db0c8206dd662290298e6784255e2f210442318ad8c093a" exitCode=0 Feb 19 20:05:34 crc kubenswrapper[4813]: I0219 20:05:34.437364 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" event={"ID":"82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6","Type":"ContainerDied","Data":"9cf4d63a913d7cab3db0c8206dd662290298e6784255e2f210442318ad8c093a"} Feb 19 20:05:34 crc kubenswrapper[4813]: 
I0219 20:05:34.839240 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" Feb 19 20:05:34 crc kubenswrapper[4813]: I0219 20:05:34.915419 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6-httpd-config\") pod \"82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6\" (UID: \"82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6\") " Feb 19 20:05:34 crc kubenswrapper[4813]: I0219 20:05:34.915459 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6-amphora-image\") pod \"82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6\" (UID: \"82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6\") " Feb 19 20:05:34 crc kubenswrapper[4813]: I0219 20:05:34.971237 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6" (UID: "82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:05:35 crc kubenswrapper[4813]: I0219 20:05:35.018150 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:05:35 crc kubenswrapper[4813]: I0219 20:05:35.023224 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6" (UID: "82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6"). InnerVolumeSpecName "amphora-image". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:05:35 crc kubenswrapper[4813]: I0219 20:05:35.120003 4813 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6-amphora-image\") on node \"crc\" DevicePath \"\"" Feb 19 20:05:35 crc kubenswrapper[4813]: I0219 20:05:35.451552 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" event={"ID":"82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6","Type":"ContainerDied","Data":"f4bfec41de8295aab28819b0de01050c3bb6e19cc7058a57a5132b89d24e1540"} Feb 19 20:05:35 crc kubenswrapper[4813]: I0219 20:05:35.451652 4813 scope.go:117] "RemoveContainer" containerID="9cf4d63a913d7cab3db0c8206dd662290298e6784255e2f210442318ad8c093a" Feb 19 20:05:35 crc kubenswrapper[4813]: I0219 20:05:35.451677 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-4kvjs" Feb 19 20:05:35 crc kubenswrapper[4813]: I0219 20:05:35.488983 4813 scope.go:117] "RemoveContainer" containerID="1de6c452c45a334abb81176d47fd563291a3497e003a296dc511bacc768d95d8" Feb 19 20:05:35 crc kubenswrapper[4813]: I0219 20:05:35.498886 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-4kvjs"] Feb 19 20:05:35 crc kubenswrapper[4813]: I0219 20:05:35.514987 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-4kvjs"] Feb 19 20:05:37 crc kubenswrapper[4813]: I0219 20:05:37.488998 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6" path="/var/lib/kubelet/pods/82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6/volumes" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.285456 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-dxhzn"] Feb 19 20:05:38 crc 
kubenswrapper[4813]: E0219 20:05:38.285853 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28dca374-6303-4bb8-b64b-a08a87a702cb" containerName="init" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.285874 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="28dca374-6303-4bb8-b64b-a08a87a702cb" containerName="init" Feb 19 20:05:38 crc kubenswrapper[4813]: E0219 20:05:38.285908 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6" containerName="octavia-amphora-httpd" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.285917 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6" containerName="octavia-amphora-httpd" Feb 19 20:05:38 crc kubenswrapper[4813]: E0219 20:05:38.285932 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28dca374-6303-4bb8-b64b-a08a87a702cb" containerName="octavia-db-sync" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.285939 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="28dca374-6303-4bb8-b64b-a08a87a702cb" containerName="octavia-db-sync" Feb 19 20:05:38 crc kubenswrapper[4813]: E0219 20:05:38.285980 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6" containerName="init" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.285989 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6" containerName="init" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.286339 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="82fb7f76-1b17-45c4-81f8-1aa2df4eb3c6" containerName="octavia-amphora-httpd" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.286358 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="28dca374-6303-4bb8-b64b-a08a87a702cb" containerName="octavia-db-sync" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 
20:05:38.287531 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-dxhzn" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.291941 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.298598 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-dxhzn"] Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.396674 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/309fde8b-c7d5-47ee-9cf0-157d77af66c5-httpd-config\") pod \"octavia-image-upload-8d4564f8f-dxhzn\" (UID: \"309fde8b-c7d5-47ee-9cf0-157d77af66c5\") " pod="openstack/octavia-image-upload-8d4564f8f-dxhzn" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.396849 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/309fde8b-c7d5-47ee-9cf0-157d77af66c5-amphora-image\") pod \"octavia-image-upload-8d4564f8f-dxhzn\" (UID: \"309fde8b-c7d5-47ee-9cf0-157d77af66c5\") " pod="openstack/octavia-image-upload-8d4564f8f-dxhzn" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.499035 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/309fde8b-c7d5-47ee-9cf0-157d77af66c5-httpd-config\") pod \"octavia-image-upload-8d4564f8f-dxhzn\" (UID: \"309fde8b-c7d5-47ee-9cf0-157d77af66c5\") " pod="openstack/octavia-image-upload-8d4564f8f-dxhzn" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.499186 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/309fde8b-c7d5-47ee-9cf0-157d77af66c5-amphora-image\") pod 
\"octavia-image-upload-8d4564f8f-dxhzn\" (UID: \"309fde8b-c7d5-47ee-9cf0-157d77af66c5\") " pod="openstack/octavia-image-upload-8d4564f8f-dxhzn" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.500008 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/309fde8b-c7d5-47ee-9cf0-157d77af66c5-amphora-image\") pod \"octavia-image-upload-8d4564f8f-dxhzn\" (UID: \"309fde8b-c7d5-47ee-9cf0-157d77af66c5\") " pod="openstack/octavia-image-upload-8d4564f8f-dxhzn" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.507085 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/309fde8b-c7d5-47ee-9cf0-157d77af66c5-httpd-config\") pod \"octavia-image-upload-8d4564f8f-dxhzn\" (UID: \"309fde8b-c7d5-47ee-9cf0-157d77af66c5\") " pod="openstack/octavia-image-upload-8d4564f8f-dxhzn" Feb 19 20:05:38 crc kubenswrapper[4813]: I0219 20:05:38.654801 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-8d4564f8f-dxhzn" Feb 19 20:05:39 crc kubenswrapper[4813]: I0219 20:05:39.150480 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-8d4564f8f-dxhzn"] Feb 19 20:05:39 crc kubenswrapper[4813]: I0219 20:05:39.488726 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-dxhzn" event={"ID":"309fde8b-c7d5-47ee-9cf0-157d77af66c5","Type":"ContainerStarted","Data":"3fd75b3b5fc7d8b85e42ed099946b61466949397a28eed7d80269b7fe2a86578"} Feb 19 20:05:40 crc kubenswrapper[4813]: I0219 20:05:40.500426 4813 generic.go:334] "Generic (PLEG): container finished" podID="309fde8b-c7d5-47ee-9cf0-157d77af66c5" containerID="b1258b38467d044f33ec6f3f3e81c6d7fa638559ac49784d6f8cda0e1a80f289" exitCode=0 Feb 19 20:05:40 crc kubenswrapper[4813]: I0219 20:05:40.500660 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-dxhzn" event={"ID":"309fde8b-c7d5-47ee-9cf0-157d77af66c5","Type":"ContainerDied","Data":"b1258b38467d044f33ec6f3f3e81c6d7fa638559ac49784d6f8cda0e1a80f289"} Feb 19 20:05:41 crc kubenswrapper[4813]: I0219 20:05:41.538284 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-8d4564f8f-dxhzn" event={"ID":"309fde8b-c7d5-47ee-9cf0-157d77af66c5","Type":"ContainerStarted","Data":"475037aecf94c3126a2de259266ec4ccb4b6c005258ec576731f326586a131bc"} Feb 19 20:05:41 crc kubenswrapper[4813]: I0219 20:05:41.572757 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-8d4564f8f-dxhzn" podStartSLOduration=3.150327374 podStartE2EDuration="3.572738033s" podCreationTimestamp="2026-02-19 20:05:38 +0000 UTC" firstStartedPulling="2026-02-19 20:05:39.169181477 +0000 UTC m=+5758.394622018" lastFinishedPulling="2026-02-19 20:05:39.591592146 +0000 UTC m=+5758.817032677" observedRunningTime="2026-02-19 20:05:41.554422147 +0000 
UTC m=+5760.779862688" watchObservedRunningTime="2026-02-19 20:05:41.572738033 +0000 UTC m=+5760.798178574" Feb 19 20:05:43 crc kubenswrapper[4813]: I0219 20:05:43.472288 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:05:43 crc kubenswrapper[4813]: E0219 20:05:43.472883 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:05:56 crc kubenswrapper[4813]: I0219 20:05:56.472016 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:05:56 crc kubenswrapper[4813]: E0219 20:05:56.472818 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:06:02 crc kubenswrapper[4813]: I0219 20:06:02.976371 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-4z8pw"] Feb 19 20:06:02 crc kubenswrapper[4813]: I0219 20:06:02.978965 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:02 crc kubenswrapper[4813]: I0219 20:06:02.981892 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Feb 19 20:06:02 crc kubenswrapper[4813]: I0219 20:06:02.981993 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Feb 19 20:06:02 crc kubenswrapper[4813]: I0219 20:06:02.982056 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Feb 19 20:06:02 crc kubenswrapper[4813]: I0219 20:06:02.988329 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-4z8pw"] Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.064592 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/21cec241-a549-4faf-815f-73bc56483ed2-config-data-merged\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.064713 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21cec241-a549-4faf-815f-73bc56483ed2-combined-ca-bundle\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.064754 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/21cec241-a549-4faf-815f-73bc56483ed2-amphora-certs\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc 
kubenswrapper[4813]: I0219 20:06:03.064839 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/21cec241-a549-4faf-815f-73bc56483ed2-hm-ports\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.064979 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21cec241-a549-4faf-815f-73bc56483ed2-scripts\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.065082 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21cec241-a549-4faf-815f-73bc56483ed2-config-data\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.168008 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/21cec241-a549-4faf-815f-73bc56483ed2-config-data-merged\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.168097 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21cec241-a549-4faf-815f-73bc56483ed2-combined-ca-bundle\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 
20:06:03.168127 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/21cec241-a549-4faf-815f-73bc56483ed2-amphora-certs\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.168174 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/21cec241-a549-4faf-815f-73bc56483ed2-hm-ports\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.168219 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21cec241-a549-4faf-815f-73bc56483ed2-scripts\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.168256 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21cec241-a549-4faf-815f-73bc56483ed2-config-data\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.168741 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/21cec241-a549-4faf-815f-73bc56483ed2-config-data-merged\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.170181 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" 
(UniqueName: \"kubernetes.io/configmap/21cec241-a549-4faf-815f-73bc56483ed2-hm-ports\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.173811 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21cec241-a549-4faf-815f-73bc56483ed2-combined-ca-bundle\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.174720 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21cec241-a549-4faf-815f-73bc56483ed2-config-data\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.194467 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/21cec241-a549-4faf-815f-73bc56483ed2-amphora-certs\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.196459 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21cec241-a549-4faf-815f-73bc56483ed2-scripts\") pod \"octavia-healthmanager-4z8pw\" (UID: \"21cec241-a549-4faf-815f-73bc56483ed2\") " pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:03 crc kubenswrapper[4813]: I0219 20:06:03.297482 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:04 crc kubenswrapper[4813]: W0219 20:06:04.014234 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21cec241_a549_4faf_815f_73bc56483ed2.slice/crio-f221761cf4a3adc5ce8cdad5a8b95cf9801c66883cfb438d04e309aa343a18b0 WatchSource:0}: Error finding container f221761cf4a3adc5ce8cdad5a8b95cf9801c66883cfb438d04e309aa343a18b0: Status 404 returned error can't find the container with id f221761cf4a3adc5ce8cdad5a8b95cf9801c66883cfb438d04e309aa343a18b0 Feb 19 20:06:04 crc kubenswrapper[4813]: I0219 20:06:04.018078 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-4z8pw"] Feb 19 20:06:04 crc kubenswrapper[4813]: I0219 20:06:04.741722 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4z8pw" event={"ID":"21cec241-a549-4faf-815f-73bc56483ed2","Type":"ContainerStarted","Data":"093003858c2e1158fe46413cc86be6d0a3a02493a6d775d5f1ce6cb1e05fe6c2"} Feb 19 20:06:04 crc kubenswrapper[4813]: I0219 20:06:04.742094 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4z8pw" event={"ID":"21cec241-a549-4faf-815f-73bc56483ed2","Type":"ContainerStarted","Data":"f221761cf4a3adc5ce8cdad5a8b95cf9801c66883cfb438d04e309aa343a18b0"} Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.232517 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-zvjnv"] Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.234886 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.237165 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.243440 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-zvjnv"] Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.265527 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.309369 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30418661-00be-4784-aa85-430abf02af27-scripts\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.309437 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/30418661-00be-4784-aa85-430abf02af27-config-data-merged\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.309536 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/30418661-00be-4784-aa85-430abf02af27-hm-ports\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.309556 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/30418661-00be-4784-aa85-430abf02af27-config-data\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.309579 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/30418661-00be-4784-aa85-430abf02af27-amphora-certs\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.309992 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30418661-00be-4784-aa85-430abf02af27-combined-ca-bundle\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.411410 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30418661-00be-4784-aa85-430abf02af27-combined-ca-bundle\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.411498 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30418661-00be-4784-aa85-430abf02af27-scripts\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.411560 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/30418661-00be-4784-aa85-430abf02af27-config-data-merged\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.411612 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/30418661-00be-4784-aa85-430abf02af27-hm-ports\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.411641 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30418661-00be-4784-aa85-430abf02af27-config-data\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.411669 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/30418661-00be-4784-aa85-430abf02af27-amphora-certs\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.412196 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/30418661-00be-4784-aa85-430abf02af27-config-data-merged\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.413115 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/30418661-00be-4784-aa85-430abf02af27-hm-ports\") pod \"octavia-housekeeping-zvjnv\" 
(UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.417659 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30418661-00be-4784-aa85-430abf02af27-config-data\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.418417 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30418661-00be-4784-aa85-430abf02af27-combined-ca-bundle\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.419653 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/30418661-00be-4784-aa85-430abf02af27-amphora-certs\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.423488 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30418661-00be-4784-aa85-430abf02af27-scripts\") pod \"octavia-housekeeping-zvjnv\" (UID: \"30418661-00be-4784-aa85-430abf02af27\") " pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:05 crc kubenswrapper[4813]: I0219 20:06:05.581502 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.165344 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-zvjnv"] Feb 19 20:06:06 crc kubenswrapper[4813]: W0219 20:06:06.231444 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30418661_00be_4784_aa85_430abf02af27.slice/crio-88a287e35e4220c4dde4172f5e8ee29d248107db6cc4659d141dfe641bab4ed6 WatchSource:0}: Error finding container 88a287e35e4220c4dde4172f5e8ee29d248107db6cc4659d141dfe641bab4ed6: Status 404 returned error can't find the container with id 88a287e35e4220c4dde4172f5e8ee29d248107db6cc4659d141dfe641bab4ed6 Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.319185 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-xqwkj"] Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.321107 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.324713 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.325770 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.333141 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-xqwkj"] Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.435802 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/66245c5a-bfae-4923-b9d2-a7e0614ac030-hm-ports\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.435896 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66245c5a-bfae-4923-b9d2-a7e0614ac030-config-data\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.435938 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/66245c5a-bfae-4923-b9d2-a7e0614ac030-config-data-merged\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.436008 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/66245c5a-bfae-4923-b9d2-a7e0614ac030-amphora-certs\") pod 
\"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.436038 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66245c5a-bfae-4923-b9d2-a7e0614ac030-scripts\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.436067 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66245c5a-bfae-4923-b9d2-a7e0614ac030-combined-ca-bundle\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.537827 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/66245c5a-bfae-4923-b9d2-a7e0614ac030-amphora-certs\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.537875 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66245c5a-bfae-4923-b9d2-a7e0614ac030-scripts\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.537909 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66245c5a-bfae-4923-b9d2-a7e0614ac030-combined-ca-bundle\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 
20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.538049 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/66245c5a-bfae-4923-b9d2-a7e0614ac030-hm-ports\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.538116 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66245c5a-bfae-4923-b9d2-a7e0614ac030-config-data\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.538152 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/66245c5a-bfae-4923-b9d2-a7e0614ac030-config-data-merged\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.538646 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/66245c5a-bfae-4923-b9d2-a7e0614ac030-config-data-merged\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.539814 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/66245c5a-bfae-4923-b9d2-a7e0614ac030-hm-ports\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.545406 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/66245c5a-bfae-4923-b9d2-a7e0614ac030-scripts\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.546492 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/66245c5a-bfae-4923-b9d2-a7e0614ac030-amphora-certs\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.547390 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66245c5a-bfae-4923-b9d2-a7e0614ac030-combined-ca-bundle\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.548729 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66245c5a-bfae-4923-b9d2-a7e0614ac030-config-data\") pod \"octavia-worker-xqwkj\" (UID: \"66245c5a-bfae-4923-b9d2-a7e0614ac030\") " pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.691061 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.768578 4813 generic.go:334] "Generic (PLEG): container finished" podID="21cec241-a549-4faf-815f-73bc56483ed2" containerID="093003858c2e1158fe46413cc86be6d0a3a02493a6d775d5f1ce6cb1e05fe6c2" exitCode=0 Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.768665 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4z8pw" event={"ID":"21cec241-a549-4faf-815f-73bc56483ed2","Type":"ContainerDied","Data":"093003858c2e1158fe46413cc86be6d0a3a02493a6d775d5f1ce6cb1e05fe6c2"} Feb 19 20:06:06 crc kubenswrapper[4813]: I0219 20:06:06.770521 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-zvjnv" event={"ID":"30418661-00be-4784-aa85-430abf02af27","Type":"ContainerStarted","Data":"88a287e35e4220c4dde4172f5e8ee29d248107db6cc4659d141dfe641bab4ed6"} Feb 19 20:06:07 crc kubenswrapper[4813]: I0219 20:06:07.284146 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-xqwkj"] Feb 19 20:06:07 crc kubenswrapper[4813]: I0219 20:06:07.805074 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-4z8pw" event={"ID":"21cec241-a549-4faf-815f-73bc56483ed2","Type":"ContainerStarted","Data":"be98b957db816696742cb35a290966eb1d280c173ede05079207b069e6f19953"} Feb 19 20:06:07 crc kubenswrapper[4813]: I0219 20:06:07.806758 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:08 crc kubenswrapper[4813]: I0219 20:06:08.062440 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-4z8pw" podStartSLOduration=6.062422532 podStartE2EDuration="6.062422532s" podCreationTimestamp="2026-02-19 20:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 20:06:07.838198474 +0000 UTC m=+5787.063639015" watchObservedRunningTime="2026-02-19 20:06:08.062422532 +0000 UTC m=+5787.287863073" Feb 19 20:06:08 crc kubenswrapper[4813]: I0219 20:06:08.063649 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-4z8pw"] Feb 19 20:06:08 crc kubenswrapper[4813]: I0219 20:06:08.472524 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:06:08 crc kubenswrapper[4813]: E0219 20:06:08.473155 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:06:08 crc kubenswrapper[4813]: I0219 20:06:08.832507 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-zvjnv" event={"ID":"30418661-00be-4784-aa85-430abf02af27","Type":"ContainerStarted","Data":"6756b5fa7aa512013ec771ddfe467c61d063f471de5ad70ef3a7e65eec5ce566"} Feb 19 20:06:08 crc kubenswrapper[4813]: I0219 20:06:08.842368 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-xqwkj" event={"ID":"66245c5a-bfae-4923-b9d2-a7e0614ac030","Type":"ContainerStarted","Data":"6e5dc7110590cf84edaf43fc3f92f8b59fdebf3423bdb9befcb3e5b4fa70cf05"} Feb 19 20:06:09 crc kubenswrapper[4813]: I0219 20:06:09.853353 4813 generic.go:334] "Generic (PLEG): container finished" podID="30418661-00be-4784-aa85-430abf02af27" containerID="6756b5fa7aa512013ec771ddfe467c61d063f471de5ad70ef3a7e65eec5ce566" exitCode=0 Feb 19 20:06:09 crc kubenswrapper[4813]: I0219 20:06:09.853457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/octavia-housekeeping-zvjnv" event={"ID":"30418661-00be-4784-aa85-430abf02af27","Type":"ContainerDied","Data":"6756b5fa7aa512013ec771ddfe467c61d063f471de5ad70ef3a7e65eec5ce566"} Feb 19 20:06:10 crc kubenswrapper[4813]: I0219 20:06:10.865886 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-zvjnv" event={"ID":"30418661-00be-4784-aa85-430abf02af27","Type":"ContainerStarted","Data":"e169692e49c5addbae60b6bb37c7a85a8749203ea9c03116d48d6a002ea2fe06"} Feb 19 20:06:10 crc kubenswrapper[4813]: I0219 20:06:10.866593 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:10 crc kubenswrapper[4813]: I0219 20:06:10.868240 4813 generic.go:334] "Generic (PLEG): container finished" podID="66245c5a-bfae-4923-b9d2-a7e0614ac030" containerID="d213b252366a481bf6e246ead07f64ea38f7b8d8c57d1308e1126208328aea59" exitCode=0 Feb 19 20:06:10 crc kubenswrapper[4813]: I0219 20:06:10.868293 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-xqwkj" event={"ID":"66245c5a-bfae-4923-b9d2-a7e0614ac030","Type":"ContainerDied","Data":"d213b252366a481bf6e246ead07f64ea38f7b8d8c57d1308e1126208328aea59"} Feb 19 20:06:10 crc kubenswrapper[4813]: I0219 20:06:10.909887 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-zvjnv" podStartSLOduration=4.011640652 podStartE2EDuration="5.909864963s" podCreationTimestamp="2026-02-19 20:06:05 +0000 UTC" firstStartedPulling="2026-02-19 20:06:06.235356413 +0000 UTC m=+5785.460796954" lastFinishedPulling="2026-02-19 20:06:08.133580724 +0000 UTC m=+5787.359021265" observedRunningTime="2026-02-19 20:06:10.89100375 +0000 UTC m=+5790.116444291" watchObservedRunningTime="2026-02-19 20:06:10.909864963 +0000 UTC m=+5790.135305524" Feb 19 20:06:11 crc kubenswrapper[4813]: I0219 20:06:11.884790 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/octavia-worker-xqwkj" event={"ID":"66245c5a-bfae-4923-b9d2-a7e0614ac030","Type":"ContainerStarted","Data":"c878f7f716b90dbf9ab5375bee847b971c303619e73ebfe0b489bf077c86f898"} Feb 19 20:06:11 crc kubenswrapper[4813]: I0219 20:06:11.910899 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-xqwkj" podStartSLOduration=4.436905251 podStartE2EDuration="5.910880135s" podCreationTimestamp="2026-02-19 20:06:06 +0000 UTC" firstStartedPulling="2026-02-19 20:06:08.127874148 +0000 UTC m=+5787.353314689" lastFinishedPulling="2026-02-19 20:06:09.601849042 +0000 UTC m=+5788.827289573" observedRunningTime="2026-02-19 20:06:11.908373637 +0000 UTC m=+5791.133814178" watchObservedRunningTime="2026-02-19 20:06:11.910880135 +0000 UTC m=+5791.136320676" Feb 19 20:06:12 crc kubenswrapper[4813]: I0219 20:06:12.893935 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:18 crc kubenswrapper[4813]: I0219 20:06:18.345610 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-4z8pw" Feb 19 20:06:20 crc kubenswrapper[4813]: I0219 20:06:20.617125 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-zvjnv" Feb 19 20:06:21 crc kubenswrapper[4813]: I0219 20:06:21.722299 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-xqwkj" Feb 19 20:06:22 crc kubenswrapper[4813]: I0219 20:06:22.472363 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:06:22 crc kubenswrapper[4813]: E0219 20:06:22.472898 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.359207 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-64447d6cd5-7c8jn"]
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.371513 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.381511 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-v8dcj"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.381562 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.381713 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.381868 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.387884 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64447d6cd5-7c8jn"]
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.402973 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.403215 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4344547a-ae09-4f86-8196-ac453357cd15" containerName="glance-log" containerID="cri-o://df33cc9c2df9419e6f2e7cb6f3232c991442063e273a982bf39590502378c5ed" gracePeriod=30
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.403362 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4344547a-ae09-4f86-8196-ac453357cd15" containerName="glance-httpd" containerID="cri-o://3e5ffc7cde1221dbcd57349e0adeaa0ed53928dddd6b35b1cdc6df2c70188ca1" gracePeriod=30
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.461891 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx4zl\" (UniqueName: \"kubernetes.io/projected/1b44a06a-2700-4b32-9244-27f74d62d986-kube-api-access-xx4zl\") pod \"horizon-64447d6cd5-7c8jn\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.462004 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b44a06a-2700-4b32-9244-27f74d62d986-scripts\") pod \"horizon-64447d6cd5-7c8jn\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.462055 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b44a06a-2700-4b32-9244-27f74d62d986-logs\") pod \"horizon-64447d6cd5-7c8jn\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.462169 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b44a06a-2700-4b32-9244-27f74d62d986-config-data\") pod \"horizon-64447d6cd5-7c8jn\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.462232 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1b44a06a-2700-4b32-9244-27f74d62d986-horizon-secret-key\") pod \"horizon-64447d6cd5-7c8jn\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.466198 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.466440 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0dd6c088-3815-44be-863c-d72d71a2cfa5" containerName="glance-log" containerID="cri-o://ef77f4d11fa940a3768d863b7f292ecade1db28e65768f0b337515fc1810981c" gracePeriod=30
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.466906 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0dd6c088-3815-44be-863c-d72d71a2cfa5" containerName="glance-httpd" containerID="cri-o://c3531ceb1e4ac87632a3dcd2f54be716aa24b8e2a285d6f679b54e3f2b99bdfe" gracePeriod=30
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.502576 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-74b5489f4f-lwvmc"]
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.505503 4813 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.520352 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74b5489f4f-lwvmc"]
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.564401 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b44a06a-2700-4b32-9244-27f74d62d986-config-data\") pod \"horizon-64447d6cd5-7c8jn\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.564470 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1b44a06a-2700-4b32-9244-27f74d62d986-horizon-secret-key\") pod \"horizon-64447d6cd5-7c8jn\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.564530 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx4zl\" (UniqueName: \"kubernetes.io/projected/1b44a06a-2700-4b32-9244-27f74d62d986-kube-api-access-xx4zl\") pod \"horizon-64447d6cd5-7c8jn\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.564566 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b44a06a-2700-4b32-9244-27f74d62d986-scripts\") pod \"horizon-64447d6cd5-7c8jn\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.564594 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b44a06a-2700-4b32-9244-27f74d62d986-logs\") pod \"horizon-64447d6cd5-7c8jn\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.565061 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b44a06a-2700-4b32-9244-27f74d62d986-logs\") pod \"horizon-64447d6cd5-7c8jn\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.566486 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b44a06a-2700-4b32-9244-27f74d62d986-scripts\") pod \"horizon-64447d6cd5-7c8jn\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.574035 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1b44a06a-2700-4b32-9244-27f74d62d986-horizon-secret-key\") pod \"horizon-64447d6cd5-7c8jn\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.575104 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b44a06a-2700-4b32-9244-27f74d62d986-config-data\") pod \"horizon-64447d6cd5-7c8jn\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.585153 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx4zl\" (UniqueName: \"kubernetes.io/projected/1b44a06a-2700-4b32-9244-27f74d62d986-kube-api-access-xx4zl\") pod \"horizon-64447d6cd5-7c8jn\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.666446 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/acf95000-e303-4f53-a73b-8692e75ecf6b-horizon-secret-key\") pod \"horizon-74b5489f4f-lwvmc\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") " pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.666743 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acf95000-e303-4f53-a73b-8692e75ecf6b-scripts\") pod \"horizon-74b5489f4f-lwvmc\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") " pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.666849 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx588\" (UniqueName: \"kubernetes.io/projected/acf95000-e303-4f53-a73b-8692e75ecf6b-kube-api-access-nx588\") pod \"horizon-74b5489f4f-lwvmc\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") " pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.666890 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acf95000-e303-4f53-a73b-8692e75ecf6b-config-data\") pod \"horizon-74b5489f4f-lwvmc\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") " pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.667050 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acf95000-e303-4f53-a73b-8692e75ecf6b-logs\") pod \"horizon-74b5489f4f-lwvmc\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") " pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]:
I0219 20:06:30.698423 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64447d6cd5-7c8jn"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.768685 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acf95000-e303-4f53-a73b-8692e75ecf6b-logs\") pod \"horizon-74b5489f4f-lwvmc\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") " pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.768802 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/acf95000-e303-4f53-a73b-8692e75ecf6b-horizon-secret-key\") pod \"horizon-74b5489f4f-lwvmc\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") " pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.768837 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acf95000-e303-4f53-a73b-8692e75ecf6b-scripts\") pod \"horizon-74b5489f4f-lwvmc\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") " pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.768871 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx588\" (UniqueName: \"kubernetes.io/projected/acf95000-e303-4f53-a73b-8692e75ecf6b-kube-api-access-nx588\") pod \"horizon-74b5489f4f-lwvmc\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") " pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.768889 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acf95000-e303-4f53-a73b-8692e75ecf6b-config-data\") " pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.770233 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acf95000-e303-4f53-a73b-8692e75ecf6b-config-data\") pod \"horizon-74b5489f4f-lwvmc\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") " pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.772559 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acf95000-e303-4f53-a73b-8692e75ecf6b-logs\") pod \"horizon-74b5489f4f-lwvmc\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") " pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.773492 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acf95000-e303-4f53-a73b-8692e75ecf6b-scripts\") pod \"horizon-74b5489f4f-lwvmc\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") " pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.775850 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/acf95000-e303-4f53-a73b-8692e75ecf6b-horizon-secret-key\") pod \"horizon-74b5489f4f-lwvmc\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") " pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.790838 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx588\" (UniqueName: \"kubernetes.io/projected/acf95000-e303-4f53-a73b-8692e75ecf6b-kube-api-access-nx588\") pod \"horizon-74b5489f4f-lwvmc\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") " pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:30 crc kubenswrapper[4813]: I0219 20:06:30.963829 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.023235 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64447d6cd5-7c8jn"]
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.064421 4813 generic.go:334] "Generic (PLEG): container finished" podID="0dd6c088-3815-44be-863c-d72d71a2cfa5" containerID="ef77f4d11fa940a3768d863b7f292ecade1db28e65768f0b337515fc1810981c" exitCode=143
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.064474 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dd6c088-3815-44be-863c-d72d71a2cfa5","Type":"ContainerDied","Data":"ef77f4d11fa940a3768d863b7f292ecade1db28e65768f0b337515fc1810981c"}
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.066575 4813 generic.go:334] "Generic (PLEG): container finished" podID="4344547a-ae09-4f86-8196-ac453357cd15" containerID="df33cc9c2df9419e6f2e7cb6f3232c991442063e273a982bf39590502378c5ed" exitCode=143
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.066626 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c79476fc-m49fs"]
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.068504 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4344547a-ae09-4f86-8196-ac453357cd15","Type":"ContainerDied","Data":"df33cc9c2df9419e6f2e7cb6f3232c991442063e273a982bf39590502378c5ed"}
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.068612 4813 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.078733 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c79476fc-m49fs"]
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.184916 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3f864db-c1f6-40b4-895c-347947a296e5-config-data\") pod \"horizon-5c79476fc-m49fs\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.185005 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3f864db-c1f6-40b4-895c-347947a296e5-logs\") pod \"horizon-5c79476fc-m49fs\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.185059 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q76pf\" (UniqueName: \"kubernetes.io/projected/e3f864db-c1f6-40b4-895c-347947a296e5-kube-api-access-q76pf\") pod \"horizon-5c79476fc-m49fs\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.185211 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3f864db-c1f6-40b4-895c-347947a296e5-horizon-secret-key\") pod \"horizon-5c79476fc-m49fs\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.185252 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3f864db-c1f6-40b4-895c-347947a296e5-scripts\") pod \"horizon-5c79476fc-m49fs\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.269037 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64447d6cd5-7c8jn"]
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.286536 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3f864db-c1f6-40b4-895c-347947a296e5-horizon-secret-key\") pod \"horizon-5c79476fc-m49fs\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.286611 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3f864db-c1f6-40b4-895c-347947a296e5-scripts\") pod \"horizon-5c79476fc-m49fs\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.286666 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3f864db-c1f6-40b4-895c-347947a296e5-config-data\") pod \"horizon-5c79476fc-m49fs\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.286698 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3f864db-c1f6-40b4-895c-347947a296e5-logs\") pod \"horizon-5c79476fc-m49fs\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.286745 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q76pf\" (UniqueName: \"kubernetes.io/projected/e3f864db-c1f6-40b4-895c-347947a296e5-kube-api-access-q76pf\") pod \"horizon-5c79476fc-m49fs\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.287922 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3f864db-c1f6-40b4-895c-347947a296e5-scripts\") pod \"horizon-5c79476fc-m49fs\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.288217 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3f864db-c1f6-40b4-895c-347947a296e5-logs\") pod \"horizon-5c79476fc-m49fs\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.288826 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3f864db-c1f6-40b4-895c-347947a296e5-config-data\") pod \"horizon-5c79476fc-m49fs\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.298429 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3f864db-c1f6-40b4-895c-347947a296e5-horizon-secret-key\") pod \"horizon-5c79476fc-m49fs\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.304990 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q76pf\" (UniqueName: \"kubernetes.io/projected/e3f864db-c1f6-40b4-895c-347947a296e5-kube-api-access-q76pf\") pod \"horizon-5c79476fc-m49fs\" (UID:
\"e3f864db-c1f6-40b4-895c-347947a296e5\") " pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.402973 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c79476fc-m49fs"
Feb 19 20:06:31 crc kubenswrapper[4813]: I0219 20:06:31.882095 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74b5489f4f-lwvmc"]
Feb 19 20:06:32 crc kubenswrapper[4813]: I0219 20:06:32.145213 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74b5489f4f-lwvmc" event={"ID":"acf95000-e303-4f53-a73b-8692e75ecf6b","Type":"ContainerStarted","Data":"5450a80407edf00d9f27ff702d0b9004f728497ae2c3e7acdcf4bf7ffc26016b"}
Feb 19 20:06:32 crc kubenswrapper[4813]: I0219 20:06:32.157153 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64447d6cd5-7c8jn" event={"ID":"1b44a06a-2700-4b32-9244-27f74d62d986","Type":"ContainerStarted","Data":"20cf785b7979e0bc2e50c0058ef3bb18f7a3f0e1acb4c0a14082e899bf5d3eae"}
Feb 19 20:06:32 crc kubenswrapper[4813]: I0219 20:06:32.518666 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c79476fc-m49fs"]
Feb 19 20:06:32 crc kubenswrapper[4813]: W0219 20:06:32.534826 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3f864db_c1f6_40b4_895c_347947a296e5.slice/crio-fea652f7add761207c416a05d7772fd0ed81711c070af839783c22b639264dc8 WatchSource:0}: Error finding container fea652f7add761207c416a05d7772fd0ed81711c070af839783c22b639264dc8: Status 404 returned error can't find the container with id fea652f7add761207c416a05d7772fd0ed81711c070af839783c22b639264dc8
Feb 19 20:06:33 crc kubenswrapper[4813]: I0219 20:06:33.168453 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c79476fc-m49fs" event={"ID":"e3f864db-c1f6-40b4-895c-347947a296e5","Type":"ContainerStarted","Data":"fea652f7add761207c416a05d7772fd0ed81711c070af839783c22b639264dc8"}
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.196561 4813 generic.go:334] "Generic (PLEG): container finished" podID="0dd6c088-3815-44be-863c-d72d71a2cfa5" containerID="c3531ceb1e4ac87632a3dcd2f54be716aa24b8e2a285d6f679b54e3f2b99bdfe" exitCode=0
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.196743 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dd6c088-3815-44be-863c-d72d71a2cfa5","Type":"ContainerDied","Data":"c3531ceb1e4ac87632a3dcd2f54be716aa24b8e2a285d6f679b54e3f2b99bdfe"}
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.201413 4813 generic.go:334] "Generic (PLEG): container finished" podID="4344547a-ae09-4f86-8196-ac453357cd15" containerID="3e5ffc7cde1221dbcd57349e0adeaa0ed53928dddd6b35b1cdc6df2c70188ca1" exitCode=0
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.201455 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4344547a-ae09-4f86-8196-ac453357cd15","Type":"ContainerDied","Data":"3e5ffc7cde1221dbcd57349e0adeaa0ed53928dddd6b35b1cdc6df2c70188ca1"}
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.339225 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.355066 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.367036 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-scripts\") pod \"4344547a-ae09-4f86-8196-ac453357cd15\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") "
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.367562 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4344547a-ae09-4f86-8196-ac453357cd15-httpd-run\") pod \"4344547a-ae09-4f86-8196-ac453357cd15\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") "
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.369378 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4344547a-ae09-4f86-8196-ac453357cd15-logs\") pod \"4344547a-ae09-4f86-8196-ac453357cd15\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") "
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.369487 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4344547a-ae09-4f86-8196-ac453357cd15-ceph\") pod \"4344547a-ae09-4f86-8196-ac453357cd15\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") "
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.369622 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-combined-ca-bundle\") pod \"4344547a-ae09-4f86-8196-ac453357cd15\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") "
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.369742 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-config-data\") pod \"4344547a-ae09-4f86-8196-ac453357cd15\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") "
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.369925 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx96n\" (UniqueName: \"kubernetes.io/projected/4344547a-ae09-4f86-8196-ac453357cd15-kube-api-access-bx96n\") pod \"4344547a-ae09-4f86-8196-ac453357cd15\" (UID: \"4344547a-ae09-4f86-8196-ac453357cd15\") "
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.367889 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4344547a-ae09-4f86-8196-ac453357cd15-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4344547a-ae09-4f86-8196-ac453357cd15" (UID: "4344547a-ae09-4f86-8196-ac453357cd15"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.370345 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4344547a-ae09-4f86-8196-ac453357cd15-logs" (OuterVolumeSpecName: "logs") pod "4344547a-ae09-4f86-8196-ac453357cd15" (UID: "4344547a-ae09-4f86-8196-ac453357cd15"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.371023 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4344547a-ae09-4f86-8196-ac453357cd15-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.371261 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4344547a-ae09-4f86-8196-ac453357cd15-logs\") on node \"crc\" DevicePath \"\""
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.375314 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4344547a-ae09-4f86-8196-ac453357cd15-ceph" (OuterVolumeSpecName: "ceph") pod "4344547a-ae09-4f86-8196-ac453357cd15" (UID: "4344547a-ae09-4f86-8196-ac453357cd15"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.375964 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-scripts" (OuterVolumeSpecName: "scripts") pod "4344547a-ae09-4f86-8196-ac453357cd15" (UID: "4344547a-ae09-4f86-8196-ac453357cd15"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.377345 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4344547a-ae09-4f86-8196-ac453357cd15-kube-api-access-bx96n" (OuterVolumeSpecName: "kube-api-access-bx96n") pod "4344547a-ae09-4f86-8196-ac453357cd15" (UID: "4344547a-ae09-4f86-8196-ac453357cd15"). InnerVolumeSpecName "kube-api-access-bx96n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.443211 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4344547a-ae09-4f86-8196-ac453357cd15" (UID: "4344547a-ae09-4f86-8196-ac453357cd15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.467264 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-config-data" (OuterVolumeSpecName: "config-data") pod "4344547a-ae09-4f86-8196-ac453357cd15" (UID: "4344547a-ae09-4f86-8196-ac453357cd15"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.472672 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dd6c088-3815-44be-863c-d72d71a2cfa5-httpd-run\") pod \"0dd6c088-3815-44be-863c-d72d71a2cfa5\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") "
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.472788 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd6c088-3815-44be-863c-d72d71a2cfa5-logs\") pod \"0dd6c088-3815-44be-863c-d72d71a2cfa5\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") "
Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.472829 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpjg6\" (UniqueName: \"kubernetes.io/projected/0dd6c088-3815-44be-863c-d72d71a2cfa5-kube-api-access-jpjg6\") pod \"0dd6c088-3815-44be-863c-d72d71a2cfa5\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") "
Feb 19 20:06:34 crc
kubenswrapper[4813]: I0219 20:06:34.472870 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-scripts\") pod \"0dd6c088-3815-44be-863c-d72d71a2cfa5\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.472901 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-config-data\") pod \"0dd6c088-3815-44be-863c-d72d71a2cfa5\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.472997 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-combined-ca-bundle\") pod \"0dd6c088-3815-44be-863c-d72d71a2cfa5\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.473066 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0dd6c088-3815-44be-863c-d72d71a2cfa5-ceph\") pod \"0dd6c088-3815-44be-863c-d72d71a2cfa5\" (UID: \"0dd6c088-3815-44be-863c-d72d71a2cfa5\") " Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.473150 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dd6c088-3815-44be-863c-d72d71a2cfa5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0dd6c088-3815-44be-863c-d72d71a2cfa5" (UID: "0dd6c088-3815-44be-863c-d72d71a2cfa5"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.473285 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dd6c088-3815-44be-863c-d72d71a2cfa5-logs" (OuterVolumeSpecName: "logs") pod "0dd6c088-3815-44be-863c-d72d71a2cfa5" (UID: "0dd6c088-3815-44be-863c-d72d71a2cfa5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.473751 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx96n\" (UniqueName: \"kubernetes.io/projected/4344547a-ae09-4f86-8196-ac453357cd15-kube-api-access-bx96n\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.473778 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dd6c088-3815-44be-863c-d72d71a2cfa5-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.473791 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd6c088-3815-44be-863c-d72d71a2cfa5-logs\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.473802 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.473814 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4344547a-ae09-4f86-8196-ac453357cd15-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.473825 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.473837 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4344547a-ae09-4f86-8196-ac453357cd15-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.483189 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd6c088-3815-44be-863c-d72d71a2cfa5-ceph" (OuterVolumeSpecName: "ceph") pod "0dd6c088-3815-44be-863c-d72d71a2cfa5" (UID: "0dd6c088-3815-44be-863c-d72d71a2cfa5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.485162 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-scripts" (OuterVolumeSpecName: "scripts") pod "0dd6c088-3815-44be-863c-d72d71a2cfa5" (UID: "0dd6c088-3815-44be-863c-d72d71a2cfa5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.485751 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd6c088-3815-44be-863c-d72d71a2cfa5-kube-api-access-jpjg6" (OuterVolumeSpecName: "kube-api-access-jpjg6") pod "0dd6c088-3815-44be-863c-d72d71a2cfa5" (UID: "0dd6c088-3815-44be-863c-d72d71a2cfa5"). InnerVolumeSpecName "kube-api-access-jpjg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.507917 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0dd6c088-3815-44be-863c-d72d71a2cfa5" (UID: "0dd6c088-3815-44be-863c-d72d71a2cfa5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.540870 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-config-data" (OuterVolumeSpecName: "config-data") pod "0dd6c088-3815-44be-863c-d72d71a2cfa5" (UID: "0dd6c088-3815-44be-863c-d72d71a2cfa5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.575883 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpjg6\" (UniqueName: \"kubernetes.io/projected/0dd6c088-3815-44be-863c-d72d71a2cfa5-kube-api-access-jpjg6\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.576345 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.576358 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.576367 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd6c088-3815-44be-863c-d72d71a2cfa5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:34 crc kubenswrapper[4813]: I0219 20:06:34.576375 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0dd6c088-3815-44be-863c-d72d71a2cfa5-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.213661 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"0dd6c088-3815-44be-863c-d72d71a2cfa5","Type":"ContainerDied","Data":"6e5c5cfb95b3d8b3ad65fdc221a462c3f7b3c5f700087a02075616c71e10809f"} Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.213733 4813 scope.go:117] "RemoveContainer" containerID="c3531ceb1e4ac87632a3dcd2f54be716aa24b8e2a285d6f679b54e3f2b99bdfe" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.213739 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.217352 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4344547a-ae09-4f86-8196-ac453357cd15","Type":"ContainerDied","Data":"9d987d4bacda3f0dd79a752f021f71a883b75e5b61dde5758be67b9323215f19"} Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.217469 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.262594 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.285503 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.323710 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.323793 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.335250 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 20:06:35 crc kubenswrapper[4813]: E0219 20:06:35.336001 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0dd6c088-3815-44be-863c-d72d71a2cfa5" containerName="glance-httpd" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.336030 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd6c088-3815-44be-863c-d72d71a2cfa5" containerName="glance-httpd" Feb 19 20:06:35 crc kubenswrapper[4813]: E0219 20:06:35.336077 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4344547a-ae09-4f86-8196-ac453357cd15" containerName="glance-httpd" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.336089 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4344547a-ae09-4f86-8196-ac453357cd15" containerName="glance-httpd" Feb 19 20:06:35 crc kubenswrapper[4813]: E0219 20:06:35.336104 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd6c088-3815-44be-863c-d72d71a2cfa5" containerName="glance-log" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.336116 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd6c088-3815-44be-863c-d72d71a2cfa5" containerName="glance-log" Feb 19 20:06:35 crc kubenswrapper[4813]: E0219 20:06:35.336154 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4344547a-ae09-4f86-8196-ac453357cd15" containerName="glance-log" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.336162 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4344547a-ae09-4f86-8196-ac453357cd15" containerName="glance-log" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.336417 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4344547a-ae09-4f86-8196-ac453357cd15" containerName="glance-httpd" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.336440 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd6c088-3815-44be-863c-d72d71a2cfa5" containerName="glance-log" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.336461 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4344547a-ae09-4f86-8196-ac453357cd15" 
containerName="glance-log" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.336472 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd6c088-3815-44be-863c-d72d71a2cfa5" containerName="glance-httpd" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.337988 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.345525 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.345772 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tcjbp" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.345911 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.350628 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.361294 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.363014 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.365003 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.377196 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.405101 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce14a753-c8a1-4f0d-8244-21a07ec06064-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.405232 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.405413 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ce14a753-c8a1-4f0d-8244-21a07ec06064-ceph\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.406722 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-logs\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 
20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.407230 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgxs7\" (UniqueName: \"kubernetes.io/projected/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-kube-api-access-jgxs7\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.407329 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce14a753-c8a1-4f0d-8244-21a07ec06064-logs\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.407477 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.407511 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce14a753-c8a1-4f0d-8244-21a07ec06064-scripts\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.407536 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkwff\" (UniqueName: \"kubernetes.io/projected/ce14a753-c8a1-4f0d-8244-21a07ec06064-kube-api-access-xkwff\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " 
pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.407594 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.407644 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce14a753-c8a1-4f0d-8244-21a07ec06064-config-data\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.407678 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.407731 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.407755 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce14a753-c8a1-4f0d-8244-21a07ec06064-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.475050 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:06:35 crc kubenswrapper[4813]: E0219 20:06:35.475280 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.486024 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd6c088-3815-44be-863c-d72d71a2cfa5" path="/var/lib/kubelet/pods/0dd6c088-3815-44be-863c-d72d71a2cfa5/volumes" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.486970 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4344547a-ae09-4f86-8196-ac453357cd15" path="/var/lib/kubelet/pods/4344547a-ae09-4f86-8196-ac453357cd15/volumes" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.509199 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ce14a753-c8a1-4f0d-8244-21a07ec06064-ceph\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.509253 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-logs\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.509276 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgxs7\" (UniqueName: \"kubernetes.io/projected/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-kube-api-access-jgxs7\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.509302 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce14a753-c8a1-4f0d-8244-21a07ec06064-logs\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.509318 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.509342 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce14a753-c8a1-4f0d-8244-21a07ec06064-scripts\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.509361 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkwff\" (UniqueName: \"kubernetes.io/projected/ce14a753-c8a1-4f0d-8244-21a07ec06064-kube-api-access-xkwff\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc 
kubenswrapper[4813]: I0219 20:06:35.509434 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.509462 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce14a753-c8a1-4f0d-8244-21a07ec06064-config-data\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.509482 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.509517 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.509537 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce14a753-c8a1-4f0d-8244-21a07ec06064-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.509589 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce14a753-c8a1-4f0d-8244-21a07ec06064-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.509646 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.510744 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce14a753-c8a1-4f0d-8244-21a07ec06064-logs\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.511112 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-logs\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.512826 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.512993 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/ce14a753-c8a1-4f0d-8244-21a07ec06064-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.515010 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ce14a753-c8a1-4f0d-8244-21a07ec06064-ceph\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.515794 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.517221 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.519427 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.519597 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-ceph\") pod \"glance-default-internal-api-0\" (UID: 
\"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.521896 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce14a753-c8a1-4f0d-8244-21a07ec06064-scripts\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.522665 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce14a753-c8a1-4f0d-8244-21a07ec06064-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.523283 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce14a753-c8a1-4f0d-8244-21a07ec06064-config-data\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.529232 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkwff\" (UniqueName: \"kubernetes.io/projected/ce14a753-c8a1-4f0d-8244-21a07ec06064-kube-api-access-xkwff\") pod \"glance-default-external-api-0\" (UID: \"ce14a753-c8a1-4f0d-8244-21a07ec06064\") " pod="openstack/glance-default-external-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.532192 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgxs7\" (UniqueName: \"kubernetes.io/projected/0c20d9f5-717c-4df2-9013-5d985ac9c6b8-kube-api-access-jgxs7\") pod \"glance-default-internal-api-0\" (UID: \"0c20d9f5-717c-4df2-9013-5d985ac9c6b8\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.664758 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 20:06:35 crc kubenswrapper[4813]: I0219 20:06:35.686901 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 20:06:40 crc kubenswrapper[4813]: I0219 20:06:40.123536 4813 scope.go:117] "RemoveContainer" containerID="ef77f4d11fa940a3768d863b7f292ecade1db28e65768f0b337515fc1810981c" Feb 19 20:06:40 crc kubenswrapper[4813]: I0219 20:06:40.272300 4813 scope.go:117] "RemoveContainer" containerID="3e5ffc7cde1221dbcd57349e0adeaa0ed53928dddd6b35b1cdc6df2c70188ca1" Feb 19 20:06:40 crc kubenswrapper[4813]: I0219 20:06:40.361242 4813 scope.go:117] "RemoveContainer" containerID="df33cc9c2df9419e6f2e7cb6f3232c991442063e273a982bf39590502378c5ed" Feb 19 20:06:40 crc kubenswrapper[4813]: I0219 20:06:40.759742 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 20:06:40 crc kubenswrapper[4813]: W0219 20:06:40.770520 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce14a753_c8a1_4f0d_8244_21a07ec06064.slice/crio-194caf347707126edd9270e429793c32f595f85fdd27c285058807039133d16a WatchSource:0}: Error finding container 194caf347707126edd9270e429793c32f595f85fdd27c285058807039133d16a: Status 404 returned error can't find the container with id 194caf347707126edd9270e429793c32f595f85fdd27c285058807039133d16a Feb 19 20:06:40 crc kubenswrapper[4813]: I0219 20:06:40.869922 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 20:06:40 crc kubenswrapper[4813]: W0219 20:06:40.891135 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c20d9f5_717c_4df2_9013_5d985ac9c6b8.slice/crio-8cfabcfe360b778f8558ad2603f035383f2e73449efa766c00825625cd16d7ea WatchSource:0}: Error finding container 8cfabcfe360b778f8558ad2603f035383f2e73449efa766c00825625cd16d7ea: Status 404 returned error can't find the container with id 8cfabcfe360b778f8558ad2603f035383f2e73449efa766c00825625cd16d7ea Feb 19 20:06:41 crc kubenswrapper[4813]: I0219 20:06:41.331395 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74b5489f4f-lwvmc" event={"ID":"acf95000-e303-4f53-a73b-8692e75ecf6b","Type":"ContainerStarted","Data":"9fe0b129de502003a1b8708aeec3f464ddf040bebabadecd5f191da0cc37958c"} Feb 19 20:06:41 crc kubenswrapper[4813]: I0219 20:06:41.331939 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74b5489f4f-lwvmc" event={"ID":"acf95000-e303-4f53-a73b-8692e75ecf6b","Type":"ContainerStarted","Data":"23bd45923ebdce56cc21dad8cdbeb1cc6890840ebde382429dd600ca68268e2d"} Feb 19 20:06:41 crc kubenswrapper[4813]: I0219 20:06:41.339428 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c79476fc-m49fs" event={"ID":"e3f864db-c1f6-40b4-895c-347947a296e5","Type":"ContainerStarted","Data":"f302b27142a4ceafba12c4275e6f35ba26d4b59ef39663db8027d22a25ce5ece"} Feb 19 20:06:41 crc kubenswrapper[4813]: I0219 20:06:41.339477 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c79476fc-m49fs" event={"ID":"e3f864db-c1f6-40b4-895c-347947a296e5","Type":"ContainerStarted","Data":"45b664f6e38426c32916fbd79b60ef1e78ec6330ec23cc4714186bdf164fe2d0"} Feb 19 20:06:41 crc kubenswrapper[4813]: I0219 20:06:41.343024 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0c20d9f5-717c-4df2-9013-5d985ac9c6b8","Type":"ContainerStarted","Data":"8cfabcfe360b778f8558ad2603f035383f2e73449efa766c00825625cd16d7ea"} Feb 19 20:06:41 crc 
kubenswrapper[4813]: I0219 20:06:41.347011 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce14a753-c8a1-4f0d-8244-21a07ec06064","Type":"ContainerStarted","Data":"194caf347707126edd9270e429793c32f595f85fdd27c285058807039133d16a"} Feb 19 20:06:41 crc kubenswrapper[4813]: I0219 20:06:41.352334 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64447d6cd5-7c8jn" event={"ID":"1b44a06a-2700-4b32-9244-27f74d62d986","Type":"ContainerStarted","Data":"307995718f92acf2e8243dd1ddbf62c4fa1f7caa5595e7fd57225f2d7bd05057"} Feb 19 20:06:41 crc kubenswrapper[4813]: I0219 20:06:41.352416 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64447d6cd5-7c8jn" event={"ID":"1b44a06a-2700-4b32-9244-27f74d62d986","Type":"ContainerStarted","Data":"85f20b5735680107d82b84309837dae7cd6e4914f95e3dd4568567d1f3f06e35"} Feb 19 20:06:41 crc kubenswrapper[4813]: I0219 20:06:41.352476 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-64447d6cd5-7c8jn" podUID="1b44a06a-2700-4b32-9244-27f74d62d986" containerName="horizon-log" containerID="cri-o://85f20b5735680107d82b84309837dae7cd6e4914f95e3dd4568567d1f3f06e35" gracePeriod=30 Feb 19 20:06:41 crc kubenswrapper[4813]: I0219 20:06:41.352524 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-64447d6cd5-7c8jn" podUID="1b44a06a-2700-4b32-9244-27f74d62d986" containerName="horizon" containerID="cri-o://307995718f92acf2e8243dd1ddbf62c4fa1f7caa5595e7fd57225f2d7bd05057" gracePeriod=30 Feb 19 20:06:41 crc kubenswrapper[4813]: I0219 20:06:41.355096 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-74b5489f4f-lwvmc" podStartSLOduration=3.103724328 podStartE2EDuration="11.355077598s" podCreationTimestamp="2026-02-19 20:06:30 +0000 UTC" firstStartedPulling="2026-02-19 20:06:31.917122515 +0000 UTC 
m=+5811.142563056" lastFinishedPulling="2026-02-19 20:06:40.168475785 +0000 UTC m=+5819.393916326" observedRunningTime="2026-02-19 20:06:41.352387755 +0000 UTC m=+5820.577828296" watchObservedRunningTime="2026-02-19 20:06:41.355077598 +0000 UTC m=+5820.580518139" Feb 19 20:06:41 crc kubenswrapper[4813]: I0219 20:06:41.388244 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c79476fc-m49fs" podStartSLOduration=2.6998721530000003 podStartE2EDuration="10.388204963s" podCreationTimestamp="2026-02-19 20:06:31 +0000 UTC" firstStartedPulling="2026-02-19 20:06:32.53945978 +0000 UTC m=+5811.764900321" lastFinishedPulling="2026-02-19 20:06:40.22779259 +0000 UTC m=+5819.453233131" observedRunningTime="2026-02-19 20:06:41.376338237 +0000 UTC m=+5820.601778808" watchObservedRunningTime="2026-02-19 20:06:41.388204963 +0000 UTC m=+5820.613645504" Feb 19 20:06:41 crc kubenswrapper[4813]: I0219 20:06:41.404040 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c79476fc-m49fs" Feb 19 20:06:41 crc kubenswrapper[4813]: I0219 20:06:41.404078 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5c79476fc-m49fs" Feb 19 20:06:41 crc kubenswrapper[4813]: I0219 20:06:41.433233 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-64447d6cd5-7c8jn" podStartSLOduration=2.512512806 podStartE2EDuration="11.433184815s" podCreationTimestamp="2026-02-19 20:06:30 +0000 UTC" firstStartedPulling="2026-02-19 20:06:31.247512597 +0000 UTC m=+5810.472953138" lastFinishedPulling="2026-02-19 20:06:40.168184596 +0000 UTC m=+5819.393625147" observedRunningTime="2026-02-19 20:06:41.424104904 +0000 UTC m=+5820.649545455" watchObservedRunningTime="2026-02-19 20:06:41.433184815 +0000 UTC m=+5820.658625356" Feb 19 20:06:42 crc kubenswrapper[4813]: I0219 20:06:42.366514 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"0c20d9f5-717c-4df2-9013-5d985ac9c6b8","Type":"ContainerStarted","Data":"9301dde154ad201deab0d41dc9f2fdf25140acf5fe67a7dff8f9b438a59d222e"} Feb 19 20:06:42 crc kubenswrapper[4813]: I0219 20:06:42.367124 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0c20d9f5-717c-4df2-9013-5d985ac9c6b8","Type":"ContainerStarted","Data":"c27e063c9a1e8493823e4939f949e34b607b405523121e0e259177958fdae4d7"} Feb 19 20:06:42 crc kubenswrapper[4813]: I0219 20:06:42.370502 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce14a753-c8a1-4f0d-8244-21a07ec06064","Type":"ContainerStarted","Data":"aa6d2618984b05c231509c68fe33d91c42ead34630e67af9d9e5b96ec5cdf4b0"} Feb 19 20:06:42 crc kubenswrapper[4813]: I0219 20:06:42.370543 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce14a753-c8a1-4f0d-8244-21a07ec06064","Type":"ContainerStarted","Data":"a0b3c7eae3990ee240806171cf25dbca3cef39af8468bfb1c29c27a89b9acebf"} Feb 19 20:06:42 crc kubenswrapper[4813]: I0219 20:06:42.393635 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.393618401 podStartE2EDuration="7.393618401s" podCreationTimestamp="2026-02-19 20:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:06:42.392155546 +0000 UTC m=+5821.617596087" watchObservedRunningTime="2026-02-19 20:06:42.393618401 +0000 UTC m=+5821.619058942" Feb 19 20:06:42 crc kubenswrapper[4813]: I0219 20:06:42.417272 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.417250803 podStartE2EDuration="7.417250803s" podCreationTimestamp="2026-02-19 
20:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:06:42.409994818 +0000 UTC m=+5821.635435359" watchObservedRunningTime="2026-02-19 20:06:42.417250803 +0000 UTC m=+5821.642691344" Feb 19 20:06:43 crc kubenswrapper[4813]: I0219 20:06:43.045057 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jk5nd"] Feb 19 20:06:43 crc kubenswrapper[4813]: I0219 20:06:43.058572 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-1eff-account-create-update-lqgmz"] Feb 19 20:06:43 crc kubenswrapper[4813]: I0219 20:06:43.072313 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-1eff-account-create-update-lqgmz"] Feb 19 20:06:43 crc kubenswrapper[4813]: I0219 20:06:43.083616 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jk5nd"] Feb 19 20:06:43 crc kubenswrapper[4813]: I0219 20:06:43.488795 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481fb453-85ee-472b-a0ae-876e114415d6" path="/var/lib/kubelet/pods/481fb453-85ee-472b-a0ae-876e114415d6/volumes" Feb 19 20:06:43 crc kubenswrapper[4813]: I0219 20:06:43.491066 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caaee91d-ca35-47dc-bfd1-6f32edf33360" path="/var/lib/kubelet/pods/caaee91d-ca35-47dc-bfd1-6f32edf33360/volumes" Feb 19 20:06:45 crc kubenswrapper[4813]: I0219 20:06:45.666544 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 20:06:45 crc kubenswrapper[4813]: I0219 20:06:45.667392 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 20:06:45 crc kubenswrapper[4813]: I0219 20:06:45.687711 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-external-api-0" Feb 19 20:06:45 crc kubenswrapper[4813]: I0219 20:06:45.687939 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 20:06:45 crc kubenswrapper[4813]: I0219 20:06:45.704202 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 20:06:45 crc kubenswrapper[4813]: I0219 20:06:45.708039 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 20:06:45 crc kubenswrapper[4813]: I0219 20:06:45.722317 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 20:06:45 crc kubenswrapper[4813]: I0219 20:06:45.741437 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 20:06:46 crc kubenswrapper[4813]: I0219 20:06:46.415484 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 20:06:46 crc kubenswrapper[4813]: I0219 20:06:46.415798 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 20:06:46 crc kubenswrapper[4813]: I0219 20:06:46.415812 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 20:06:46 crc kubenswrapper[4813]: I0219 20:06:46.415822 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 20:06:48 crc kubenswrapper[4813]: I0219 20:06:48.471469 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:06:48 crc kubenswrapper[4813]: E0219 20:06:48.472143 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:06:48 crc kubenswrapper[4813]: I0219 20:06:48.536210 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 20:06:48 crc kubenswrapper[4813]: I0219 20:06:48.798262 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 20:06:49 crc kubenswrapper[4813]: I0219 20:06:49.030621 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-m9bhp"] Feb 19 20:06:49 crc kubenswrapper[4813]: I0219 20:06:49.041510 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-m9bhp"] Feb 19 20:06:49 crc kubenswrapper[4813]: I0219 20:06:49.483081 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11899028-740e-4834-987e-b51a914fd1f5" path="/var/lib/kubelet/pods/11899028-740e-4834-987e-b51a914fd1f5/volumes" Feb 19 20:06:49 crc kubenswrapper[4813]: I0219 20:06:49.583411 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 20:06:50 crc kubenswrapper[4813]: I0219 20:06:50.133053 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 20:06:50 crc kubenswrapper[4813]: I0219 20:06:50.699416 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-64447d6cd5-7c8jn" Feb 19 20:06:50 crc kubenswrapper[4813]: I0219 20:06:50.964044 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-74b5489f4f-lwvmc" Feb 19 20:06:50 crc kubenswrapper[4813]: I0219 20:06:50.964399 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74b5489f4f-lwvmc" Feb 19 20:06:50 crc kubenswrapper[4813]: I0219 20:06:50.966475 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-74b5489f4f-lwvmc" podUID="acf95000-e303-4f53-a73b-8692e75ecf6b" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Feb 19 20:06:51 crc kubenswrapper[4813]: I0219 20:06:51.405582 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c79476fc-m49fs" podUID="e3f864db-c1f6-40b4-895c-347947a296e5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Feb 19 20:06:53 crc kubenswrapper[4813]: I0219 20:06:53.313321 4813 scope.go:117] "RemoveContainer" containerID="3da73ec1fc397237d76cb1e8eddec085e09d798eed3ad8f636f5799498dba3ab" Feb 19 20:06:53 crc kubenswrapper[4813]: I0219 20:06:53.360385 4813 scope.go:117] "RemoveContainer" containerID="9f9f2c28ba1330f397d13573d8ef534e2d49d84c5a6ba4c4c4c8cc4b29439281" Feb 19 20:06:53 crc kubenswrapper[4813]: I0219 20:06:53.413325 4813 scope.go:117] "RemoveContainer" containerID="73e7a62e38d4c1f2ebc8c15fdd37d2456a28355f7878bcc8f1247f15f4774816" Feb 19 20:06:53 crc kubenswrapper[4813]: I0219 20:06:53.439376 4813 scope.go:117] "RemoveContainer" containerID="38ec3f52beb7682eebf064130a6ecb3357856c51414e5ca5008b815dac8adc92" Feb 19 20:06:53 crc kubenswrapper[4813]: I0219 20:06:53.501188 4813 scope.go:117] "RemoveContainer" containerID="c6a10b20d5807d213f6c61f27a2b93e6656f86063772ac1f5eb1a4e98795c2bc" Feb 19 20:06:53 crc kubenswrapper[4813]: I0219 20:06:53.528799 4813 scope.go:117] "RemoveContainer" 
containerID="89127d1134eb23667de45ce98f25fcb57637d284affe6ea05c06c4cf408acdb4" Feb 19 20:06:53 crc kubenswrapper[4813]: I0219 20:06:53.575073 4813 scope.go:117] "RemoveContainer" containerID="f4556a47b76c9f9a397dd7ac0cae211a861b23f84208b1ca0171a7dd87efd356" Feb 19 20:06:53 crc kubenswrapper[4813]: I0219 20:06:53.625556 4813 scope.go:117] "RemoveContainer" containerID="249b1d5ecaf21d9741fcb34412a6e6939b7d01047451d8af5b3521efbe2955d3" Feb 19 20:07:02 crc kubenswrapper[4813]: I0219 20:07:02.471737 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:07:02 crc kubenswrapper[4813]: E0219 20:07:02.472480 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:07:02 crc kubenswrapper[4813]: I0219 20:07:02.740715 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-74b5489f4f-lwvmc" Feb 19 20:07:03 crc kubenswrapper[4813]: I0219 20:07:03.228316 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5c79476fc-m49fs" Feb 19 20:07:04 crc kubenswrapper[4813]: I0219 20:07:04.420083 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-74b5489f4f-lwvmc" Feb 19 20:07:04 crc kubenswrapper[4813]: I0219 20:07:04.992453 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5c79476fc-m49fs" Feb 19 20:07:05 crc kubenswrapper[4813]: I0219 20:07:05.063600 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74b5489f4f-lwvmc"] Feb 19 20:07:05 crc 
kubenswrapper[4813]: I0219 20:07:05.063902 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74b5489f4f-lwvmc" podUID="acf95000-e303-4f53-a73b-8692e75ecf6b" containerName="horizon-log" containerID="cri-o://23bd45923ebdce56cc21dad8cdbeb1cc6890840ebde382429dd600ca68268e2d" gracePeriod=30 Feb 19 20:07:05 crc kubenswrapper[4813]: I0219 20:07:05.064322 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74b5489f4f-lwvmc" podUID="acf95000-e303-4f53-a73b-8692e75ecf6b" containerName="horizon" containerID="cri-o://9fe0b129de502003a1b8708aeec3f464ddf040bebabadecd5f191da0cc37958c" gracePeriod=30 Feb 19 20:07:08 crc kubenswrapper[4813]: I0219 20:07:08.672420 4813 generic.go:334] "Generic (PLEG): container finished" podID="acf95000-e303-4f53-a73b-8692e75ecf6b" containerID="9fe0b129de502003a1b8708aeec3f464ddf040bebabadecd5f191da0cc37958c" exitCode=0 Feb 19 20:07:08 crc kubenswrapper[4813]: I0219 20:07:08.672516 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74b5489f4f-lwvmc" event={"ID":"acf95000-e303-4f53-a73b-8692e75ecf6b","Type":"ContainerDied","Data":"9fe0b129de502003a1b8708aeec3f464ddf040bebabadecd5f191da0cc37958c"} Feb 19 20:07:10 crc kubenswrapper[4813]: I0219 20:07:10.965638 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74b5489f4f-lwvmc" podUID="acf95000-e303-4f53-a73b-8692e75ecf6b" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused" Feb 19 20:07:11 crc kubenswrapper[4813]: I0219 20:07:11.725133 4813 generic.go:334] "Generic (PLEG): container finished" podID="1b44a06a-2700-4b32-9244-27f74d62d986" containerID="307995718f92acf2e8243dd1ddbf62c4fa1f7caa5595e7fd57225f2d7bd05057" exitCode=137 Feb 19 20:07:11 crc kubenswrapper[4813]: I0219 20:07:11.726274 4813 generic.go:334] "Generic (PLEG): 
container finished" podID="1b44a06a-2700-4b32-9244-27f74d62d986" containerID="85f20b5735680107d82b84309837dae7cd6e4914f95e3dd4568567d1f3f06e35" exitCode=137 Feb 19 20:07:11 crc kubenswrapper[4813]: I0219 20:07:11.725166 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64447d6cd5-7c8jn" event={"ID":"1b44a06a-2700-4b32-9244-27f74d62d986","Type":"ContainerDied","Data":"307995718f92acf2e8243dd1ddbf62c4fa1f7caa5595e7fd57225f2d7bd05057"} Feb 19 20:07:11 crc kubenswrapper[4813]: I0219 20:07:11.726341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64447d6cd5-7c8jn" event={"ID":"1b44a06a-2700-4b32-9244-27f74d62d986","Type":"ContainerDied","Data":"85f20b5735680107d82b84309837dae7cd6e4914f95e3dd4568567d1f3f06e35"} Feb 19 20:07:11 crc kubenswrapper[4813]: I0219 20:07:11.860015 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64447d6cd5-7c8jn" Feb 19 20:07:11 crc kubenswrapper[4813]: I0219 20:07:11.975862 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b44a06a-2700-4b32-9244-27f74d62d986-scripts\") pod \"1b44a06a-2700-4b32-9244-27f74d62d986\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " Feb 19 20:07:11 crc kubenswrapper[4813]: I0219 20:07:11.975930 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1b44a06a-2700-4b32-9244-27f74d62d986-horizon-secret-key\") pod \"1b44a06a-2700-4b32-9244-27f74d62d986\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " Feb 19 20:07:11 crc kubenswrapper[4813]: I0219 20:07:11.975985 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b44a06a-2700-4b32-9244-27f74d62d986-logs\") pod \"1b44a06a-2700-4b32-9244-27f74d62d986\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " Feb 19 
20:07:11 crc kubenswrapper[4813]: I0219 20:07:11.976677 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b44a06a-2700-4b32-9244-27f74d62d986-logs" (OuterVolumeSpecName: "logs") pod "1b44a06a-2700-4b32-9244-27f74d62d986" (UID: "1b44a06a-2700-4b32-9244-27f74d62d986"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:07:11 crc kubenswrapper[4813]: I0219 20:07:11.976117 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx4zl\" (UniqueName: \"kubernetes.io/projected/1b44a06a-2700-4b32-9244-27f74d62d986-kube-api-access-xx4zl\") pod \"1b44a06a-2700-4b32-9244-27f74d62d986\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " Feb 19 20:07:11 crc kubenswrapper[4813]: I0219 20:07:11.976906 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b44a06a-2700-4b32-9244-27f74d62d986-config-data\") pod \"1b44a06a-2700-4b32-9244-27f74d62d986\" (UID: \"1b44a06a-2700-4b32-9244-27f74d62d986\") " Feb 19 20:07:11 crc kubenswrapper[4813]: I0219 20:07:11.977754 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b44a06a-2700-4b32-9244-27f74d62d986-logs\") on node \"crc\" DevicePath \"\"" Feb 19 20:07:11 crc kubenswrapper[4813]: I0219 20:07:11.997274 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b44a06a-2700-4b32-9244-27f74d62d986-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1b44a06a-2700-4b32-9244-27f74d62d986" (UID: "1b44a06a-2700-4b32-9244-27f74d62d986"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:07:11 crc kubenswrapper[4813]: I0219 20:07:11.997352 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b44a06a-2700-4b32-9244-27f74d62d986-kube-api-access-xx4zl" (OuterVolumeSpecName: "kube-api-access-xx4zl") pod "1b44a06a-2700-4b32-9244-27f74d62d986" (UID: "1b44a06a-2700-4b32-9244-27f74d62d986"). InnerVolumeSpecName "kube-api-access-xx4zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:07:12 crc kubenswrapper[4813]: I0219 20:07:12.009108 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b44a06a-2700-4b32-9244-27f74d62d986-config-data" (OuterVolumeSpecName: "config-data") pod "1b44a06a-2700-4b32-9244-27f74d62d986" (UID: "1b44a06a-2700-4b32-9244-27f74d62d986"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:07:12 crc kubenswrapper[4813]: I0219 20:07:12.023461 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b44a06a-2700-4b32-9244-27f74d62d986-scripts" (OuterVolumeSpecName: "scripts") pod "1b44a06a-2700-4b32-9244-27f74d62d986" (UID: "1b44a06a-2700-4b32-9244-27f74d62d986"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:07:12 crc kubenswrapper[4813]: I0219 20:07:12.079208 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b44a06a-2700-4b32-9244-27f74d62d986-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:07:12 crc kubenswrapper[4813]: I0219 20:07:12.079254 4813 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1b44a06a-2700-4b32-9244-27f74d62d986-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 20:07:12 crc kubenswrapper[4813]: I0219 20:07:12.079270 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx4zl\" (UniqueName: \"kubernetes.io/projected/1b44a06a-2700-4b32-9244-27f74d62d986-kube-api-access-xx4zl\") on node \"crc\" DevicePath \"\"" Feb 19 20:07:12 crc kubenswrapper[4813]: I0219 20:07:12.079281 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b44a06a-2700-4b32-9244-27f74d62d986-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:07:12 crc kubenswrapper[4813]: I0219 20:07:12.760700 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64447d6cd5-7c8jn" event={"ID":"1b44a06a-2700-4b32-9244-27f74d62d986","Type":"ContainerDied","Data":"20cf785b7979e0bc2e50c0058ef3bb18f7a3f0e1acb4c0a14082e899bf5d3eae"} Feb 19 20:07:12 crc kubenswrapper[4813]: I0219 20:07:12.761084 4813 scope.go:117] "RemoveContainer" containerID="307995718f92acf2e8243dd1ddbf62c4fa1f7caa5595e7fd57225f2d7bd05057" Feb 19 20:07:12 crc kubenswrapper[4813]: I0219 20:07:12.760791 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-64447d6cd5-7c8jn" Feb 19 20:07:12 crc kubenswrapper[4813]: I0219 20:07:12.799043 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64447d6cd5-7c8jn"] Feb 19 20:07:12 crc kubenswrapper[4813]: I0219 20:07:12.813208 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-64447d6cd5-7c8jn"] Feb 19 20:07:12 crc kubenswrapper[4813]: I0219 20:07:12.922229 4813 scope.go:117] "RemoveContainer" containerID="85f20b5735680107d82b84309837dae7cd6e4914f95e3dd4568567d1f3f06e35" Feb 19 20:07:13 crc kubenswrapper[4813]: I0219 20:07:13.471093 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:07:13 crc kubenswrapper[4813]: E0219 20:07:13.471554 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:07:13 crc kubenswrapper[4813]: I0219 20:07:13.487604 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b44a06a-2700-4b32-9244-27f74d62d986" path="/var/lib/kubelet/pods/1b44a06a-2700-4b32-9244-27f74d62d986/volumes" Feb 19 20:07:16 crc kubenswrapper[4813]: I0219 20:07:16.052079 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-a0cf-account-create-update-7ghvw"] Feb 19 20:07:16 crc kubenswrapper[4813]: I0219 20:07:16.064764 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-gsckp"] Feb 19 20:07:16 crc kubenswrapper[4813]: I0219 20:07:16.074838 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-a0cf-account-create-update-7ghvw"] 
Feb 19 20:07:16 crc kubenswrapper[4813]: I0219 20:07:16.083567 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-gsckp"]
Feb 19 20:07:17 crc kubenswrapper[4813]: I0219 20:07:17.492168 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6635e6e1-6786-4efd-8b75-96423fd91219" path="/var/lib/kubelet/pods/6635e6e1-6786-4efd-8b75-96423fd91219/volumes"
Feb 19 20:07:17 crc kubenswrapper[4813]: I0219 20:07:17.493603 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73e60321-561a-4a68-b9c6-d197f26fc6d6" path="/var/lib/kubelet/pods/73e60321-561a-4a68-b9c6-d197f26fc6d6/volumes"
Feb 19 20:07:20 crc kubenswrapper[4813]: I0219 20:07:20.965663 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74b5489f4f-lwvmc" podUID="acf95000-e303-4f53-a73b-8692e75ecf6b" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused"
Feb 19 20:07:25 crc kubenswrapper[4813]: I0219 20:07:25.046401 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-65dzd"]
Feb 19 20:07:25 crc kubenswrapper[4813]: I0219 20:07:25.059811 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-65dzd"]
Feb 19 20:07:25 crc kubenswrapper[4813]: I0219 20:07:25.509418 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca8d65e5-8a02-426f-aab4-b2ee48bfb93b" path="/var/lib/kubelet/pods/ca8d65e5-8a02-426f-aab4-b2ee48bfb93b/volumes"
Feb 19 20:07:26 crc kubenswrapper[4813]: I0219 20:07:26.471643 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241"
Feb 19 20:07:26 crc kubenswrapper[4813]: E0219 20:07:26.471893 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 20:07:30 crc kubenswrapper[4813]: I0219 20:07:30.965746 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74b5489f4f-lwvmc" podUID="acf95000-e303-4f53-a73b-8692e75ecf6b" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.108:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.108:8080: connect: connection refused"
Feb 19 20:07:30 crc kubenswrapper[4813]: I0219 20:07:30.966497 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.544506 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.705198 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx588\" (UniqueName: \"kubernetes.io/projected/acf95000-e303-4f53-a73b-8692e75ecf6b-kube-api-access-nx588\") pod \"acf95000-e303-4f53-a73b-8692e75ecf6b\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") "
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.705246 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acf95000-e303-4f53-a73b-8692e75ecf6b-scripts\") pod \"acf95000-e303-4f53-a73b-8692e75ecf6b\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") "
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.705292 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acf95000-e303-4f53-a73b-8692e75ecf6b-logs\") pod \"acf95000-e303-4f53-a73b-8692e75ecf6b\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") "
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.705512 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/acf95000-e303-4f53-a73b-8692e75ecf6b-horizon-secret-key\") pod \"acf95000-e303-4f53-a73b-8692e75ecf6b\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") "
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.705531 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acf95000-e303-4f53-a73b-8692e75ecf6b-config-data\") pod \"acf95000-e303-4f53-a73b-8692e75ecf6b\" (UID: \"acf95000-e303-4f53-a73b-8692e75ecf6b\") "
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.706385 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acf95000-e303-4f53-a73b-8692e75ecf6b-logs" (OuterVolumeSpecName: "logs") pod "acf95000-e303-4f53-a73b-8692e75ecf6b" (UID: "acf95000-e303-4f53-a73b-8692e75ecf6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.732234 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf95000-e303-4f53-a73b-8692e75ecf6b-kube-api-access-nx588" (OuterVolumeSpecName: "kube-api-access-nx588") pod "acf95000-e303-4f53-a73b-8692e75ecf6b" (UID: "acf95000-e303-4f53-a73b-8692e75ecf6b"). InnerVolumeSpecName "kube-api-access-nx588". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.739123 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acf95000-e303-4f53-a73b-8692e75ecf6b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "acf95000-e303-4f53-a73b-8692e75ecf6b" (UID: "acf95000-e303-4f53-a73b-8692e75ecf6b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.751750 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acf95000-e303-4f53-a73b-8692e75ecf6b-config-data" (OuterVolumeSpecName: "config-data") pod "acf95000-e303-4f53-a73b-8692e75ecf6b" (UID: "acf95000-e303-4f53-a73b-8692e75ecf6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.752291 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acf95000-e303-4f53-a73b-8692e75ecf6b-scripts" (OuterVolumeSpecName: "scripts") pod "acf95000-e303-4f53-a73b-8692e75ecf6b" (UID: "acf95000-e303-4f53-a73b-8692e75ecf6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.807639 4813 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/acf95000-e303-4f53-a73b-8692e75ecf6b-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.808103 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acf95000-e303-4f53-a73b-8692e75ecf6b-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.808195 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx588\" (UniqueName: \"kubernetes.io/projected/acf95000-e303-4f53-a73b-8692e75ecf6b-kube-api-access-nx588\") on node \"crc\" DevicePath \"\""
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.808277 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acf95000-e303-4f53-a73b-8692e75ecf6b-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 20:07:35 crc kubenswrapper[4813]: I0219 20:07:35.808359 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acf95000-e303-4f53-a73b-8692e75ecf6b-logs\") on node \"crc\" DevicePath \"\""
Feb 19 20:07:36 crc kubenswrapper[4813]: I0219 20:07:36.016430 4813 generic.go:334] "Generic (PLEG): container finished" podID="acf95000-e303-4f53-a73b-8692e75ecf6b" containerID="23bd45923ebdce56cc21dad8cdbeb1cc6890840ebde382429dd600ca68268e2d" exitCode=137
Feb 19 20:07:36 crc kubenswrapper[4813]: I0219 20:07:36.016472 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74b5489f4f-lwvmc" event={"ID":"acf95000-e303-4f53-a73b-8692e75ecf6b","Type":"ContainerDied","Data":"23bd45923ebdce56cc21dad8cdbeb1cc6890840ebde382429dd600ca68268e2d"}
Feb 19 20:07:36 crc kubenswrapper[4813]: I0219 20:07:36.016498 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74b5489f4f-lwvmc" event={"ID":"acf95000-e303-4f53-a73b-8692e75ecf6b","Type":"ContainerDied","Data":"5450a80407edf00d9f27ff702d0b9004f728497ae2c3e7acdcf4bf7ffc26016b"}
Feb 19 20:07:36 crc kubenswrapper[4813]: I0219 20:07:36.016513 4813 scope.go:117] "RemoveContainer" containerID="9fe0b129de502003a1b8708aeec3f464ddf040bebabadecd5f191da0cc37958c"
Feb 19 20:07:36 crc kubenswrapper[4813]: I0219 20:07:36.016632 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74b5489f4f-lwvmc"
Feb 19 20:07:36 crc kubenswrapper[4813]: I0219 20:07:36.051109 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74b5489f4f-lwvmc"]
Feb 19 20:07:36 crc kubenswrapper[4813]: I0219 20:07:36.060863 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-74b5489f4f-lwvmc"]
Feb 19 20:07:36 crc kubenswrapper[4813]: I0219 20:07:36.217046 4813 scope.go:117] "RemoveContainer" containerID="23bd45923ebdce56cc21dad8cdbeb1cc6890840ebde382429dd600ca68268e2d"
Feb 19 20:07:36 crc kubenswrapper[4813]: I0219 20:07:36.233928 4813 scope.go:117] "RemoveContainer" containerID="9fe0b129de502003a1b8708aeec3f464ddf040bebabadecd5f191da0cc37958c"
Feb 19 20:07:36 crc kubenswrapper[4813]: E0219 20:07:36.234432 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fe0b129de502003a1b8708aeec3f464ddf040bebabadecd5f191da0cc37958c\": container with ID starting with 9fe0b129de502003a1b8708aeec3f464ddf040bebabadecd5f191da0cc37958c not found: ID does not exist" containerID="9fe0b129de502003a1b8708aeec3f464ddf040bebabadecd5f191da0cc37958c"
Feb 19 20:07:36 crc kubenswrapper[4813]: I0219 20:07:36.234480 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fe0b129de502003a1b8708aeec3f464ddf040bebabadecd5f191da0cc37958c"} err="failed to get container status \"9fe0b129de502003a1b8708aeec3f464ddf040bebabadecd5f191da0cc37958c\": rpc error: code = NotFound desc = could not find container \"9fe0b129de502003a1b8708aeec3f464ddf040bebabadecd5f191da0cc37958c\": container with ID starting with 9fe0b129de502003a1b8708aeec3f464ddf040bebabadecd5f191da0cc37958c not found: ID does not exist"
Feb 19 20:07:36 crc kubenswrapper[4813]: I0219 20:07:36.234525 4813 scope.go:117] "RemoveContainer" containerID="23bd45923ebdce56cc21dad8cdbeb1cc6890840ebde382429dd600ca68268e2d"
Feb 19 20:07:36 crc kubenswrapper[4813]: E0219 20:07:36.234867 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23bd45923ebdce56cc21dad8cdbeb1cc6890840ebde382429dd600ca68268e2d\": container with ID starting with 23bd45923ebdce56cc21dad8cdbeb1cc6890840ebde382429dd600ca68268e2d not found: ID does not exist" containerID="23bd45923ebdce56cc21dad8cdbeb1cc6890840ebde382429dd600ca68268e2d"
Feb 19 20:07:36 crc kubenswrapper[4813]: I0219 20:07:36.234908 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23bd45923ebdce56cc21dad8cdbeb1cc6890840ebde382429dd600ca68268e2d"} err="failed to get container status \"23bd45923ebdce56cc21dad8cdbeb1cc6890840ebde382429dd600ca68268e2d\": rpc error: code = NotFound desc = could not find container \"23bd45923ebdce56cc21dad8cdbeb1cc6890840ebde382429dd600ca68268e2d\": container with ID starting with 23bd45923ebdce56cc21dad8cdbeb1cc6890840ebde382429dd600ca68268e2d not found: ID does not exist"
Feb 19 20:07:37 crc kubenswrapper[4813]: I0219 20:07:37.472343 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241"
Feb 19 20:07:37 crc kubenswrapper[4813]: E0219 20:07:37.472916 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 20:07:37 crc kubenswrapper[4813]: I0219 20:07:37.489809 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acf95000-e303-4f53-a73b-8692e75ecf6b" path="/var/lib/kubelet/pods/acf95000-e303-4f53-a73b-8692e75ecf6b/volumes"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.525263 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5f4b55c5d9-w5nw5"]
Feb 19 20:07:47 crc kubenswrapper[4813]: E0219 20:07:47.526299 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf95000-e303-4f53-a73b-8692e75ecf6b" containerName="horizon"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.526319 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf95000-e303-4f53-a73b-8692e75ecf6b" containerName="horizon"
Feb 19 20:07:47 crc kubenswrapper[4813]: E0219 20:07:47.526345 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf95000-e303-4f53-a73b-8692e75ecf6b" containerName="horizon-log"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.526354 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf95000-e303-4f53-a73b-8692e75ecf6b" containerName="horizon-log"
Feb 19 20:07:47 crc kubenswrapper[4813]: E0219 20:07:47.526370 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b44a06a-2700-4b32-9244-27f74d62d986" containerName="horizon-log"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.526377 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b44a06a-2700-4b32-9244-27f74d62d986" containerName="horizon-log"
Feb 19 20:07:47 crc kubenswrapper[4813]: E0219 20:07:47.526402 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b44a06a-2700-4b32-9244-27f74d62d986" containerName="horizon"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.526426 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b44a06a-2700-4b32-9244-27f74d62d986" containerName="horizon"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.526671 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b44a06a-2700-4b32-9244-27f74d62d986" containerName="horizon-log"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.526697 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf95000-e303-4f53-a73b-8692e75ecf6b" containerName="horizon"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.526711 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf95000-e303-4f53-a73b-8692e75ecf6b" containerName="horizon-log"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.526723 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b44a06a-2700-4b32-9244-27f74d62d986" containerName="horizon"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.529758 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.539619 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f4b55c5d9-w5nw5"]
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.676857 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/888e6569-1a57-48ea-af7c-d5ab24ae7f68-config-data\") pod \"horizon-5f4b55c5d9-w5nw5\" (UID: \"888e6569-1a57-48ea-af7c-d5ab24ae7f68\") " pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.676905 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b44lz\" (UniqueName: \"kubernetes.io/projected/888e6569-1a57-48ea-af7c-d5ab24ae7f68-kube-api-access-b44lz\") pod \"horizon-5f4b55c5d9-w5nw5\" (UID: \"888e6569-1a57-48ea-af7c-d5ab24ae7f68\") " pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.676945 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/888e6569-1a57-48ea-af7c-d5ab24ae7f68-scripts\") pod \"horizon-5f4b55c5d9-w5nw5\" (UID: \"888e6569-1a57-48ea-af7c-d5ab24ae7f68\") " pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.677118 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/888e6569-1a57-48ea-af7c-d5ab24ae7f68-horizon-secret-key\") pod \"horizon-5f4b55c5d9-w5nw5\" (UID: \"888e6569-1a57-48ea-af7c-d5ab24ae7f68\") " pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.677147 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/888e6569-1a57-48ea-af7c-d5ab24ae7f68-logs\") pod \"horizon-5f4b55c5d9-w5nw5\" (UID: \"888e6569-1a57-48ea-af7c-d5ab24ae7f68\") " pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.779024 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/888e6569-1a57-48ea-af7c-d5ab24ae7f68-horizon-secret-key\") pod \"horizon-5f4b55c5d9-w5nw5\" (UID: \"888e6569-1a57-48ea-af7c-d5ab24ae7f68\") " pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.779075 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/888e6569-1a57-48ea-af7c-d5ab24ae7f68-logs\") pod \"horizon-5f4b55c5d9-w5nw5\" (UID: \"888e6569-1a57-48ea-af7c-d5ab24ae7f68\") " pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.779126 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/888e6569-1a57-48ea-af7c-d5ab24ae7f68-config-data\") pod \"horizon-5f4b55c5d9-w5nw5\" (UID: \"888e6569-1a57-48ea-af7c-d5ab24ae7f68\") " pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.779154 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b44lz\" (UniqueName: \"kubernetes.io/projected/888e6569-1a57-48ea-af7c-d5ab24ae7f68-kube-api-access-b44lz\") pod \"horizon-5f4b55c5d9-w5nw5\" (UID: \"888e6569-1a57-48ea-af7c-d5ab24ae7f68\") " pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.779186 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/888e6569-1a57-48ea-af7c-d5ab24ae7f68-scripts\") pod \"horizon-5f4b55c5d9-w5nw5\" (UID: \"888e6569-1a57-48ea-af7c-d5ab24ae7f68\") " pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.779886 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/888e6569-1a57-48ea-af7c-d5ab24ae7f68-scripts\") pod \"horizon-5f4b55c5d9-w5nw5\" (UID: \"888e6569-1a57-48ea-af7c-d5ab24ae7f68\") " pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.780536 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/888e6569-1a57-48ea-af7c-d5ab24ae7f68-logs\") pod \"horizon-5f4b55c5d9-w5nw5\" (UID: \"888e6569-1a57-48ea-af7c-d5ab24ae7f68\") " pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.781532 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/888e6569-1a57-48ea-af7c-d5ab24ae7f68-config-data\") pod \"horizon-5f4b55c5d9-w5nw5\" (UID: \"888e6569-1a57-48ea-af7c-d5ab24ae7f68\") " pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.798648 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/888e6569-1a57-48ea-af7c-d5ab24ae7f68-horizon-secret-key\") pod \"horizon-5f4b55c5d9-w5nw5\" (UID: \"888e6569-1a57-48ea-af7c-d5ab24ae7f68\") " pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.801353 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b44lz\" (UniqueName: \"kubernetes.io/projected/888e6569-1a57-48ea-af7c-d5ab24ae7f68-kube-api-access-b44lz\") pod \"horizon-5f4b55c5d9-w5nw5\" (UID: \"888e6569-1a57-48ea-af7c-d5ab24ae7f68\") " pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:47 crc kubenswrapper[4813]: I0219 20:07:47.862329 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f4b55c5d9-w5nw5"
Feb 19 20:07:48 crc kubenswrapper[4813]: I0219 20:07:48.366186 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f4b55c5d9-w5nw5"]
Feb 19 20:07:48 crc kubenswrapper[4813]: I0219 20:07:48.776839 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-tpm5r"]
Feb 19 20:07:48 crc kubenswrapper[4813]: I0219 20:07:48.778919 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tpm5r"
Feb 19 20:07:48 crc kubenswrapper[4813]: I0219 20:07:48.805928 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-tpm5r"]
Feb 19 20:07:48 crc kubenswrapper[4813]: I0219 20:07:48.887195 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-d489-account-create-update-kc5hq"]
Feb 19 20:07:48 crc kubenswrapper[4813]: I0219 20:07:48.888697 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-d489-account-create-update-kc5hq"
Feb 19 20:07:48 crc kubenswrapper[4813]: I0219 20:07:48.891007 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret"
Feb 19 20:07:48 crc kubenswrapper[4813]: I0219 20:07:48.904463 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-d489-account-create-update-kc5hq"]
Feb 19 20:07:48 crc kubenswrapper[4813]: I0219 20:07:48.925362 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnlgn\" (UniqueName: \"kubernetes.io/projected/c36176e6-298e-4037-b91f-1eb7eeb7092f-kube-api-access-nnlgn\") pod \"heat-db-create-tpm5r\" (UID: \"c36176e6-298e-4037-b91f-1eb7eeb7092f\") " pod="openstack/heat-db-create-tpm5r"
Feb 19 20:07:48 crc kubenswrapper[4813]: I0219 20:07:48.925414 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c36176e6-298e-4037-b91f-1eb7eeb7092f-operator-scripts\") pod \"heat-db-create-tpm5r\" (UID: \"c36176e6-298e-4037-b91f-1eb7eeb7092f\") " pod="openstack/heat-db-create-tpm5r"
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.027109 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bcf0999-1a5d-40c1-905e-5e8bbc6c8123-operator-scripts\") pod \"heat-d489-account-create-update-kc5hq\" (UID: \"4bcf0999-1a5d-40c1-905e-5e8bbc6c8123\") " pod="openstack/heat-d489-account-create-update-kc5hq"
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.028262 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjsjf\" (UniqueName: \"kubernetes.io/projected/4bcf0999-1a5d-40c1-905e-5e8bbc6c8123-kube-api-access-fjsjf\") pod \"heat-d489-account-create-update-kc5hq\" (UID: \"4bcf0999-1a5d-40c1-905e-5e8bbc6c8123\") " pod="openstack/heat-d489-account-create-update-kc5hq"
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.028460 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnlgn\" (UniqueName: \"kubernetes.io/projected/c36176e6-298e-4037-b91f-1eb7eeb7092f-kube-api-access-nnlgn\") pod \"heat-db-create-tpm5r\" (UID: \"c36176e6-298e-4037-b91f-1eb7eeb7092f\") " pod="openstack/heat-db-create-tpm5r"
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.028535 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c36176e6-298e-4037-b91f-1eb7eeb7092f-operator-scripts\") pod \"heat-db-create-tpm5r\" (UID: \"c36176e6-298e-4037-b91f-1eb7eeb7092f\") " pod="openstack/heat-db-create-tpm5r"
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.029326 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c36176e6-298e-4037-b91f-1eb7eeb7092f-operator-scripts\") pod \"heat-db-create-tpm5r\" (UID: \"c36176e6-298e-4037-b91f-1eb7eeb7092f\") " pod="openstack/heat-db-create-tpm5r"
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.050858 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnlgn\" (UniqueName: \"kubernetes.io/projected/c36176e6-298e-4037-b91f-1eb7eeb7092f-kube-api-access-nnlgn\") pod \"heat-db-create-tpm5r\" (UID: \"c36176e6-298e-4037-b91f-1eb7eeb7092f\") " pod="openstack/heat-db-create-tpm5r"
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.131638 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bcf0999-1a5d-40c1-905e-5e8bbc6c8123-operator-scripts\") pod \"heat-d489-account-create-update-kc5hq\" (UID: \"4bcf0999-1a5d-40c1-905e-5e8bbc6c8123\") " pod="openstack/heat-d489-account-create-update-kc5hq"
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.131732 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjsjf\" (UniqueName: \"kubernetes.io/projected/4bcf0999-1a5d-40c1-905e-5e8bbc6c8123-kube-api-access-fjsjf\") pod \"heat-d489-account-create-update-kc5hq\" (UID: \"4bcf0999-1a5d-40c1-905e-5e8bbc6c8123\") " pod="openstack/heat-d489-account-create-update-kc5hq"
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.132603 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bcf0999-1a5d-40c1-905e-5e8bbc6c8123-operator-scripts\") pod \"heat-d489-account-create-update-kc5hq\" (UID: \"4bcf0999-1a5d-40c1-905e-5e8bbc6c8123\") " pod="openstack/heat-d489-account-create-update-kc5hq"
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.148330 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjsjf\" (UniqueName: \"kubernetes.io/projected/4bcf0999-1a5d-40c1-905e-5e8bbc6c8123-kube-api-access-fjsjf\") pod \"heat-d489-account-create-update-kc5hq\" (UID: \"4bcf0999-1a5d-40c1-905e-5e8bbc6c8123\") " pod="openstack/heat-d489-account-create-update-kc5hq"
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.156519 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tpm5r"
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.168330 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f4b55c5d9-w5nw5" event={"ID":"888e6569-1a57-48ea-af7c-d5ab24ae7f68","Type":"ContainerStarted","Data":"76c46c7dab0bdc79275ccc7e518ade7310a9ca01f8e3939fa43879cac68039cc"}
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.168381 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f4b55c5d9-w5nw5" event={"ID":"888e6569-1a57-48ea-af7c-d5ab24ae7f68","Type":"ContainerStarted","Data":"1be796ad36aad68e8743a86a26500970db1567342fb0f3560400194a212cb760"}
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.168391 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f4b55c5d9-w5nw5" event={"ID":"888e6569-1a57-48ea-af7c-d5ab24ae7f68","Type":"ContainerStarted","Data":"27f20ff7ff370a5da60444edd5437120b512964c4f78139e4a5b1f6c7c7a3718"}
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.201477 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5f4b55c5d9-w5nw5" podStartSLOduration=2.201456257 podStartE2EDuration="2.201456257s" podCreationTimestamp="2026-02-19 20:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:07:49.192420837 +0000 UTC m=+5888.417861378" watchObservedRunningTime="2026-02-19 20:07:49.201456257 +0000 UTC m=+5888.426896798"
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.215380 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-d489-account-create-update-kc5hq"
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.844369 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-tpm5r"]
Feb 19 20:07:49 crc kubenswrapper[4813]: I0219 20:07:49.902565 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-d489-account-create-update-kc5hq"]
Feb 19 20:07:49 crc kubenswrapper[4813]: W0219 20:07:49.913599 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bcf0999_1a5d_40c1_905e_5e8bbc6c8123.slice/crio-e96140fdd3c257ee863aa3e9f4a4cb563299518f88f77eec8309843b7e69f217 WatchSource:0}: Error finding container e96140fdd3c257ee863aa3e9f4a4cb563299518f88f77eec8309843b7e69f217: Status 404 returned error can't find the container with id e96140fdd3c257ee863aa3e9f4a4cb563299518f88f77eec8309843b7e69f217
Feb 19 20:07:50 crc kubenswrapper[4813]: I0219 20:07:50.181553 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-d489-account-create-update-kc5hq" event={"ID":"4bcf0999-1a5d-40c1-905e-5e8bbc6c8123","Type":"ContainerStarted","Data":"e6de1d70482cf3b858196e59b3f7b0241edc9d96b1a12981463e418be224225e"}
Feb 19 20:07:50 crc kubenswrapper[4813]: I0219 20:07:50.183735 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-d489-account-create-update-kc5hq" event={"ID":"4bcf0999-1a5d-40c1-905e-5e8bbc6c8123","Type":"ContainerStarted","Data":"e96140fdd3c257ee863aa3e9f4a4cb563299518f88f77eec8309843b7e69f217"}
Feb 19 20:07:50 crc kubenswrapper[4813]: I0219 20:07:50.186240 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tpm5r" event={"ID":"c36176e6-298e-4037-b91f-1eb7eeb7092f","Type":"ContainerStarted","Data":"43926e7bc624b01a0800b4ea47da8f86658ac864e4df90efdaa78ea4764f2b57"}
Feb 19 20:07:50 crc kubenswrapper[4813]: I0219 20:07:50.186278 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tpm5r" event={"ID":"c36176e6-298e-4037-b91f-1eb7eeb7092f","Type":"ContainerStarted","Data":"3480a837128b669fff8f44a3271aa6d303a5f9799dc967e4664e6319e90bed9a"}
Feb 19 20:07:50 crc kubenswrapper[4813]: I0219 20:07:50.202497 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-d489-account-create-update-kc5hq" podStartSLOduration=2.202481158 podStartE2EDuration="2.202481158s" podCreationTimestamp="2026-02-19 20:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:07:50.197902747 +0000 UTC m=+5889.423343278" watchObservedRunningTime="2026-02-19 20:07:50.202481158 +0000 UTC m=+5889.427921699"
Feb 19 20:07:50 crc kubenswrapper[4813]: I0219 20:07:50.215143 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-tpm5r" podStartSLOduration=2.215127679 podStartE2EDuration="2.215127679s" podCreationTimestamp="2026-02-19 20:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:07:50.213058425 +0000 UTC m=+5889.438498966" watchObservedRunningTime="2026-02-19 20:07:50.215127679 +0000 UTC m=+5889.440568220"
Feb 19 20:07:51 crc kubenswrapper[4813]: I0219 20:07:51.197230 4813 generic.go:334] "Generic (PLEG): container finished" podID="c36176e6-298e-4037-b91f-1eb7eeb7092f" containerID="43926e7bc624b01a0800b4ea47da8f86658ac864e4df90efdaa78ea4764f2b57" exitCode=0
Feb 19 20:07:51 crc kubenswrapper[4813]: I0219 20:07:51.197338 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tpm5r" event={"ID":"c36176e6-298e-4037-b91f-1eb7eeb7092f","Type":"ContainerDied","Data":"43926e7bc624b01a0800b4ea47da8f86658ac864e4df90efdaa78ea4764f2b57"}
Feb 19 20:07:51 crc kubenswrapper[4813]: I0219 20:07:51.202867 4813 generic.go:334] "Generic (PLEG): container finished" podID="4bcf0999-1a5d-40c1-905e-5e8bbc6c8123" containerID="e6de1d70482cf3b858196e59b3f7b0241edc9d96b1a12981463e418be224225e" exitCode=0
Feb 19 20:07:51 crc kubenswrapper[4813]: I0219 20:07:51.202942 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-d489-account-create-update-kc5hq" event={"ID":"4bcf0999-1a5d-40c1-905e-5e8bbc6c8123","Type":"ContainerDied","Data":"e6de1d70482cf3b858196e59b3f7b0241edc9d96b1a12981463e418be224225e"}
Feb 19 20:07:52 crc kubenswrapper[4813]: I0219 20:07:52.471535 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241"
Feb 19 20:07:52 crc kubenswrapper[4813]: E0219 20:07:52.472196 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 20:07:52 crc kubenswrapper[4813]: I0219 20:07:52.657972 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-d489-account-create-update-kc5hq"
Feb 19 20:07:52 crc kubenswrapper[4813]: I0219 20:07:52.667235 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tpm5r"
Feb 19 20:07:52 crc kubenswrapper[4813]: I0219 20:07:52.805107 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjsjf\" (UniqueName: \"kubernetes.io/projected/4bcf0999-1a5d-40c1-905e-5e8bbc6c8123-kube-api-access-fjsjf\") pod \"4bcf0999-1a5d-40c1-905e-5e8bbc6c8123\" (UID: \"4bcf0999-1a5d-40c1-905e-5e8bbc6c8123\") "
Feb 19 20:07:52 crc kubenswrapper[4813]: I0219 20:07:52.805195 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bcf0999-1a5d-40c1-905e-5e8bbc6c8123-operator-scripts\") pod \"4bcf0999-1a5d-40c1-905e-5e8bbc6c8123\" (UID: \"4bcf0999-1a5d-40c1-905e-5e8bbc6c8123\") "
Feb 19 20:07:52 crc kubenswrapper[4813]: I0219 20:07:52.805339 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnlgn\" (UniqueName: \"kubernetes.io/projected/c36176e6-298e-4037-b91f-1eb7eeb7092f-kube-api-access-nnlgn\") pod \"c36176e6-298e-4037-b91f-1eb7eeb7092f\" (UID: \"c36176e6-298e-4037-b91f-1eb7eeb7092f\") "
Feb 19 20:07:52 crc kubenswrapper[4813]: I0219 20:07:52.805365 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c36176e6-298e-4037-b91f-1eb7eeb7092f-operator-scripts\") pod \"c36176e6-298e-4037-b91f-1eb7eeb7092f\" (UID: \"c36176e6-298e-4037-b91f-1eb7eeb7092f\") "
Feb 19 20:07:52 crc kubenswrapper[4813]: I0219 20:07:52.806382 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bcf0999-1a5d-40c1-905e-5e8bbc6c8123-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4bcf0999-1a5d-40c1-905e-5e8bbc6c8123" (UID: "4bcf0999-1a5d-40c1-905e-5e8bbc6c8123"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:07:52 crc kubenswrapper[4813]: I0219 20:07:52.806507 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c36176e6-298e-4037-b91f-1eb7eeb7092f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c36176e6-298e-4037-b91f-1eb7eeb7092f" (UID: "c36176e6-298e-4037-b91f-1eb7eeb7092f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:07:52 crc kubenswrapper[4813]: I0219 20:07:52.816228 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c36176e6-298e-4037-b91f-1eb7eeb7092f-kube-api-access-nnlgn" (OuterVolumeSpecName: "kube-api-access-nnlgn") pod "c36176e6-298e-4037-b91f-1eb7eeb7092f" (UID: "c36176e6-298e-4037-b91f-1eb7eeb7092f"). InnerVolumeSpecName "kube-api-access-nnlgn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:07:52 crc kubenswrapper[4813]: I0219 20:07:52.816284 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bcf0999-1a5d-40c1-905e-5e8bbc6c8123-kube-api-access-fjsjf" (OuterVolumeSpecName: "kube-api-access-fjsjf") pod "4bcf0999-1a5d-40c1-905e-5e8bbc6c8123" (UID: "4bcf0999-1a5d-40c1-905e-5e8bbc6c8123"). InnerVolumeSpecName "kube-api-access-fjsjf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:07:52 crc kubenswrapper[4813]: I0219 20:07:52.908864 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnlgn\" (UniqueName: \"kubernetes.io/projected/c36176e6-298e-4037-b91f-1eb7eeb7092f-kube-api-access-nnlgn\") on node \"crc\" DevicePath \"\""
Feb 19 20:07:52 crc kubenswrapper[4813]: I0219 20:07:52.908929 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c36176e6-298e-4037-b91f-1eb7eeb7092f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 20:07:52 crc kubenswrapper[4813]: I0219 20:07:52.908949 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjsjf\" (UniqueName: \"kubernetes.io/projected/4bcf0999-1a5d-40c1-905e-5e8bbc6c8123-kube-api-access-fjsjf\") on node \"crc\" DevicePath \"\""
Feb 19 20:07:52 crc kubenswrapper[4813]: I0219 20:07:52.909000 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bcf0999-1a5d-40c1-905e-5e8bbc6c8123-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 20:07:53 crc kubenswrapper[4813]: I0219 20:07:53.220117 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-d489-account-create-update-kc5hq" Feb 19 20:07:53 crc kubenswrapper[4813]: I0219 20:07:53.220155 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-d489-account-create-update-kc5hq" event={"ID":"4bcf0999-1a5d-40c1-905e-5e8bbc6c8123","Type":"ContainerDied","Data":"e96140fdd3c257ee863aa3e9f4a4cb563299518f88f77eec8309843b7e69f217"} Feb 19 20:07:53 crc kubenswrapper[4813]: I0219 20:07:53.220196 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e96140fdd3c257ee863aa3e9f4a4cb563299518f88f77eec8309843b7e69f217" Feb 19 20:07:53 crc kubenswrapper[4813]: I0219 20:07:53.223523 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tpm5r" event={"ID":"c36176e6-298e-4037-b91f-1eb7eeb7092f","Type":"ContainerDied","Data":"3480a837128b669fff8f44a3271aa6d303a5f9799dc967e4664e6319e90bed9a"} Feb 19 20:07:53 crc kubenswrapper[4813]: I0219 20:07:53.223549 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-tpm5r" Feb 19 20:07:53 crc kubenswrapper[4813]: I0219 20:07:53.223560 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3480a837128b669fff8f44a3271aa6d303a5f9799dc967e4664e6319e90bed9a" Feb 19 20:07:53 crc kubenswrapper[4813]: I0219 20:07:53.839734 4813 scope.go:117] "RemoveContainer" containerID="75a058bcf5c5e7e3901870fa0159171640d2abfa37e3c38f056e1c949c19754a" Feb 19 20:07:53 crc kubenswrapper[4813]: I0219 20:07:53.869168 4813 scope.go:117] "RemoveContainer" containerID="901af8cf9f4076f6ef836e7266c838a35655300a2acbddcf0b99b721435aacf0" Feb 19 20:07:53 crc kubenswrapper[4813]: I0219 20:07:53.893465 4813 scope.go:117] "RemoveContainer" containerID="b61bcb3d7a081863ccf6acfe6104d6092a64038900b811d1824de42e0d3e0e63" Feb 19 20:07:53 crc kubenswrapper[4813]: I0219 20:07:53.945639 4813 scope.go:117] "RemoveContainer" containerID="92c0a4bf9042bb3b779d3bb566cfcedc731811c0187fc6e44eba0afd89457312" Feb 19 20:07:53 crc kubenswrapper[4813]: I0219 20:07:53.995173 4813 scope.go:117] "RemoveContainer" containerID="e98b49ddb1952baade3c95b6d16c78a8ae52f329b1a7e0382cd3db9d4024f0a9" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.007929 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-bf2dh"] Feb 19 20:07:54 crc kubenswrapper[4813]: E0219 20:07:54.008404 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bcf0999-1a5d-40c1-905e-5e8bbc6c8123" containerName="mariadb-account-create-update" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.008423 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bcf0999-1a5d-40c1-905e-5e8bbc6c8123" containerName="mariadb-account-create-update" Feb 19 20:07:54 crc kubenswrapper[4813]: E0219 20:07:54.008463 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c36176e6-298e-4037-b91f-1eb7eeb7092f" containerName="mariadb-database-create" Feb 19 20:07:54 crc 
kubenswrapper[4813]: I0219 20:07:54.008469 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36176e6-298e-4037-b91f-1eb7eeb7092f" containerName="mariadb-database-create" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.008671 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bcf0999-1a5d-40c1-905e-5e8bbc6c8123" containerName="mariadb-account-create-update" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.008693 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c36176e6-298e-4037-b91f-1eb7eeb7092f" containerName="mariadb-database-create" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.009488 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bf2dh" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.015928 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-bf2dh"] Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.019297 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.019660 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-z2jb8" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.149327 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c40ee40-5f00-4771-96d4-90e70b4c2580-combined-ca-bundle\") pod \"heat-db-sync-bf2dh\" (UID: \"0c40ee40-5f00-4771-96d4-90e70b4c2580\") " pod="openstack/heat-db-sync-bf2dh" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.149389 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2glc5\" (UniqueName: \"kubernetes.io/projected/0c40ee40-5f00-4771-96d4-90e70b4c2580-kube-api-access-2glc5\") pod \"heat-db-sync-bf2dh\" (UID: 
\"0c40ee40-5f00-4771-96d4-90e70b4c2580\") " pod="openstack/heat-db-sync-bf2dh" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.149435 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c40ee40-5f00-4771-96d4-90e70b4c2580-config-data\") pod \"heat-db-sync-bf2dh\" (UID: \"0c40ee40-5f00-4771-96d4-90e70b4c2580\") " pod="openstack/heat-db-sync-bf2dh" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.250795 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2glc5\" (UniqueName: \"kubernetes.io/projected/0c40ee40-5f00-4771-96d4-90e70b4c2580-kube-api-access-2glc5\") pod \"heat-db-sync-bf2dh\" (UID: \"0c40ee40-5f00-4771-96d4-90e70b4c2580\") " pod="openstack/heat-db-sync-bf2dh" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.251245 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c40ee40-5f00-4771-96d4-90e70b4c2580-config-data\") pod \"heat-db-sync-bf2dh\" (UID: \"0c40ee40-5f00-4771-96d4-90e70b4c2580\") " pod="openstack/heat-db-sync-bf2dh" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.251474 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c40ee40-5f00-4771-96d4-90e70b4c2580-combined-ca-bundle\") pod \"heat-db-sync-bf2dh\" (UID: \"0c40ee40-5f00-4771-96d4-90e70b4c2580\") " pod="openstack/heat-db-sync-bf2dh" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.257580 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c40ee40-5f00-4771-96d4-90e70b4c2580-config-data\") pod \"heat-db-sync-bf2dh\" (UID: \"0c40ee40-5f00-4771-96d4-90e70b4c2580\") " pod="openstack/heat-db-sync-bf2dh" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.269817 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2glc5\" (UniqueName: \"kubernetes.io/projected/0c40ee40-5f00-4771-96d4-90e70b4c2580-kube-api-access-2glc5\") pod \"heat-db-sync-bf2dh\" (UID: \"0c40ee40-5f00-4771-96d4-90e70b4c2580\") " pod="openstack/heat-db-sync-bf2dh" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.284562 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c40ee40-5f00-4771-96d4-90e70b4c2580-combined-ca-bundle\") pod \"heat-db-sync-bf2dh\" (UID: \"0c40ee40-5f00-4771-96d4-90e70b4c2580\") " pod="openstack/heat-db-sync-bf2dh" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.392915 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bf2dh" Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.895922 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-bf2dh"] Feb 19 20:07:54 crc kubenswrapper[4813]: I0219 20:07:54.915311 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:07:55 crc kubenswrapper[4813]: I0219 20:07:55.247944 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bf2dh" event={"ID":"0c40ee40-5f00-4771-96d4-90e70b4c2580","Type":"ContainerStarted","Data":"1c4354d5755964b0e32d50aef8ad56091829f79d24ce9cb03631d9125d5202eb"} Feb 19 20:07:57 crc kubenswrapper[4813]: I0219 20:07:57.862596 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5f4b55c5d9-w5nw5" Feb 19 20:07:57 crc kubenswrapper[4813]: I0219 20:07:57.862891 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5f4b55c5d9-w5nw5" Feb 19 20:08:02 crc kubenswrapper[4813]: I0219 20:08:02.316431 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bf2dh" 
event={"ID":"0c40ee40-5f00-4771-96d4-90e70b4c2580","Type":"ContainerStarted","Data":"29e3cc9eb7cb8cc95d3e9e5a1de8380392abda408af89c06d742e5694b6e020e"} Feb 19 20:08:04 crc kubenswrapper[4813]: I0219 20:08:04.338833 4813 generic.go:334] "Generic (PLEG): container finished" podID="0c40ee40-5f00-4771-96d4-90e70b4c2580" containerID="29e3cc9eb7cb8cc95d3e9e5a1de8380392abda408af89c06d742e5694b6e020e" exitCode=0 Feb 19 20:08:04 crc kubenswrapper[4813]: I0219 20:08:04.339012 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bf2dh" event={"ID":"0c40ee40-5f00-4771-96d4-90e70b4c2580","Type":"ContainerDied","Data":"29e3cc9eb7cb8cc95d3e9e5a1de8380392abda408af89c06d742e5694b6e020e"} Feb 19 20:08:05 crc kubenswrapper[4813]: I0219 20:08:05.693020 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bf2dh" Feb 19 20:08:05 crc kubenswrapper[4813]: I0219 20:08:05.741898 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c40ee40-5f00-4771-96d4-90e70b4c2580-config-data\") pod \"0c40ee40-5f00-4771-96d4-90e70b4c2580\" (UID: \"0c40ee40-5f00-4771-96d4-90e70b4c2580\") " Feb 19 20:08:05 crc kubenswrapper[4813]: I0219 20:08:05.742112 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2glc5\" (UniqueName: \"kubernetes.io/projected/0c40ee40-5f00-4771-96d4-90e70b4c2580-kube-api-access-2glc5\") pod \"0c40ee40-5f00-4771-96d4-90e70b4c2580\" (UID: \"0c40ee40-5f00-4771-96d4-90e70b4c2580\") " Feb 19 20:08:05 crc kubenswrapper[4813]: I0219 20:08:05.742354 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c40ee40-5f00-4771-96d4-90e70b4c2580-combined-ca-bundle\") pod \"0c40ee40-5f00-4771-96d4-90e70b4c2580\" (UID: \"0c40ee40-5f00-4771-96d4-90e70b4c2580\") " Feb 19 20:08:05 crc 
kubenswrapper[4813]: I0219 20:08:05.753342 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c40ee40-5f00-4771-96d4-90e70b4c2580-kube-api-access-2glc5" (OuterVolumeSpecName: "kube-api-access-2glc5") pod "0c40ee40-5f00-4771-96d4-90e70b4c2580" (UID: "0c40ee40-5f00-4771-96d4-90e70b4c2580"). InnerVolumeSpecName "kube-api-access-2glc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:08:05 crc kubenswrapper[4813]: I0219 20:08:05.774731 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c40ee40-5f00-4771-96d4-90e70b4c2580-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c40ee40-5f00-4771-96d4-90e70b4c2580" (UID: "0c40ee40-5f00-4771-96d4-90e70b4c2580"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:08:05 crc kubenswrapper[4813]: I0219 20:08:05.817639 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c40ee40-5f00-4771-96d4-90e70b4c2580-config-data" (OuterVolumeSpecName: "config-data") pod "0c40ee40-5f00-4771-96d4-90e70b4c2580" (UID: "0c40ee40-5f00-4771-96d4-90e70b4c2580"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:08:05 crc kubenswrapper[4813]: I0219 20:08:05.844660 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c40ee40-5f00-4771-96d4-90e70b4c2580-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:05 crc kubenswrapper[4813]: I0219 20:08:05.844890 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2glc5\" (UniqueName: \"kubernetes.io/projected/0c40ee40-5f00-4771-96d4-90e70b4c2580-kube-api-access-2glc5\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:05 crc kubenswrapper[4813]: I0219 20:08:05.845012 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c40ee40-5f00-4771-96d4-90e70b4c2580-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:06 crc kubenswrapper[4813]: I0219 20:08:06.047747 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-qgx9f"] Feb 19 20:08:06 crc kubenswrapper[4813]: I0219 20:08:06.060455 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-758f-account-create-update-v6zkq"] Feb 19 20:08:06 crc kubenswrapper[4813]: I0219 20:08:06.071772 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-qgx9f"] Feb 19 20:08:06 crc kubenswrapper[4813]: I0219 20:08:06.080659 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-758f-account-create-update-v6zkq"] Feb 19 20:08:06 crc kubenswrapper[4813]: I0219 20:08:06.362243 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bf2dh" event={"ID":"0c40ee40-5f00-4771-96d4-90e70b4c2580","Type":"ContainerDied","Data":"1c4354d5755964b0e32d50aef8ad56091829f79d24ce9cb03631d9125d5202eb"} Feb 19 20:08:06 crc kubenswrapper[4813]: I0219 20:08:06.362579 4813 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1c4354d5755964b0e32d50aef8ad56091829f79d24ce9cb03631d9125d5202eb" Feb 19 20:08:06 crc kubenswrapper[4813]: I0219 20:08:06.362327 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bf2dh" Feb 19 20:08:06 crc kubenswrapper[4813]: I0219 20:08:06.472012 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:08:06 crc kubenswrapper[4813]: E0219 20:08:06.472318 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.284003 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-79b88f9bd9-88k94"] Feb 19 20:08:07 crc kubenswrapper[4813]: E0219 20:08:07.284831 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c40ee40-5f00-4771-96d4-90e70b4c2580" containerName="heat-db-sync" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.284849 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c40ee40-5f00-4771-96d4-90e70b4c2580" containerName="heat-db-sync" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.285147 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c40ee40-5f00-4771-96d4-90e70b4c2580" containerName="heat-db-sync" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.286234 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.290925 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-z2jb8" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.291260 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.295360 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.301751 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-79b88f9bd9-88k94"] Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.382670 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37fece7-aabf-465c-b612-eeb01bf398cb-config-data\") pod \"heat-engine-79b88f9bd9-88k94\" (UID: \"b37fece7-aabf-465c-b612-eeb01bf398cb\") " pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.382744 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37fece7-aabf-465c-b612-eeb01bf398cb-combined-ca-bundle\") pod \"heat-engine-79b88f9bd9-88k94\" (UID: \"b37fece7-aabf-465c-b612-eeb01bf398cb\") " pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.382771 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fz6p\" (UniqueName: \"kubernetes.io/projected/b37fece7-aabf-465c-b612-eeb01bf398cb-kube-api-access-5fz6p\") pod \"heat-engine-79b88f9bd9-88k94\" (UID: \"b37fece7-aabf-465c-b612-eeb01bf398cb\") " pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:07 crc 
kubenswrapper[4813]: I0219 20:08:07.382835 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b37fece7-aabf-465c-b612-eeb01bf398cb-config-data-custom\") pod \"heat-engine-79b88f9bd9-88k94\" (UID: \"b37fece7-aabf-465c-b612-eeb01bf398cb\") " pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.429171 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-54f9fdc8b-rl6qw"] Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.430668 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.432367 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.447011 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6c47fc8b76-csbxx"] Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.448720 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.451472 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.457607 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-54f9fdc8b-rl6qw"] Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.485250 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bf821c9-c4e7-44d0-a4ba-4651c90947bd-config-data-custom\") pod \"heat-api-6c47fc8b76-csbxx\" (UID: \"1bf821c9-c4e7-44d0-a4ba-4651c90947bd\") " pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.485316 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccd48347-0524-47b2-a3b8-c38634e356a6-config-data\") pod \"heat-cfnapi-54f9fdc8b-rl6qw\" (UID: \"ccd48347-0524-47b2-a3b8-c38634e356a6\") " pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.485380 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37fece7-aabf-465c-b612-eeb01bf398cb-config-data\") pod \"heat-engine-79b88f9bd9-88k94\" (UID: \"b37fece7-aabf-465c-b612-eeb01bf398cb\") " pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.485454 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdrz9\" (UniqueName: \"kubernetes.io/projected/1bf821c9-c4e7-44d0-a4ba-4651c90947bd-kube-api-access-hdrz9\") pod \"heat-api-6c47fc8b76-csbxx\" (UID: \"1bf821c9-c4e7-44d0-a4ba-4651c90947bd\") " pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 
19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.485497 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf821c9-c4e7-44d0-a4ba-4651c90947bd-combined-ca-bundle\") pod \"heat-api-6c47fc8b76-csbxx\" (UID: \"1bf821c9-c4e7-44d0-a4ba-4651c90947bd\") " pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.485524 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37fece7-aabf-465c-b612-eeb01bf398cb-combined-ca-bundle\") pod \"heat-engine-79b88f9bd9-88k94\" (UID: \"b37fece7-aabf-465c-b612-eeb01bf398cb\") " pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.485558 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fz6p\" (UniqueName: \"kubernetes.io/projected/b37fece7-aabf-465c-b612-eeb01bf398cb-kube-api-access-5fz6p\") pod \"heat-engine-79b88f9bd9-88k94\" (UID: \"b37fece7-aabf-465c-b612-eeb01bf398cb\") " pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.485582 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf821c9-c4e7-44d0-a4ba-4651c90947bd-config-data\") pod \"heat-api-6c47fc8b76-csbxx\" (UID: \"1bf821c9-c4e7-44d0-a4ba-4651c90947bd\") " pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.485648 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccd48347-0524-47b2-a3b8-c38634e356a6-config-data-custom\") pod \"heat-cfnapi-54f9fdc8b-rl6qw\" (UID: \"ccd48347-0524-47b2-a3b8-c38634e356a6\") " 
pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.485676 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgdrb\" (UniqueName: \"kubernetes.io/projected/ccd48347-0524-47b2-a3b8-c38634e356a6-kube-api-access-cgdrb\") pod \"heat-cfnapi-54f9fdc8b-rl6qw\" (UID: \"ccd48347-0524-47b2-a3b8-c38634e356a6\") " pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.485742 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd48347-0524-47b2-a3b8-c38634e356a6-combined-ca-bundle\") pod \"heat-cfnapi-54f9fdc8b-rl6qw\" (UID: \"ccd48347-0524-47b2-a3b8-c38634e356a6\") " pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.485772 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b37fece7-aabf-465c-b612-eeb01bf398cb-config-data-custom\") pod \"heat-engine-79b88f9bd9-88k94\" (UID: \"b37fece7-aabf-465c-b612-eeb01bf398cb\") " pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.491621 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35510066-9f5c-49f6-b361-6ddd69e8b62a" path="/var/lib/kubelet/pods/35510066-9f5c-49f6-b361-6ddd69e8b62a/volumes" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.494275 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b91f05-3f40-4031-86ac-fb99aa3cc841" path="/var/lib/kubelet/pods/87b91f05-3f40-4031-86ac-fb99aa3cc841/volumes" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.494968 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c47fc8b76-csbxx"] Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.498548 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b37fece7-aabf-465c-b612-eeb01bf398cb-config-data-custom\") pod \"heat-engine-79b88f9bd9-88k94\" (UID: \"b37fece7-aabf-465c-b612-eeb01bf398cb\") " pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.501074 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b37fece7-aabf-465c-b612-eeb01bf398cb-config-data\") pod \"heat-engine-79b88f9bd9-88k94\" (UID: \"b37fece7-aabf-465c-b612-eeb01bf398cb\") " pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.514880 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fz6p\" (UniqueName: \"kubernetes.io/projected/b37fece7-aabf-465c-b612-eeb01bf398cb-kube-api-access-5fz6p\") pod \"heat-engine-79b88f9bd9-88k94\" (UID: \"b37fece7-aabf-465c-b612-eeb01bf398cb\") " pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.516067 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37fece7-aabf-465c-b612-eeb01bf398cb-combined-ca-bundle\") pod \"heat-engine-79b88f9bd9-88k94\" (UID: \"b37fece7-aabf-465c-b612-eeb01bf398cb\") " pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.587942 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bf821c9-c4e7-44d0-a4ba-4651c90947bd-config-data-custom\") pod \"heat-api-6c47fc8b76-csbxx\" (UID: \"1bf821c9-c4e7-44d0-a4ba-4651c90947bd\") " pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.588029 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccd48347-0524-47b2-a3b8-c38634e356a6-config-data\") pod \"heat-cfnapi-54f9fdc8b-rl6qw\" (UID: \"ccd48347-0524-47b2-a3b8-c38634e356a6\") " pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.588111 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdrz9\" (UniqueName: \"kubernetes.io/projected/1bf821c9-c4e7-44d0-a4ba-4651c90947bd-kube-api-access-hdrz9\") pod \"heat-api-6c47fc8b76-csbxx\" (UID: \"1bf821c9-c4e7-44d0-a4ba-4651c90947bd\") " pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.588155 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf821c9-c4e7-44d0-a4ba-4651c90947bd-combined-ca-bundle\") pod \"heat-api-6c47fc8b76-csbxx\" (UID: \"1bf821c9-c4e7-44d0-a4ba-4651c90947bd\") " pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.588189 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf821c9-c4e7-44d0-a4ba-4651c90947bd-config-data\") pod \"heat-api-6c47fc8b76-csbxx\" (UID: \"1bf821c9-c4e7-44d0-a4ba-4651c90947bd\") " pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.588250 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccd48347-0524-47b2-a3b8-c38634e356a6-config-data-custom\") pod \"heat-cfnapi-54f9fdc8b-rl6qw\" (UID: \"ccd48347-0524-47b2-a3b8-c38634e356a6\") " pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.588271 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgdrb\" (UniqueName: 
\"kubernetes.io/projected/ccd48347-0524-47b2-a3b8-c38634e356a6-kube-api-access-cgdrb\") pod \"heat-cfnapi-54f9fdc8b-rl6qw\" (UID: \"ccd48347-0524-47b2-a3b8-c38634e356a6\") " pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.588323 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd48347-0524-47b2-a3b8-c38634e356a6-combined-ca-bundle\") pod \"heat-cfnapi-54f9fdc8b-rl6qw\" (UID: \"ccd48347-0524-47b2-a3b8-c38634e356a6\") " pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.595468 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd48347-0524-47b2-a3b8-c38634e356a6-combined-ca-bundle\") pod \"heat-cfnapi-54f9fdc8b-rl6qw\" (UID: \"ccd48347-0524-47b2-a3b8-c38634e356a6\") " pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.602242 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf821c9-c4e7-44d0-a4ba-4651c90947bd-config-data\") pod \"heat-api-6c47fc8b76-csbxx\" (UID: \"1bf821c9-c4e7-44d0-a4ba-4651c90947bd\") " pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.602607 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bf821c9-c4e7-44d0-a4ba-4651c90947bd-config-data-custom\") pod \"heat-api-6c47fc8b76-csbxx\" (UID: \"1bf821c9-c4e7-44d0-a4ba-4651c90947bd\") " pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.607034 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf821c9-c4e7-44d0-a4ba-4651c90947bd-combined-ca-bundle\") pod 
\"heat-api-6c47fc8b76-csbxx\" (UID: \"1bf821c9-c4e7-44d0-a4ba-4651c90947bd\") " pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.608431 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccd48347-0524-47b2-a3b8-c38634e356a6-config-data\") pod \"heat-cfnapi-54f9fdc8b-rl6qw\" (UID: \"ccd48347-0524-47b2-a3b8-c38634e356a6\") " pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.622028 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.622833 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccd48347-0524-47b2-a3b8-c38634e356a6-config-data-custom\") pod \"heat-cfnapi-54f9fdc8b-rl6qw\" (UID: \"ccd48347-0524-47b2-a3b8-c38634e356a6\") " pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.626120 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgdrb\" (UniqueName: \"kubernetes.io/projected/ccd48347-0524-47b2-a3b8-c38634e356a6-kube-api-access-cgdrb\") pod \"heat-cfnapi-54f9fdc8b-rl6qw\" (UID: \"ccd48347-0524-47b2-a3b8-c38634e356a6\") " pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.628526 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdrz9\" (UniqueName: \"kubernetes.io/projected/1bf821c9-c4e7-44d0-a4ba-4651c90947bd-kube-api-access-hdrz9\") pod \"heat-api-6c47fc8b76-csbxx\" (UID: \"1bf821c9-c4e7-44d0-a4ba-4651c90947bd\") " pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.750811 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:07 crc kubenswrapper[4813]: I0219 20:08:07.785280 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 19 20:08:08 crc kubenswrapper[4813]: I0219 20:08:08.241134 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-79b88f9bd9-88k94"] Feb 19 20:08:08 crc kubenswrapper[4813]: W0219 20:08:08.368702 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bf821c9_c4e7_44d0_a4ba_4651c90947bd.slice/crio-a72368f86d5e5941c41fad4e2fbd0af15a5bc370c3475b6b55e19785a585cad6 WatchSource:0}: Error finding container a72368f86d5e5941c41fad4e2fbd0af15a5bc370c3475b6b55e19785a585cad6: Status 404 returned error can't find the container with id a72368f86d5e5941c41fad4e2fbd0af15a5bc370c3475b6b55e19785a585cad6 Feb 19 20:08:08 crc kubenswrapper[4813]: I0219 20:08:08.369852 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c47fc8b76-csbxx"] Feb 19 20:08:08 crc kubenswrapper[4813]: I0219 20:08:08.391060 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-79b88f9bd9-88k94" event={"ID":"b37fece7-aabf-465c-b612-eeb01bf398cb","Type":"ContainerStarted","Data":"6ca62ca42fbaad5215e8019db76d3c769856b8ca1763b39deee8ce309cbb2cca"} Feb 19 20:08:08 crc kubenswrapper[4813]: I0219 20:08:08.392794 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c47fc8b76-csbxx" event={"ID":"1bf821c9-c4e7-44d0-a4ba-4651c90947bd","Type":"ContainerStarted","Data":"a72368f86d5e5941c41fad4e2fbd0af15a5bc370c3475b6b55e19785a585cad6"} Feb 19 20:08:08 crc kubenswrapper[4813]: I0219 20:08:08.452064 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-54f9fdc8b-rl6qw"] Feb 19 20:08:08 crc kubenswrapper[4813]: W0219 20:08:08.457435 4813 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccd48347_0524_47b2_a3b8_c38634e356a6.slice/crio-86981d0264c5aeb260f88689d57b34b9c09dadc1658f9bed6fa09100de617cb1 WatchSource:0}: Error finding container 86981d0264c5aeb260f88689d57b34b9c09dadc1658f9bed6fa09100de617cb1: Status 404 returned error can't find the container with id 86981d0264c5aeb260f88689d57b34b9c09dadc1658f9bed6fa09100de617cb1 Feb 19 20:08:09 crc kubenswrapper[4813]: I0219 20:08:09.410902 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-79b88f9bd9-88k94" event={"ID":"b37fece7-aabf-465c-b612-eeb01bf398cb","Type":"ContainerStarted","Data":"571c0766a0d691ae3b9170b8db37fa2daa36a78cbe001505303af31c17290fa3"} Feb 19 20:08:09 crc kubenswrapper[4813]: I0219 20:08:09.411201 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:09 crc kubenswrapper[4813]: I0219 20:08:09.416148 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" event={"ID":"ccd48347-0524-47b2-a3b8-c38634e356a6","Type":"ContainerStarted","Data":"86981d0264c5aeb260f88689d57b34b9c09dadc1658f9bed6fa09100de617cb1"} Feb 19 20:08:09 crc kubenswrapper[4813]: I0219 20:08:09.433396 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-79b88f9bd9-88k94" podStartSLOduration=2.433376018 podStartE2EDuration="2.433376018s" podCreationTimestamp="2026-02-19 20:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:08:09.423398699 +0000 UTC m=+5908.648839260" watchObservedRunningTime="2026-02-19 20:08:09.433376018 +0000 UTC m=+5908.658816559" Feb 19 20:08:09 crc kubenswrapper[4813]: I0219 20:08:09.841383 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/horizon-5f4b55c5d9-w5nw5" Feb 19 20:08:11 crc kubenswrapper[4813]: I0219 20:08:11.696435 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5f4b55c5d9-w5nw5" Feb 19 20:08:11 crc kubenswrapper[4813]: I0219 20:08:11.805825 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c79476fc-m49fs"] Feb 19 20:08:11 crc kubenswrapper[4813]: I0219 20:08:11.806276 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c79476fc-m49fs" podUID="e3f864db-c1f6-40b4-895c-347947a296e5" containerName="horizon-log" containerID="cri-o://45b664f6e38426c32916fbd79b60ef1e78ec6330ec23cc4714186bdf164fe2d0" gracePeriod=30 Feb 19 20:08:11 crc kubenswrapper[4813]: I0219 20:08:11.806681 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c79476fc-m49fs" podUID="e3f864db-c1f6-40b4-895c-347947a296e5" containerName="horizon" containerID="cri-o://f302b27142a4ceafba12c4275e6f35ba26d4b59ef39663db8027d22a25ce5ece" gracePeriod=30 Feb 19 20:08:12 crc kubenswrapper[4813]: I0219 20:08:12.444619 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" event={"ID":"ccd48347-0524-47b2-a3b8-c38634e356a6","Type":"ContainerStarted","Data":"1394171ad5c42dd8a110dd45dee2aa2115a66591bb16784d3a5df2db4f66610a"} Feb 19 20:08:12 crc kubenswrapper[4813]: I0219 20:08:12.444828 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:12 crc kubenswrapper[4813]: I0219 20:08:12.446945 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c47fc8b76-csbxx" event={"ID":"1bf821c9-c4e7-44d0-a4ba-4651c90947bd","Type":"ContainerStarted","Data":"65a885b7f7578ba8b98950ac34615089a1bf3a922132f2113f8d555c7bf6b355"} Feb 19 20:08:12 crc kubenswrapper[4813]: I0219 20:08:12.447104 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 19 20:08:12 crc kubenswrapper[4813]: I0219 20:08:12.468652 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" podStartSLOduration=2.681227026 podStartE2EDuration="5.46863022s" podCreationTimestamp="2026-02-19 20:08:07 +0000 UTC" firstStartedPulling="2026-02-19 20:08:08.460543108 +0000 UTC m=+5907.685983649" lastFinishedPulling="2026-02-19 20:08:11.247946302 +0000 UTC m=+5910.473386843" observedRunningTime="2026-02-19 20:08:12.4627933 +0000 UTC m=+5911.688233841" watchObservedRunningTime="2026-02-19 20:08:12.46863022 +0000 UTC m=+5911.694070751" Feb 19 20:08:12 crc kubenswrapper[4813]: I0219 20:08:12.485687 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6c47fc8b76-csbxx" podStartSLOduration=2.613838442 podStartE2EDuration="5.485673217s" podCreationTimestamp="2026-02-19 20:08:07 +0000 UTC" firstStartedPulling="2026-02-19 20:08:08.372219616 +0000 UTC m=+5907.597660167" lastFinishedPulling="2026-02-19 20:08:11.244054401 +0000 UTC m=+5910.469494942" observedRunningTime="2026-02-19 20:08:12.483019155 +0000 UTC m=+5911.708459706" watchObservedRunningTime="2026-02-19 20:08:12.485673217 +0000 UTC m=+5911.711113758" Feb 19 20:08:15 crc kubenswrapper[4813]: I0219 20:08:15.042539 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mwg5n"] Feb 19 20:08:15 crc kubenswrapper[4813]: I0219 20:08:15.051376 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mwg5n"] Feb 19 20:08:15 crc kubenswrapper[4813]: I0219 20:08:15.488669 4813 generic.go:334] "Generic (PLEG): container finished" podID="e3f864db-c1f6-40b4-895c-347947a296e5" containerID="f302b27142a4ceafba12c4275e6f35ba26d4b59ef39663db8027d22a25ce5ece" exitCode=0 Feb 19 20:08:15 crc kubenswrapper[4813]: I0219 20:08:15.509455 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="cf454eb7-811a-46da-b647-e9a292c11f70" path="/var/lib/kubelet/pods/cf454eb7-811a-46da-b647-e9a292c11f70/volumes" Feb 19 20:08:15 crc kubenswrapper[4813]: I0219 20:08:15.510531 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c79476fc-m49fs" event={"ID":"e3f864db-c1f6-40b4-895c-347947a296e5","Type":"ContainerDied","Data":"f302b27142a4ceafba12c4275e6f35ba26d4b59ef39663db8027d22a25ce5ece"} Feb 19 20:08:17 crc kubenswrapper[4813]: I0219 20:08:17.472499 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:08:17 crc kubenswrapper[4813]: E0219 20:08:17.473194 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:08:19 crc kubenswrapper[4813]: I0219 20:08:19.058412 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-54f9fdc8b-rl6qw" Feb 19 20:08:19 crc kubenswrapper[4813]: I0219 20:08:19.214500 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6c47fc8b76-csbxx" Feb 19 20:08:21 crc kubenswrapper[4813]: I0219 20:08:21.404774 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c79476fc-m49fs" podUID="e3f864db-c1f6-40b4-895c-347947a296e5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Feb 19 20:08:27 crc kubenswrapper[4813]: I0219 20:08:27.659498 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/heat-engine-79b88f9bd9-88k94" Feb 19 20:08:31 crc kubenswrapper[4813]: I0219 20:08:31.405400 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c79476fc-m49fs" podUID="e3f864db-c1f6-40b4-895c-347947a296e5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Feb 19 20:08:32 crc kubenswrapper[4813]: I0219 20:08:32.473264 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:08:32 crc kubenswrapper[4813]: E0219 20:08:32.473787 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:08:41 crc kubenswrapper[4813]: I0219 20:08:41.405249 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c79476fc-m49fs" podUID="e3f864db-c1f6-40b4-895c-347947a296e5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Feb 19 20:08:41 crc kubenswrapper[4813]: I0219 20:08:41.407148 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c79476fc-m49fs" Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.366922 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c79476fc-m49fs" Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.428834 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3f864db-c1f6-40b4-895c-347947a296e5-horizon-secret-key\") pod \"e3f864db-c1f6-40b4-895c-347947a296e5\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.428975 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3f864db-c1f6-40b4-895c-347947a296e5-scripts\") pod \"e3f864db-c1f6-40b4-895c-347947a296e5\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.429095 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3f864db-c1f6-40b4-895c-347947a296e5-logs\") pod \"e3f864db-c1f6-40b4-895c-347947a296e5\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.429142 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3f864db-c1f6-40b4-895c-347947a296e5-config-data\") pod \"e3f864db-c1f6-40b4-895c-347947a296e5\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.429204 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q76pf\" (UniqueName: \"kubernetes.io/projected/e3f864db-c1f6-40b4-895c-347947a296e5-kube-api-access-q76pf\") pod \"e3f864db-c1f6-40b4-895c-347947a296e5\" (UID: \"e3f864db-c1f6-40b4-895c-347947a296e5\") " Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.430088 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e3f864db-c1f6-40b4-895c-347947a296e5-logs" (OuterVolumeSpecName: "logs") pod "e3f864db-c1f6-40b4-895c-347947a296e5" (UID: "e3f864db-c1f6-40b4-895c-347947a296e5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.430355 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3f864db-c1f6-40b4-895c-347947a296e5-logs\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.437467 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f864db-c1f6-40b4-895c-347947a296e5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e3f864db-c1f6-40b4-895c-347947a296e5" (UID: "e3f864db-c1f6-40b4-895c-347947a296e5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.443452 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f864db-c1f6-40b4-895c-347947a296e5-kube-api-access-q76pf" (OuterVolumeSpecName: "kube-api-access-q76pf") pod "e3f864db-c1f6-40b4-895c-347947a296e5" (UID: "e3f864db-c1f6-40b4-895c-347947a296e5"). InnerVolumeSpecName "kube-api-access-q76pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.458786 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f864db-c1f6-40b4-895c-347947a296e5-scripts" (OuterVolumeSpecName: "scripts") pod "e3f864db-c1f6-40b4-895c-347947a296e5" (UID: "e3f864db-c1f6-40b4-895c-347947a296e5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.459690 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f864db-c1f6-40b4-895c-347947a296e5-config-data" (OuterVolumeSpecName: "config-data") pod "e3f864db-c1f6-40b4-895c-347947a296e5" (UID: "e3f864db-c1f6-40b4-895c-347947a296e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.532205 4813 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3f864db-c1f6-40b4-895c-347947a296e5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.532238 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3f864db-c1f6-40b4-895c-347947a296e5-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.532247 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3f864db-c1f6-40b4-895c-347947a296e5-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.532256 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q76pf\" (UniqueName: \"kubernetes.io/projected/e3f864db-c1f6-40b4-895c-347947a296e5-kube-api-access-q76pf\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.791820 4813 generic.go:334] "Generic (PLEG): container finished" podID="e3f864db-c1f6-40b4-895c-347947a296e5" containerID="45b664f6e38426c32916fbd79b60ef1e78ec6330ec23cc4714186bdf164fe2d0" exitCode=137 Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.791867 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c79476fc-m49fs" 
event={"ID":"e3f864db-c1f6-40b4-895c-347947a296e5","Type":"ContainerDied","Data":"45b664f6e38426c32916fbd79b60ef1e78ec6330ec23cc4714186bdf164fe2d0"} Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.791882 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c79476fc-m49fs" Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.791899 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c79476fc-m49fs" event={"ID":"e3f864db-c1f6-40b4-895c-347947a296e5","Type":"ContainerDied","Data":"fea652f7add761207c416a05d7772fd0ed81711c070af839783c22b639264dc8"} Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.791919 4813 scope.go:117] "RemoveContainer" containerID="f302b27142a4ceafba12c4275e6f35ba26d4b59ef39663db8027d22a25ce5ece" Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.836016 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c79476fc-m49fs"] Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.847277 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c79476fc-m49fs"] Feb 19 20:08:42 crc kubenswrapper[4813]: I0219 20:08:42.991107 4813 scope.go:117] "RemoveContainer" containerID="45b664f6e38426c32916fbd79b60ef1e78ec6330ec23cc4714186bdf164fe2d0" Feb 19 20:08:43 crc kubenswrapper[4813]: I0219 20:08:43.015342 4813 scope.go:117] "RemoveContainer" containerID="f302b27142a4ceafba12c4275e6f35ba26d4b59ef39663db8027d22a25ce5ece" Feb 19 20:08:43 crc kubenswrapper[4813]: E0219 20:08:43.016058 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f302b27142a4ceafba12c4275e6f35ba26d4b59ef39663db8027d22a25ce5ece\": container with ID starting with f302b27142a4ceafba12c4275e6f35ba26d4b59ef39663db8027d22a25ce5ece not found: ID does not exist" containerID="f302b27142a4ceafba12c4275e6f35ba26d4b59ef39663db8027d22a25ce5ece" Feb 19 20:08:43 crc 
kubenswrapper[4813]: I0219 20:08:43.016100 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f302b27142a4ceafba12c4275e6f35ba26d4b59ef39663db8027d22a25ce5ece"} err="failed to get container status \"f302b27142a4ceafba12c4275e6f35ba26d4b59ef39663db8027d22a25ce5ece\": rpc error: code = NotFound desc = could not find container \"f302b27142a4ceafba12c4275e6f35ba26d4b59ef39663db8027d22a25ce5ece\": container with ID starting with f302b27142a4ceafba12c4275e6f35ba26d4b59ef39663db8027d22a25ce5ece not found: ID does not exist" Feb 19 20:08:43 crc kubenswrapper[4813]: I0219 20:08:43.016127 4813 scope.go:117] "RemoveContainer" containerID="45b664f6e38426c32916fbd79b60ef1e78ec6330ec23cc4714186bdf164fe2d0" Feb 19 20:08:43 crc kubenswrapper[4813]: E0219 20:08:43.016463 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45b664f6e38426c32916fbd79b60ef1e78ec6330ec23cc4714186bdf164fe2d0\": container with ID starting with 45b664f6e38426c32916fbd79b60ef1e78ec6330ec23cc4714186bdf164fe2d0 not found: ID does not exist" containerID="45b664f6e38426c32916fbd79b60ef1e78ec6330ec23cc4714186bdf164fe2d0" Feb 19 20:08:43 crc kubenswrapper[4813]: I0219 20:08:43.016485 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45b664f6e38426c32916fbd79b60ef1e78ec6330ec23cc4714186bdf164fe2d0"} err="failed to get container status \"45b664f6e38426c32916fbd79b60ef1e78ec6330ec23cc4714186bdf164fe2d0\": rpc error: code = NotFound desc = could not find container \"45b664f6e38426c32916fbd79b60ef1e78ec6330ec23cc4714186bdf164fe2d0\": container with ID starting with 45b664f6e38426c32916fbd79b60ef1e78ec6330ec23cc4714186bdf164fe2d0 not found: ID does not exist" Feb 19 20:08:43 crc kubenswrapper[4813]: I0219 20:08:43.489135 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3f864db-c1f6-40b4-895c-347947a296e5" 
path="/var/lib/kubelet/pods/e3f864db-c1f6-40b4-895c-347947a296e5/volumes" Feb 19 20:08:45 crc kubenswrapper[4813]: I0219 20:08:45.043011 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-qv4mk"] Feb 19 20:08:45 crc kubenswrapper[4813]: I0219 20:08:45.052634 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1468-account-create-update-6jnz2"] Feb 19 20:08:45 crc kubenswrapper[4813]: I0219 20:08:45.063786 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-qv4mk"] Feb 19 20:08:45 crc kubenswrapper[4813]: I0219 20:08:45.075531 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1468-account-create-update-6jnz2"] Feb 19 20:08:45 crc kubenswrapper[4813]: I0219 20:08:45.472430 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:08:45 crc kubenswrapper[4813]: E0219 20:08:45.472765 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:08:45 crc kubenswrapper[4813]: I0219 20:08:45.485211 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d06c681-ae92-4c23-a05f-f08441382c05" path="/var/lib/kubelet/pods/1d06c681-ae92-4c23-a05f-f08441382c05/volumes" Feb 19 20:08:45 crc kubenswrapper[4813]: I0219 20:08:45.486205 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="530cfe42-e9cd-45ad-9c70-eee31f9522c7" path="/var/lib/kubelet/pods/530cfe42-e9cd-45ad-9c70-eee31f9522c7/volumes" Feb 19 20:08:50 crc kubenswrapper[4813]: I0219 20:08:50.035823 4813 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-9sskl"] Feb 19 20:08:50 crc kubenswrapper[4813]: I0219 20:08:50.046838 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-9sskl"] Feb 19 20:08:51 crc kubenswrapper[4813]: I0219 20:08:51.488091 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a528fe9c-0064-4ed6-87dd-770fa3c0ee52" path="/var/lib/kubelet/pods/a528fe9c-0064-4ed6-87dd-770fa3c0ee52/volumes" Feb 19 20:08:54 crc kubenswrapper[4813]: I0219 20:08:54.234648 4813 scope.go:117] "RemoveContainer" containerID="655c9a79eb09c901e13e250a0e1070d2a8ec06741ed6260ff23f0159ac3f44d5" Feb 19 20:08:54 crc kubenswrapper[4813]: I0219 20:08:54.293410 4813 scope.go:117] "RemoveContainer" containerID="80d2716957bd3a3e5266f935a2e9bdb5c6d31d6a011d772f8cb5a34601edfa31" Feb 19 20:08:54 crc kubenswrapper[4813]: I0219 20:08:54.324884 4813 scope.go:117] "RemoveContainer" containerID="238915b1b1ddd39617184ea0c77d41fe50c46d78a64dadb1ec22d93b7c373657" Feb 19 20:08:54 crc kubenswrapper[4813]: I0219 20:08:54.371709 4813 scope.go:117] "RemoveContainer" containerID="53dc4e6d06be044a40965932dcb917c881872778b82e93a3321443417b9286ef" Feb 19 20:08:54 crc kubenswrapper[4813]: I0219 20:08:54.431810 4813 scope.go:117] "RemoveContainer" containerID="14df61ba8457621e0e837cde6563626760750e0cf84cb28171113ee5e0c544bb" Feb 19 20:08:54 crc kubenswrapper[4813]: I0219 20:08:54.465183 4813 scope.go:117] "RemoveContainer" containerID="d8418ddd7540177ceeb51bc8843377bc3bea865a18e0a5de264002e3cde5af24" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.094507 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs"] Feb 19 20:08:58 crc kubenswrapper[4813]: E0219 20:08:58.095285 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f864db-c1f6-40b4-895c-347947a296e5" containerName="horizon-log" Feb 19 20:08:58 
crc kubenswrapper[4813]: I0219 20:08:58.095296 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f864db-c1f6-40b4-895c-347947a296e5" containerName="horizon-log" Feb 19 20:08:58 crc kubenswrapper[4813]: E0219 20:08:58.095326 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f864db-c1f6-40b4-895c-347947a296e5" containerName="horizon" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.095332 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f864db-c1f6-40b4-895c-347947a296e5" containerName="horizon" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.095494 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f864db-c1f6-40b4-895c-347947a296e5" containerName="horizon-log" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.095508 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f864db-c1f6-40b4-895c-347947a296e5" containerName="horizon" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.096879 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.100145 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.125930 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs"] Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.160328 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs\" (UID: \"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.160652 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kw7h\" (UniqueName: \"kubernetes.io/projected/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-kube-api-access-2kw7h\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs\" (UID: \"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.160779 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs\" (UID: \"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" Feb 19 20:08:58 crc kubenswrapper[4813]: 
I0219 20:08:58.263145 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kw7h\" (UniqueName: \"kubernetes.io/projected/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-kube-api-access-2kw7h\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs\" (UID: \"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.263204 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs\" (UID: \"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.263676 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs\" (UID: \"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.263745 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs\" (UID: \"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.264011 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs\" (UID: \"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.286122 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kw7h\" (UniqueName: \"kubernetes.io/projected/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-kube-api-access-2kw7h\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs\" (UID: \"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.426580 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.882236 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs"] Feb 19 20:08:58 crc kubenswrapper[4813]: I0219 20:08:58.966834 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" event={"ID":"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945","Type":"ContainerStarted","Data":"a08f9c4ec9aa6ba5b8cb2299b434925dc9f434fad7c354324b10bc6e39a6f0f2"} Feb 19 20:08:59 crc kubenswrapper[4813]: I0219 20:08:59.471663 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:08:59 crc kubenswrapper[4813]: E0219 20:08:59.472068 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:08:59 crc kubenswrapper[4813]: I0219 20:08:59.975934 4813 generic.go:334] "Generic (PLEG): container finished" podID="268e20d4-ae45-4b3c-a5bd-9c06c4ddd945" containerID="9952f99d5c88c3f500cc9442b6858df224aa727139c2235d082cc964c51dcc3f" exitCode=0 Feb 19 20:08:59 crc kubenswrapper[4813]: I0219 20:08:59.976176 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" event={"ID":"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945","Type":"ContainerDied","Data":"9952f99d5c88c3f500cc9442b6858df224aa727139c2235d082cc964c51dcc3f"} Feb 19 20:09:02 crc kubenswrapper[4813]: I0219 20:09:02.001764 4813 generic.go:334] "Generic (PLEG): container finished" podID="268e20d4-ae45-4b3c-a5bd-9c06c4ddd945" containerID="3a6f0b0af01adda8df3214f3c08bfba781a360dfbd3a64d28dd147682f0a7063" exitCode=0 Feb 19 20:09:02 crc kubenswrapper[4813]: I0219 20:09:02.001906 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" event={"ID":"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945","Type":"ContainerDied","Data":"3a6f0b0af01adda8df3214f3c08bfba781a360dfbd3a64d28dd147682f0a7063"} Feb 19 20:09:03 crc kubenswrapper[4813]: I0219 20:09:03.015030 4813 generic.go:334] "Generic (PLEG): container finished" podID="268e20d4-ae45-4b3c-a5bd-9c06c4ddd945" containerID="8eea9735e1bd05c65aba26fddae6798093b1088a272c5852d873496ee5ff40cf" exitCode=0 Feb 19 20:09:03 crc kubenswrapper[4813]: I0219 20:09:03.015113 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" 
event={"ID":"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945","Type":"ContainerDied","Data":"8eea9735e1bd05c65aba26fddae6798093b1088a272c5852d873496ee5ff40cf"} Feb 19 20:09:04 crc kubenswrapper[4813]: I0219 20:09:04.498723 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" Feb 19 20:09:04 crc kubenswrapper[4813]: I0219 20:09:04.607328 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-bundle\") pod \"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945\" (UID: \"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945\") " Feb 19 20:09:04 crc kubenswrapper[4813]: I0219 20:09:04.607484 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kw7h\" (UniqueName: \"kubernetes.io/projected/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-kube-api-access-2kw7h\") pod \"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945\" (UID: \"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945\") " Feb 19 20:09:04 crc kubenswrapper[4813]: I0219 20:09:04.607571 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-util\") pod \"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945\" (UID: \"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945\") " Feb 19 20:09:04 crc kubenswrapper[4813]: I0219 20:09:04.611313 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-bundle" (OuterVolumeSpecName: "bundle") pod "268e20d4-ae45-4b3c-a5bd-9c06c4ddd945" (UID: "268e20d4-ae45-4b3c-a5bd-9c06c4ddd945"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:09:04 crc kubenswrapper[4813]: I0219 20:09:04.615039 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-kube-api-access-2kw7h" (OuterVolumeSpecName: "kube-api-access-2kw7h") pod "268e20d4-ae45-4b3c-a5bd-9c06c4ddd945" (UID: "268e20d4-ae45-4b3c-a5bd-9c06c4ddd945"). InnerVolumeSpecName "kube-api-access-2kw7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:09:04 crc kubenswrapper[4813]: I0219 20:09:04.636629 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-util" (OuterVolumeSpecName: "util") pod "268e20d4-ae45-4b3c-a5bd-9c06c4ddd945" (UID: "268e20d4-ae45-4b3c-a5bd-9c06c4ddd945"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:09:04 crc kubenswrapper[4813]: I0219 20:09:04.710546 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kw7h\" (UniqueName: \"kubernetes.io/projected/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-kube-api-access-2kw7h\") on node \"crc\" DevicePath \"\"" Feb 19 20:09:04 crc kubenswrapper[4813]: I0219 20:09:04.710614 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-util\") on node \"crc\" DevicePath \"\"" Feb 19 20:09:04 crc kubenswrapper[4813]: I0219 20:09:04.710633 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/268e20d4-ae45-4b3c-a5bd-9c06c4ddd945-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:09:05 crc kubenswrapper[4813]: I0219 20:09:05.042590 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" 
event={"ID":"268e20d4-ae45-4b3c-a5bd-9c06c4ddd945","Type":"ContainerDied","Data":"a08f9c4ec9aa6ba5b8cb2299b434925dc9f434fad7c354324b10bc6e39a6f0f2"} Feb 19 20:09:05 crc kubenswrapper[4813]: I0219 20:09:05.043026 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a08f9c4ec9aa6ba5b8cb2299b434925dc9f434fad7c354324b10bc6e39a6f0f2" Feb 19 20:09:05 crc kubenswrapper[4813]: I0219 20:09:05.042634 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs" Feb 19 20:09:12 crc kubenswrapper[4813]: I0219 20:09:12.472303 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:09:12 crc kubenswrapper[4813]: E0219 20:09:12.473076 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.625539 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-llmlj"] Feb 19 20:09:16 crc kubenswrapper[4813]: E0219 20:09:16.626567 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268e20d4-ae45-4b3c-a5bd-9c06c4ddd945" containerName="extract" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.626593 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="268e20d4-ae45-4b3c-a5bd-9c06c4ddd945" containerName="extract" Feb 19 20:09:16 crc kubenswrapper[4813]: E0219 20:09:16.626615 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="268e20d4-ae45-4b3c-a5bd-9c06c4ddd945" containerName="util" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.626620 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="268e20d4-ae45-4b3c-a5bd-9c06c4ddd945" containerName="util" Feb 19 20:09:16 crc kubenswrapper[4813]: E0219 20:09:16.626631 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268e20d4-ae45-4b3c-a5bd-9c06c4ddd945" containerName="pull" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.626637 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="268e20d4-ae45-4b3c-a5bd-9c06c4ddd945" containerName="pull" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.626854 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="268e20d4-ae45-4b3c-a5bd-9c06c4ddd945" containerName="extract" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.627812 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-llmlj" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.629792 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.630290 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.630525 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-s6dqw" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.640831 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-llmlj"] Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.673779 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgdjw\" (UniqueName: 
\"kubernetes.io/projected/e0932101-fd0d-44ec-9ec6-a2801d356faa-kube-api-access-hgdjw\") pod \"obo-prometheus-operator-68bc856cb9-llmlj\" (UID: \"e0932101-fd0d-44ec-9ec6-a2801d356faa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-llmlj" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.736737 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-svjmq"] Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.738078 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-svjmq" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.740486 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-g44nz" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.742518 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.756306 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-9twz7"] Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.757647 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-9twz7" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.773858 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-svjmq"] Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.778043 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e8435c9c-9eb7-4ad0-bf77-8b6b34952c02-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bffd859c-svjmq\" (UID: \"e8435c9c-9eb7-4ad0-bf77-8b6b34952c02\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-svjmq" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.778081 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e8435c9c-9eb7-4ad0-bf77-8b6b34952c02-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bffd859c-svjmq\" (UID: \"e8435c9c-9eb7-4ad0-bf77-8b6b34952c02\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-svjmq" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.778195 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgdjw\" (UniqueName: \"kubernetes.io/projected/e0932101-fd0d-44ec-9ec6-a2801d356faa-kube-api-access-hgdjw\") pod \"obo-prometheus-operator-68bc856cb9-llmlj\" (UID: \"e0932101-fd0d-44ec-9ec6-a2801d356faa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-llmlj" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.813652 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgdjw\" (UniqueName: \"kubernetes.io/projected/e0932101-fd0d-44ec-9ec6-a2801d356faa-kube-api-access-hgdjw\") pod 
\"obo-prometheus-operator-68bc856cb9-llmlj\" (UID: \"e0932101-fd0d-44ec-9ec6-a2801d356faa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-llmlj" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.824018 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-9twz7"] Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.880246 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/07f692a3-7fb0-44fc-aa86-8574ccf76589-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bffd859c-9twz7\" (UID: \"07f692a3-7fb0-44fc-aa86-8574ccf76589\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-9twz7" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.880296 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e8435c9c-9eb7-4ad0-bf77-8b6b34952c02-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bffd859c-svjmq\" (UID: \"e8435c9c-9eb7-4ad0-bf77-8b6b34952c02\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-svjmq" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.880326 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e8435c9c-9eb7-4ad0-bf77-8b6b34952c02-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bffd859c-svjmq\" (UID: \"e8435c9c-9eb7-4ad0-bf77-8b6b34952c02\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-svjmq" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.880443 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/07f692a3-7fb0-44fc-aa86-8574ccf76589-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bffd859c-9twz7\" (UID: \"07f692a3-7fb0-44fc-aa86-8574ccf76589\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-9twz7" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.883790 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e8435c9c-9eb7-4ad0-bf77-8b6b34952c02-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bffd859c-svjmq\" (UID: \"e8435c9c-9eb7-4ad0-bf77-8b6b34952c02\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-svjmq" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.884166 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e8435c9c-9eb7-4ad0-bf77-8b6b34952c02-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bffd859c-svjmq\" (UID: \"e8435c9c-9eb7-4ad0-bf77-8b6b34952c02\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-svjmq" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.942990 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-fgkkm"] Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.944531 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-fgkkm" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.948032 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-llmlj" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.949007 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.949145 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-fgkkm"] Feb 19 20:09:16 crc kubenswrapper[4813]: I0219 20:09:16.951915 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-sn7lq" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.001231 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/07f692a3-7fb0-44fc-aa86-8574ccf76589-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bffd859c-9twz7\" (UID: \"07f692a3-7fb0-44fc-aa86-8574ccf76589\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-9twz7" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.001364 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/86eb09f2-e8b8-45e2-a97f-5d2797af20fc-observability-operator-tls\") pod \"observability-operator-59bdc8b94-fgkkm\" (UID: \"86eb09f2-e8b8-45e2-a97f-5d2797af20fc\") " pod="openshift-operators/observability-operator-59bdc8b94-fgkkm" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.001401 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/07f692a3-7fb0-44fc-aa86-8574ccf76589-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bffd859c-9twz7\" (UID: \"07f692a3-7fb0-44fc-aa86-8574ccf76589\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-9twz7" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.001456 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbxsn\" (UniqueName: \"kubernetes.io/projected/86eb09f2-e8b8-45e2-a97f-5d2797af20fc-kube-api-access-bbxsn\") pod \"observability-operator-59bdc8b94-fgkkm\" (UID: \"86eb09f2-e8b8-45e2-a97f-5d2797af20fc\") " pod="openshift-operators/observability-operator-59bdc8b94-fgkkm" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.005023 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/07f692a3-7fb0-44fc-aa86-8574ccf76589-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bffd859c-9twz7\" (UID: \"07f692a3-7fb0-44fc-aa86-8574ccf76589\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-9twz7" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.010204 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/07f692a3-7fb0-44fc-aa86-8574ccf76589-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bffd859c-9twz7\" (UID: \"07f692a3-7fb0-44fc-aa86-8574ccf76589\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-9twz7" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.060632 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-svjmq" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.088144 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-9twz7" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.093116 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pt49v"] Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.095138 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pt49v" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.097583 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-nqwxx" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.106868 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/86eb09f2-e8b8-45e2-a97f-5d2797af20fc-observability-operator-tls\") pod \"observability-operator-59bdc8b94-fgkkm\" (UID: \"86eb09f2-e8b8-45e2-a97f-5d2797af20fc\") " pod="openshift-operators/observability-operator-59bdc8b94-fgkkm" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.106981 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbxsn\" (UniqueName: \"kubernetes.io/projected/86eb09f2-e8b8-45e2-a97f-5d2797af20fc-kube-api-access-bbxsn\") pod \"observability-operator-59bdc8b94-fgkkm\" (UID: \"86eb09f2-e8b8-45e2-a97f-5d2797af20fc\") " pod="openshift-operators/observability-operator-59bdc8b94-fgkkm" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.111902 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pt49v"] Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.130609 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/86eb09f2-e8b8-45e2-a97f-5d2797af20fc-observability-operator-tls\") pod 
\"observability-operator-59bdc8b94-fgkkm\" (UID: \"86eb09f2-e8b8-45e2-a97f-5d2797af20fc\") " pod="openshift-operators/observability-operator-59bdc8b94-fgkkm" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.135316 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbxsn\" (UniqueName: \"kubernetes.io/projected/86eb09f2-e8b8-45e2-a97f-5d2797af20fc-kube-api-access-bbxsn\") pod \"observability-operator-59bdc8b94-fgkkm\" (UID: \"86eb09f2-e8b8-45e2-a97f-5d2797af20fc\") " pod="openshift-operators/observability-operator-59bdc8b94-fgkkm" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.209124 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/967b5a02-0a81-4b48-aa1a-983f7a923088-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pt49v\" (UID: \"967b5a02-0a81-4b48-aa1a-983f7a923088\") " pod="openshift-operators/perses-operator-5bf474d74f-pt49v" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.209239 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55d8j\" (UniqueName: \"kubernetes.io/projected/967b5a02-0a81-4b48-aa1a-983f7a923088-kube-api-access-55d8j\") pod \"perses-operator-5bf474d74f-pt49v\" (UID: \"967b5a02-0a81-4b48-aa1a-983f7a923088\") " pod="openshift-operators/perses-operator-5bf474d74f-pt49v" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.265316 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-fgkkm" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.311184 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/967b5a02-0a81-4b48-aa1a-983f7a923088-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pt49v\" (UID: \"967b5a02-0a81-4b48-aa1a-983f7a923088\") " pod="openshift-operators/perses-operator-5bf474d74f-pt49v" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.311284 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55d8j\" (UniqueName: \"kubernetes.io/projected/967b5a02-0a81-4b48-aa1a-983f7a923088-kube-api-access-55d8j\") pod \"perses-operator-5bf474d74f-pt49v\" (UID: \"967b5a02-0a81-4b48-aa1a-983f7a923088\") " pod="openshift-operators/perses-operator-5bf474d74f-pt49v" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.312836 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/967b5a02-0a81-4b48-aa1a-983f7a923088-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pt49v\" (UID: \"967b5a02-0a81-4b48-aa1a-983f7a923088\") " pod="openshift-operators/perses-operator-5bf474d74f-pt49v" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.346531 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55d8j\" (UniqueName: \"kubernetes.io/projected/967b5a02-0a81-4b48-aa1a-983f7a923088-kube-api-access-55d8j\") pod \"perses-operator-5bf474d74f-pt49v\" (UID: \"967b5a02-0a81-4b48-aa1a-983f7a923088\") " pod="openshift-operators/perses-operator-5bf474d74f-pt49v" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.485281 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pt49v" Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.496966 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-llmlj"] Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.632587 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-9twz7"] Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.884523 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-svjmq"] Feb 19 20:09:17 crc kubenswrapper[4813]: W0219 20:09:17.894111 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8435c9c_9eb7_4ad0_bf77_8b6b34952c02.slice/crio-9fce20a537976ba85d7242e21ad14db3f6a623382f4bc4f8c750dbc77f5233b7 WatchSource:0}: Error finding container 9fce20a537976ba85d7242e21ad14db3f6a623382f4bc4f8c750dbc77f5233b7: Status 404 returned error can't find the container with id 9fce20a537976ba85d7242e21ad14db3f6a623382f4bc4f8c750dbc77f5233b7 Feb 19 20:09:17 crc kubenswrapper[4813]: I0219 20:09:17.981832 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-fgkkm"] Feb 19 20:09:18 crc kubenswrapper[4813]: I0219 20:09:18.136529 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pt49v"] Feb 19 20:09:18 crc kubenswrapper[4813]: I0219 20:09:18.196038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-llmlj" event={"ID":"e0932101-fd0d-44ec-9ec6-a2801d356faa","Type":"ContainerStarted","Data":"10743999791fea71d531424eeda6f10c6fd4121c70922ce04aaa3a69498af783"} Feb 19 20:09:18 crc kubenswrapper[4813]: I0219 20:09:18.197437 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-9twz7" event={"ID":"07f692a3-7fb0-44fc-aa86-8574ccf76589","Type":"ContainerStarted","Data":"586d8aae98233375ce5625a9967f46f675acbcc0b8268cbf4910572aff932f8f"} Feb 19 20:09:18 crc kubenswrapper[4813]: I0219 20:09:18.198799 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-pt49v" event={"ID":"967b5a02-0a81-4b48-aa1a-983f7a923088","Type":"ContainerStarted","Data":"288f271844c8ad6049db2f9334e1a3e601387718eb846d65dd12b3ddb59a8b3d"} Feb 19 20:09:18 crc kubenswrapper[4813]: I0219 20:09:18.201463 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-svjmq" event={"ID":"e8435c9c-9eb7-4ad0-bf77-8b6b34952c02","Type":"ContainerStarted","Data":"9fce20a537976ba85d7242e21ad14db3f6a623382f4bc4f8c750dbc77f5233b7"} Feb 19 20:09:18 crc kubenswrapper[4813]: I0219 20:09:18.203019 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-fgkkm" event={"ID":"86eb09f2-e8b8-45e2-a97f-5d2797af20fc","Type":"ContainerStarted","Data":"be7f7ecf59f6b2cc8a0922df8b2f29918f756b100b32a226cdcefee8b34ea6de"} Feb 19 20:09:23 crc kubenswrapper[4813]: I0219 20:09:23.473388 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:09:23 crc kubenswrapper[4813]: E0219 20:09:23.474845 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:09:29 crc kubenswrapper[4813]: 
I0219 20:09:29.325036 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-llmlj" event={"ID":"e0932101-fd0d-44ec-9ec6-a2801d356faa","Type":"ContainerStarted","Data":"3dd12ba40289522821b6c9d1abbd297000394239299a7ff6b7835f642d7c4cab"} Feb 19 20:09:29 crc kubenswrapper[4813]: I0219 20:09:29.329805 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-pt49v" event={"ID":"967b5a02-0a81-4b48-aa1a-983f7a923088","Type":"ContainerStarted","Data":"d02ccbf575135833cbb2faaa39c35106b32a113833ac2f4665c7233866d3a982"} Feb 19 20:09:29 crc kubenswrapper[4813]: I0219 20:09:29.329879 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-pt49v" Feb 19 20:09:29 crc kubenswrapper[4813]: I0219 20:09:29.331961 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-fgkkm" event={"ID":"86eb09f2-e8b8-45e2-a97f-5d2797af20fc","Type":"ContainerStarted","Data":"bbc980adcd0725d19702c543752750caf1e5c1b43032c4853816b4e9e479ff8d"} Feb 19 20:09:29 crc kubenswrapper[4813]: I0219 20:09:29.332197 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-fgkkm" Feb 19 20:09:29 crc kubenswrapper[4813]: I0219 20:09:29.333305 4813 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-fgkkm container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.123:8081/healthz\": dial tcp 10.217.1.123:8081: connect: connection refused" start-of-body= Feb 19 20:09:29 crc kubenswrapper[4813]: I0219 20:09:29.333361 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-fgkkm" podUID="86eb09f2-e8b8-45e2-a97f-5d2797af20fc" containerName="operator" probeResult="failure" output="Get 
\"http://10.217.1.123:8081/healthz\": dial tcp 10.217.1.123:8081: connect: connection refused" Feb 19 20:09:29 crc kubenswrapper[4813]: I0219 20:09:29.354261 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-llmlj" podStartSLOduration=2.430792135 podStartE2EDuration="13.354229416s" podCreationTimestamp="2026-02-19 20:09:16 +0000 UTC" firstStartedPulling="2026-02-19 20:09:17.548229366 +0000 UTC m=+5976.773669907" lastFinishedPulling="2026-02-19 20:09:28.471666657 +0000 UTC m=+5987.697107188" observedRunningTime="2026-02-19 20:09:29.344437054 +0000 UTC m=+5988.569877615" watchObservedRunningTime="2026-02-19 20:09:29.354229416 +0000 UTC m=+5988.579669967" Feb 19 20:09:29 crc kubenswrapper[4813]: I0219 20:09:29.405487 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-fgkkm" podStartSLOduration=2.967453956 podStartE2EDuration="13.405472812s" podCreationTimestamp="2026-02-19 20:09:16 +0000 UTC" firstStartedPulling="2026-02-19 20:09:18.033238929 +0000 UTC m=+5977.258679470" lastFinishedPulling="2026-02-19 20:09:28.471257785 +0000 UTC m=+5987.696698326" observedRunningTime="2026-02-19 20:09:29.40510267 +0000 UTC m=+5988.630543211" watchObservedRunningTime="2026-02-19 20:09:29.405472812 +0000 UTC m=+5988.630913353" Feb 19 20:09:29 crc kubenswrapper[4813]: I0219 20:09:29.407988 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-pt49v" podStartSLOduration=2.092315788 podStartE2EDuration="12.407976149s" podCreationTimestamp="2026-02-19 20:09:17 +0000 UTC" firstStartedPulling="2026-02-19 20:09:18.155100378 +0000 UTC m=+5977.380540919" lastFinishedPulling="2026-02-19 20:09:28.470760739 +0000 UTC m=+5987.696201280" observedRunningTime="2026-02-19 20:09:29.372994146 +0000 UTC m=+5988.598434707" watchObservedRunningTime="2026-02-19 
20:09:29.407976149 +0000 UTC m=+5988.633416680" Feb 19 20:09:30 crc kubenswrapper[4813]: I0219 20:09:30.343690 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-9twz7" event={"ID":"07f692a3-7fb0-44fc-aa86-8574ccf76589","Type":"ContainerStarted","Data":"be8ab01c471fd80b1d419d0160204b56ba6f021909494523bc66b52182c19e14"} Feb 19 20:09:30 crc kubenswrapper[4813]: I0219 20:09:30.346771 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-svjmq" event={"ID":"e8435c9c-9eb7-4ad0-bf77-8b6b34952c02","Type":"ContainerStarted","Data":"365dea2781f2996a1dc01477939c15b312b59492dcba1c74a3472ee660a9f545"} Feb 19 20:09:30 crc kubenswrapper[4813]: I0219 20:09:30.349647 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-fgkkm" Feb 19 20:09:30 crc kubenswrapper[4813]: I0219 20:09:30.395550 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-9twz7" podStartSLOduration=3.5595952410000002 podStartE2EDuration="14.395521645s" podCreationTimestamp="2026-02-19 20:09:16 +0000 UTC" firstStartedPulling="2026-02-19 20:09:17.657860198 +0000 UTC m=+5976.883300739" lastFinishedPulling="2026-02-19 20:09:28.493786592 +0000 UTC m=+5987.719227143" observedRunningTime="2026-02-19 20:09:30.375059113 +0000 UTC m=+5989.600499704" watchObservedRunningTime="2026-02-19 20:09:30.395521645 +0000 UTC m=+5989.620962206" Feb 19 20:09:30 crc kubenswrapper[4813]: I0219 20:09:30.491273 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bffd859c-svjmq" podStartSLOduration=3.988915942 podStartE2EDuration="14.491253676s" podCreationTimestamp="2026-02-19 20:09:16 +0000 UTC" 
firstStartedPulling="2026-02-19 20:09:17.967203657 +0000 UTC m=+5977.192644198" lastFinishedPulling="2026-02-19 20:09:28.469541401 +0000 UTC m=+5987.694981932" observedRunningTime="2026-02-19 20:09:30.477696767 +0000 UTC m=+5989.703137328" watchObservedRunningTime="2026-02-19 20:09:30.491253676 +0000 UTC m=+5989.716694217" Feb 19 20:09:37 crc kubenswrapper[4813]: I0219 20:09:37.472617 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:09:37 crc kubenswrapper[4813]: E0219 20:09:37.474088 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:09:37 crc kubenswrapper[4813]: I0219 20:09:37.489936 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-pt49v" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.189548 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.190085 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="aa2c214f-33ef-4b76-b375-3eb37e6e17ad" containerName="openstackclient" containerID="cri-o://d5cc8ed4b6e4006265e8d5f6364d15ab59ab9a01349c37870c5838a289a2b517" gracePeriod=2 Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.246079 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.261878 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 
20:09:39 crc kubenswrapper[4813]: E0219 20:09:39.262392 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa2c214f-33ef-4b76-b375-3eb37e6e17ad" containerName="openstackclient" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.262410 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa2c214f-33ef-4b76-b375-3eb37e6e17ad" containerName="openstackclient" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.262619 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa2c214f-33ef-4b76-b375-3eb37e6e17ad" containerName="openstackclient" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.263327 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.275205 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.297264 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="aa2c214f-33ef-4b76-b375-3eb37e6e17ad" podUID="df26ed78-2f8d-41c4-971a-d826679ad985" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.416874 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.418358 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.417092 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgwdq\" (UniqueName: \"kubernetes.io/projected/df26ed78-2f8d-41c4-971a-d826679ad985-kube-api-access-cgwdq\") pod \"openstackclient\" (UID: \"df26ed78-2f8d-41c4-971a-d826679ad985\") " pod="openstack/openstackclient" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.418713 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/df26ed78-2f8d-41c4-971a-d826679ad985-openstack-config-secret\") pod \"openstackclient\" (UID: \"df26ed78-2f8d-41c4-971a-d826679ad985\") " pod="openstack/openstackclient" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.418840 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/df26ed78-2f8d-41c4-971a-d826679ad985-openstack-config\") pod \"openstackclient\" (UID: \"df26ed78-2f8d-41c4-971a-d826679ad985\") " pod="openstack/openstackclient" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.420784 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-srwkc" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.443919 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.521052 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/df26ed78-2f8d-41c4-971a-d826679ad985-openstack-config\") pod \"openstackclient\" (UID: \"df26ed78-2f8d-41c4-971a-d826679ad985\") " pod="openstack/openstackclient" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 
20:09:39.521171 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgwdq\" (UniqueName: \"kubernetes.io/projected/df26ed78-2f8d-41c4-971a-d826679ad985-kube-api-access-cgwdq\") pod \"openstackclient\" (UID: \"df26ed78-2f8d-41c4-971a-d826679ad985\") " pod="openstack/openstackclient" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.521194 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5zzk\" (UniqueName: \"kubernetes.io/projected/292cbffe-04a8-45aa-8ab0-f05526828ffd-kube-api-access-q5zzk\") pod \"kube-state-metrics-0\" (UID: \"292cbffe-04a8-45aa-8ab0-f05526828ffd\") " pod="openstack/kube-state-metrics-0" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.521249 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/df26ed78-2f8d-41c4-971a-d826679ad985-openstack-config-secret\") pod \"openstackclient\" (UID: \"df26ed78-2f8d-41c4-971a-d826679ad985\") " pod="openstack/openstackclient" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.522458 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/df26ed78-2f8d-41c4-971a-d826679ad985-openstack-config\") pod \"openstackclient\" (UID: \"df26ed78-2f8d-41c4-971a-d826679ad985\") " pod="openstack/openstackclient" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.527332 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/df26ed78-2f8d-41c4-971a-d826679ad985-openstack-config-secret\") pod \"openstackclient\" (UID: \"df26ed78-2f8d-41c4-971a-d826679ad985\") " pod="openstack/openstackclient" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.546708 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cgwdq\" (UniqueName: \"kubernetes.io/projected/df26ed78-2f8d-41c4-971a-d826679ad985-kube-api-access-cgwdq\") pod \"openstackclient\" (UID: \"df26ed78-2f8d-41c4-971a-d826679ad985\") " pod="openstack/openstackclient" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.589395 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.626179 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5zzk\" (UniqueName: \"kubernetes.io/projected/292cbffe-04a8-45aa-8ab0-f05526828ffd-kube-api-access-q5zzk\") pod \"kube-state-metrics-0\" (UID: \"292cbffe-04a8-45aa-8ab0-f05526828ffd\") " pod="openstack/kube-state-metrics-0" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.681740 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5zzk\" (UniqueName: \"kubernetes.io/projected/292cbffe-04a8-45aa-8ab0-f05526828ffd-kube-api-access-q5zzk\") pod \"kube-state-metrics-0\" (UID: \"292cbffe-04a8-45aa-8ab0-f05526828ffd\") " pod="openstack/kube-state-metrics-0" Feb 19 20:09:39 crc kubenswrapper[4813]: I0219 20:09:39.735796 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.147044 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.153395 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.157219 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.157374 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.157472 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.160592 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.160915 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-xgqww" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.166440 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.246203 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.246259 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc 
kubenswrapper[4813]: I0219 20:09:40.246301 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.246333 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ctww\" (UniqueName: \"kubernetes.io/projected/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-kube-api-access-5ctww\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.246362 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.246420 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.246451 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: 
\"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.348009 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.348311 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.348350 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.348389 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.348419 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ctww\" (UniqueName: \"kubernetes.io/projected/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-kube-api-access-5ctww\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " 
pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.348446 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.348526 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.353594 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.353725 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.356389 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " 
pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.357934 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.358261 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.358576 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.392737 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ctww\" (UniqueName: \"kubernetes.io/projected/dcaba31c-8ef3-42e4-9b85-090fa358bc1b-kube-api-access-5ctww\") pod \"alertmanager-metric-storage-0\" (UID: \"dcaba31c-8ef3-42e4-9b85-090fa358bc1b\") " pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.477807 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.480640 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.510976 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.750576 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.753442 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.759692 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.759908 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.760117 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-5f5lv" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.760213 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.760261 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.760404 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.760458 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.760585 4813 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.795182 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.858114 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/634fbb89-ec12-40f4-98fb-daddb92d6843-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.858164 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/634fbb89-ec12-40f4-98fb-daddb92d6843-config\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.858185 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/634fbb89-ec12-40f4-98fb-daddb92d6843-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.858208 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4fba3441-56f7-4168-accd-845b0736071c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fba3441-56f7-4168-accd-845b0736071c\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc 
kubenswrapper[4813]: I0219 20:09:40.858243 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/634fbb89-ec12-40f4-98fb-daddb92d6843-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.858269 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/634fbb89-ec12-40f4-98fb-daddb92d6843-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.858312 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz8ql\" (UniqueName: \"kubernetes.io/projected/634fbb89-ec12-40f4-98fb-daddb92d6843-kube-api-access-mz8ql\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.858343 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/634fbb89-ec12-40f4-98fb-daddb92d6843-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.858405 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/634fbb89-ec12-40f4-98fb-daddb92d6843-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.858435 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/634fbb89-ec12-40f4-98fb-daddb92d6843-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.963255 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/634fbb89-ec12-40f4-98fb-daddb92d6843-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.963304 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/634fbb89-ec12-40f4-98fb-daddb92d6843-config\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.963328 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/634fbb89-ec12-40f4-98fb-daddb92d6843-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.963353 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4fba3441-56f7-4168-accd-845b0736071c\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fba3441-56f7-4168-accd-845b0736071c\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.963404 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/634fbb89-ec12-40f4-98fb-daddb92d6843-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.963430 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/634fbb89-ec12-40f4-98fb-daddb92d6843-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.963476 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz8ql\" (UniqueName: \"kubernetes.io/projected/634fbb89-ec12-40f4-98fb-daddb92d6843-kube-api-access-mz8ql\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.963508 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/634fbb89-ec12-40f4-98fb-daddb92d6843-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.963540 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/634fbb89-ec12-40f4-98fb-daddb92d6843-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.963571 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/634fbb89-ec12-40f4-98fb-daddb92d6843-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.964759 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/634fbb89-ec12-40f4-98fb-daddb92d6843-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.965834 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/634fbb89-ec12-40f4-98fb-daddb92d6843-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.966030 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/634fbb89-ec12-40f4-98fb-daddb92d6843-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.968975 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/634fbb89-ec12-40f4-98fb-daddb92d6843-config\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.975551 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/634fbb89-ec12-40f4-98fb-daddb92d6843-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.989679 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/634fbb89-ec12-40f4-98fb-daddb92d6843-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.990218 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/634fbb89-ec12-40f4-98fb-daddb92d6843-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:40 crc kubenswrapper[4813]: I0219 20:09:40.990400 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz8ql\" (UniqueName: \"kubernetes.io/projected/634fbb89-ec12-40f4-98fb-daddb92d6843-kube-api-access-mz8ql\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.002184 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME 
capability not set. Skipping MountDevice... Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.002216 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4fba3441-56f7-4168-accd-845b0736071c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fba3441-56f7-4168-accd-845b0736071c\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b7c3f1c451c6bd2e7ba9d3b6f88706efd645bb852d08a513e34ba93f21218d5c/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.007447 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/634fbb89-ec12-40f4-98fb-daddb92d6843-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.112664 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4fba3441-56f7-4168-accd-845b0736071c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fba3441-56f7-4168-accd-845b0736071c\") pod \"prometheus-metric-storage-0\" (UID: \"634fbb89-ec12-40f4-98fb-daddb92d6843\") " pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.197266 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 19 20:09:41 crc kubenswrapper[4813]: W0219 20:09:41.216600 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcaba31c_8ef3_42e4_9b85_090fa358bc1b.slice/crio-002d247f0dc834c4723247efd1c357ec6155fd1c8d3d6ac313c63f801667be4e WatchSource:0}: Error finding container 
002d247f0dc834c4723247efd1c357ec6155fd1c8d3d6ac313c63f801667be4e: Status 404 returned error can't find the container with id 002d247f0dc834c4723247efd1c357ec6155fd1c8d3d6ac313c63f801667be4e Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.386427 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.571437 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"df26ed78-2f8d-41c4-971a-d826679ad985","Type":"ContainerStarted","Data":"8b6be6e3c4314328fa4cb07a527cd14d21b63d0533b8494d5c1518a4d645aa37"} Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.571472 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"df26ed78-2f8d-41c4-971a-d826679ad985","Type":"ContainerStarted","Data":"134f2c8ac7d6152c16214df42d28340c93f7d6ea3d2715b922ca5048d412dc7a"} Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.571482 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"292cbffe-04a8-45aa-8ab0-f05526828ffd","Type":"ContainerStarted","Data":"d97bbc7635efdb133d67e081044a9c5b7af84d5be6b74f14179081e8190ff246"} Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.571491 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"292cbffe-04a8-45aa-8ab0-f05526828ffd","Type":"ContainerStarted","Data":"3e01fc980ce59c2db8477429f8610fa9b636d2eb4014d07893df807effec4a4e"} Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.571598 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.578267 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" 
event={"ID":"dcaba31c-8ef3-42e4-9b85-090fa358bc1b","Type":"ContainerStarted","Data":"002d247f0dc834c4723247efd1c357ec6155fd1c8d3d6ac313c63f801667be4e"} Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.582604 4813 generic.go:334] "Generic (PLEG): container finished" podID="aa2c214f-33ef-4b76-b375-3eb37e6e17ad" containerID="d5cc8ed4b6e4006265e8d5f6364d15ab59ab9a01349c37870c5838a289a2b517" exitCode=137 Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.697043 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.697027849 podStartE2EDuration="2.697027849s" podCreationTimestamp="2026-02-19 20:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:09:41.691282992 +0000 UTC m=+6000.916723533" watchObservedRunningTime="2026-02-19 20:09:41.697027849 +0000 UTC m=+6000.922468390" Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.723379 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 20:09:41 crc kubenswrapper[4813]: I0219 20:09:41.738573 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.166129549 podStartE2EDuration="2.738555675s" podCreationTimestamp="2026-02-19 20:09:39 +0000 UTC" firstStartedPulling="2026-02-19 20:09:40.521741886 +0000 UTC m=+5999.747182427" lastFinishedPulling="2026-02-19 20:09:41.094168022 +0000 UTC m=+6000.319608553" observedRunningTime="2026-02-19 20:09:41.716610325 +0000 UTC m=+6000.942050866" watchObservedRunningTime="2026-02-19 20:09:41.738555675 +0000 UTC m=+6000.963996216" Feb 19 20:09:42 crc kubenswrapper[4813]: I0219 20:09:41.744651 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="aa2c214f-33ef-4b76-b375-3eb37e6e17ad" podUID="df26ed78-2f8d-41c4-971a-d826679ad985" Feb 19 20:09:42 crc kubenswrapper[4813]: I0219 20:09:41.824173 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-openstack-config\") pod \"aa2c214f-33ef-4b76-b375-3eb37e6e17ad\" (UID: \"aa2c214f-33ef-4b76-b375-3eb37e6e17ad\") " Feb 19 20:09:42 crc kubenswrapper[4813]: I0219 20:09:41.824223 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-openstack-config-secret\") pod \"aa2c214f-33ef-4b76-b375-3eb37e6e17ad\" (UID: \"aa2c214f-33ef-4b76-b375-3eb37e6e17ad\") " Feb 19 20:09:42 crc kubenswrapper[4813]: I0219 20:09:41.824247 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmlpv\" (UniqueName: \"kubernetes.io/projected/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-kube-api-access-xmlpv\") pod \"aa2c214f-33ef-4b76-b375-3eb37e6e17ad\" 
(UID: \"aa2c214f-33ef-4b76-b375-3eb37e6e17ad\") " Feb 19 20:09:42 crc kubenswrapper[4813]: I0219 20:09:41.838159 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-kube-api-access-xmlpv" (OuterVolumeSpecName: "kube-api-access-xmlpv") pod "aa2c214f-33ef-4b76-b375-3eb37e6e17ad" (UID: "aa2c214f-33ef-4b76-b375-3eb37e6e17ad"). InnerVolumeSpecName "kube-api-access-xmlpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:09:42 crc kubenswrapper[4813]: I0219 20:09:41.898565 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "aa2c214f-33ef-4b76-b375-3eb37e6e17ad" (UID: "aa2c214f-33ef-4b76-b375-3eb37e6e17ad"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:09:42 crc kubenswrapper[4813]: I0219 20:09:41.931235 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:09:42 crc kubenswrapper[4813]: I0219 20:09:41.931263 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmlpv\" (UniqueName: \"kubernetes.io/projected/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-kube-api-access-xmlpv\") on node \"crc\" DevicePath \"\"" Feb 19 20:09:42 crc kubenswrapper[4813]: I0219 20:09:41.951126 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "aa2c214f-33ef-4b76-b375-3eb37e6e17ad" (UID: "aa2c214f-33ef-4b76-b375-3eb37e6e17ad"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:09:42 crc kubenswrapper[4813]: I0219 20:09:42.032825 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/aa2c214f-33ef-4b76-b375-3eb37e6e17ad-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 20:09:42 crc kubenswrapper[4813]: I0219 20:09:42.611225 4813 scope.go:117] "RemoveContainer" containerID="d5cc8ed4b6e4006265e8d5f6364d15ab59ab9a01349c37870c5838a289a2b517" Feb 19 20:09:42 crc kubenswrapper[4813]: I0219 20:09:42.612894 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 20:09:42 crc kubenswrapper[4813]: I0219 20:09:42.617434 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="aa2c214f-33ef-4b76-b375-3eb37e6e17ad" podUID="df26ed78-2f8d-41c4-971a-d826679ad985" Feb 19 20:09:42 crc kubenswrapper[4813]: I0219 20:09:42.643636 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="aa2c214f-33ef-4b76-b375-3eb37e6e17ad" podUID="df26ed78-2f8d-41c4-971a-d826679ad985" Feb 19 20:09:42 crc kubenswrapper[4813]: I0219 20:09:42.936489 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 20:09:43 crc kubenswrapper[4813]: I0219 20:09:43.489279 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa2c214f-33ef-4b76-b375-3eb37e6e17ad" path="/var/lib/kubelet/pods/aa2c214f-33ef-4b76-b375-3eb37e6e17ad/volumes" Feb 19 20:09:43 crc kubenswrapper[4813]: I0219 20:09:43.619671 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"634fbb89-ec12-40f4-98fb-daddb92d6843","Type":"ContainerStarted","Data":"5964eeca688936c59d97804911bdd8d778fcbd25bca0e212511986be3935c2fa"} Feb 19 20:09:47 crc 
kubenswrapper[4813]: I0219 20:09:47.040394 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-zzr5k"] Feb 19 20:09:47 crc kubenswrapper[4813]: I0219 20:09:47.055574 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-fm8jf"] Feb 19 20:09:47 crc kubenswrapper[4813]: I0219 20:09:47.072265 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xg69l"] Feb 19 20:09:47 crc kubenswrapper[4813]: I0219 20:09:47.086482 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-zzr5k"] Feb 19 20:09:47 crc kubenswrapper[4813]: I0219 20:09:47.099607 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-fm8jf"] Feb 19 20:09:47 crc kubenswrapper[4813]: I0219 20:09:47.109377 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xg69l"] Feb 19 20:09:47 crc kubenswrapper[4813]: I0219 20:09:47.503249 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36727543-174a-48ce-b5cc-0abd04f85e4c" path="/var/lib/kubelet/pods/36727543-174a-48ce-b5cc-0abd04f85e4c/volumes" Feb 19 20:09:47 crc kubenswrapper[4813]: I0219 20:09:47.505510 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd7e145-bf0d-469b-880d-4d1086f799e2" path="/var/lib/kubelet/pods/5fd7e145-bf0d-469b-880d-4d1086f799e2/volumes" Feb 19 20:09:47 crc kubenswrapper[4813]: I0219 20:09:47.511984 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73ed1a04-38ca-473b-8d0d-6412fc7fc147" path="/var/lib/kubelet/pods/73ed1a04-38ca-473b-8d0d-6412fc7fc147/volumes" Feb 19 20:09:47 crc kubenswrapper[4813]: I0219 20:09:47.760686 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" 
event={"ID":"dcaba31c-8ef3-42e4-9b85-090fa358bc1b","Type":"ContainerStarted","Data":"1a89fc8936fa8739e8d658c20199e993accd2fc64d5a91e7eb602e2980bb3eee"} Feb 19 20:09:48 crc kubenswrapper[4813]: I0219 20:09:48.038269 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-22c4-account-create-update-9tzrv"] Feb 19 20:09:48 crc kubenswrapper[4813]: I0219 20:09:48.049358 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-88c2-account-create-update-687tm"] Feb 19 20:09:48 crc kubenswrapper[4813]: I0219 20:09:48.064536 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-10bf-account-create-update-zk5c2"] Feb 19 20:09:48 crc kubenswrapper[4813]: I0219 20:09:48.075705 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-22c4-account-create-update-9tzrv"] Feb 19 20:09:48 crc kubenswrapper[4813]: I0219 20:09:48.084865 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-88c2-account-create-update-687tm"] Feb 19 20:09:48 crc kubenswrapper[4813]: I0219 20:09:48.093399 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-10bf-account-create-update-zk5c2"] Feb 19 20:09:49 crc kubenswrapper[4813]: I0219 20:09:49.487398 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="034ceb6c-2c10-4059-8f04-007215596cf8" path="/var/lib/kubelet/pods/034ceb6c-2c10-4059-8f04-007215596cf8/volumes" Feb 19 20:09:49 crc kubenswrapper[4813]: I0219 20:09:49.488887 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0949f55f-f63e-4af5-804f-57500e6d83f2" path="/var/lib/kubelet/pods/0949f55f-f63e-4af5-804f-57500e6d83f2/volumes" Feb 19 20:09:49 crc kubenswrapper[4813]: I0219 20:09:49.489446 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="712b0a56-84d1-4ab3-a0e0-5237a748b43d" path="/var/lib/kubelet/pods/712b0a56-84d1-4ab3-a0e0-5237a748b43d/volumes" Feb 19 20:09:49 crc 
kubenswrapper[4813]: I0219 20:09:49.743780 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 20:09:49 crc kubenswrapper[4813]: I0219 20:09:49.786333 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"634fbb89-ec12-40f4-98fb-daddb92d6843","Type":"ContainerStarted","Data":"65399cdfaf7992cb41fdb66e24af04019b7f2f859d9dee27652c4e077055c916"} Feb 19 20:09:50 crc kubenswrapper[4813]: I0219 20:09:50.472287 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:09:50 crc kubenswrapper[4813]: E0219 20:09:50.472988 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:09:54 crc kubenswrapper[4813]: I0219 20:09:54.642658 4813 scope.go:117] "RemoveContainer" containerID="e061fbe17f425006faf09d648d10e5d3f36e0178e3f5ff42c0530d615b9ca178" Feb 19 20:09:54 crc kubenswrapper[4813]: I0219 20:09:54.690371 4813 scope.go:117] "RemoveContainer" containerID="25a0a85a23ae31c5a5c8f17018717a02e57010f24eb6d499a312e67fc6b0c7f6" Feb 19 20:09:54 crc kubenswrapper[4813]: I0219 20:09:54.766595 4813 scope.go:117] "RemoveContainer" containerID="c87bf07b391dc431d1211a9ebf4b34646f101df752289bc24e6149a04543dc9e" Feb 19 20:09:54 crc kubenswrapper[4813]: I0219 20:09:54.803246 4813 scope.go:117] "RemoveContainer" containerID="a116ef55d81e12fc8e54d567d1c226cb474cbf281bd724af9e2631fc072955f4" Feb 19 20:09:54 crc kubenswrapper[4813]: I0219 20:09:54.864669 4813 scope.go:117] "RemoveContainer" 
containerID="32144c334c41bfbc7c55454ea453f9e12f4f8822dcd1bd5c3d8861983d64a5a4" Feb 19 20:09:54 crc kubenswrapper[4813]: I0219 20:09:54.913029 4813 scope.go:117] "RemoveContainer" containerID="76471d8a318e103238b4b0bb913015c7a431100e43ff477f3721f5a22275ab58" Feb 19 20:09:55 crc kubenswrapper[4813]: I0219 20:09:55.887674 4813 generic.go:334] "Generic (PLEG): container finished" podID="dcaba31c-8ef3-42e4-9b85-090fa358bc1b" containerID="1a89fc8936fa8739e8d658c20199e993accd2fc64d5a91e7eb602e2980bb3eee" exitCode=0 Feb 19 20:09:55 crc kubenswrapper[4813]: I0219 20:09:55.887849 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"dcaba31c-8ef3-42e4-9b85-090fa358bc1b","Type":"ContainerDied","Data":"1a89fc8936fa8739e8d658c20199e993accd2fc64d5a91e7eb602e2980bb3eee"} Feb 19 20:09:56 crc kubenswrapper[4813]: I0219 20:09:56.898727 4813 generic.go:334] "Generic (PLEG): container finished" podID="634fbb89-ec12-40f4-98fb-daddb92d6843" containerID="65399cdfaf7992cb41fdb66e24af04019b7f2f859d9dee27652c4e077055c916" exitCode=0 Feb 19 20:09:56 crc kubenswrapper[4813]: I0219 20:09:56.898780 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"634fbb89-ec12-40f4-98fb-daddb92d6843","Type":"ContainerDied","Data":"65399cdfaf7992cb41fdb66e24af04019b7f2f859d9dee27652c4e077055c916"} Feb 19 20:09:57 crc kubenswrapper[4813]: I0219 20:09:57.032087 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wh5fw"] Feb 19 20:09:57 crc kubenswrapper[4813]: I0219 20:09:57.040772 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wh5fw"] Feb 19 20:09:57 crc kubenswrapper[4813]: I0219 20:09:57.493203 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12c13eaf-400f-4c9b-b8ae-b8c3ed93617f" path="/var/lib/kubelet/pods/12c13eaf-400f-4c9b-b8ae-b8c3ed93617f/volumes" Feb 19 20:09:59 
crc kubenswrapper[4813]: I0219 20:09:59.934881 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"dcaba31c-8ef3-42e4-9b85-090fa358bc1b","Type":"ContainerStarted","Data":"62d2003cd3c97df48df2c63002047b3092317801ccaca36389ff70601b724c1e"} Feb 19 20:10:03 crc kubenswrapper[4813]: I0219 20:10:03.471901 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:10:03 crc kubenswrapper[4813]: E0219 20:10:03.472899 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:10:03 crc kubenswrapper[4813]: I0219 20:10:03.978527 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"dcaba31c-8ef3-42e4-9b85-090fa358bc1b","Type":"ContainerStarted","Data":"5a0702464e16d94fb8ac3907d0aab6fb839e3692b289a61ff98913e0c12860df"} Feb 19 20:10:03 crc kubenswrapper[4813]: I0219 20:10:03.978888 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 19 20:10:03 crc kubenswrapper[4813]: I0219 20:10:03.980918 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 19 20:10:03 crc kubenswrapper[4813]: I0219 20:10:03.981744 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"634fbb89-ec12-40f4-98fb-daddb92d6843","Type":"ContainerStarted","Data":"8e25632cc61ea7090fb79db6818c676e3a725feba515ed1edee2cb364bde4545"} Feb 19 20:10:04 crc 
kubenswrapper[4813]: I0219 20:10:04.006737 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=6.176058033 podStartE2EDuration="24.006718257s" podCreationTimestamp="2026-02-19 20:09:40 +0000 UTC" firstStartedPulling="2026-02-19 20:09:41.219342424 +0000 UTC m=+6000.444782955" lastFinishedPulling="2026-02-19 20:09:59.050002638 +0000 UTC m=+6018.275443179" observedRunningTime="2026-02-19 20:10:03.999187664 +0000 UTC m=+6023.224628265" watchObservedRunningTime="2026-02-19 20:10:04.006718257 +0000 UTC m=+6023.232158798" Feb 19 20:10:08 crc kubenswrapper[4813]: I0219 20:10:08.023688 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"634fbb89-ec12-40f4-98fb-daddb92d6843","Type":"ContainerStarted","Data":"e4b6a1189d1a05e454524bcda7bcbf710b58491a2a4b38270e4016deab85b1a7"} Feb 19 20:10:11 crc kubenswrapper[4813]: I0219 20:10:11.061755 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"634fbb89-ec12-40f4-98fb-daddb92d6843","Type":"ContainerStarted","Data":"8bda4a1f8bd2619e394c570307f65d44ec97624cf52c7421ee14c17c6128e908"} Feb 19 20:10:11 crc kubenswrapper[4813]: I0219 20:10:11.113147 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=4.529857162 podStartE2EDuration="32.11311971s" podCreationTimestamp="2026-02-19 20:09:39 +0000 UTC" firstStartedPulling="2026-02-19 20:09:42.948282233 +0000 UTC m=+6002.173722774" lastFinishedPulling="2026-02-19 20:10:10.531544771 +0000 UTC m=+6029.756985322" observedRunningTime="2026-02-19 20:10:11.100467809 +0000 UTC m=+6030.325908390" watchObservedRunningTime="2026-02-19 20:10:11.11311971 +0000 UTC m=+6030.338560291" Feb 19 20:10:11 crc kubenswrapper[4813]: I0219 20:10:11.387749 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/prometheus-metric-storage-0" Feb 19 20:10:11 crc kubenswrapper[4813]: I0219 20:10:11.388182 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 20:10:11 crc kubenswrapper[4813]: I0219 20:10:11.393072 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 20:10:12 crc kubenswrapper[4813]: I0219 20:10:12.071341 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.361777 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.365376 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.369253 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.369509 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.372834 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.529276 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e963fa0b-314d-4f34-8624-a316e69590d2-run-httpd\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.529414 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.529470 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e963fa0b-314d-4f34-8624-a316e69590d2-log-httpd\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.529626 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.529673 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-scripts\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.529731 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-config-data\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.529794 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8cvg\" (UniqueName: \"kubernetes.io/projected/e963fa0b-314d-4f34-8624-a316e69590d2-kube-api-access-z8cvg\") pod \"ceilometer-0\" (UID: 
\"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.631914 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.632006 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-scripts\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.632057 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-config-data\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.632102 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8cvg\" (UniqueName: \"kubernetes.io/projected/e963fa0b-314d-4f34-8624-a316e69590d2-kube-api-access-z8cvg\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.632285 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e963fa0b-314d-4f34-8624-a316e69590d2-run-httpd\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.632334 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.632358 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e963fa0b-314d-4f34-8624-a316e69590d2-log-httpd\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.632927 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e963fa0b-314d-4f34-8624-a316e69590d2-log-httpd\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.633503 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e963fa0b-314d-4f34-8624-a316e69590d2-run-httpd\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.639841 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.640074 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 
20:10:13.640489 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-scripts\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.651176 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-config-data\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.652963 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8cvg\" (UniqueName: \"kubernetes.io/projected/e963fa0b-314d-4f34-8624-a316e69590d2-kube-api-access-z8cvg\") pod \"ceilometer-0\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " pod="openstack/ceilometer-0" Feb 19 20:10:13 crc kubenswrapper[4813]: I0219 20:10:13.685911 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:10:14 crc kubenswrapper[4813]: I0219 20:10:14.188728 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:10:15 crc kubenswrapper[4813]: I0219 20:10:15.068111 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cqkk6"] Feb 19 20:10:15 crc kubenswrapper[4813]: I0219 20:10:15.077367 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cqkk6"] Feb 19 20:10:15 crc kubenswrapper[4813]: I0219 20:10:15.096657 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e963fa0b-314d-4f34-8624-a316e69590d2","Type":"ContainerStarted","Data":"217b46aec38f7eade99a483c12e094164905443b216e9d9cc4df231cd3643ab2"} Feb 19 20:10:15 crc kubenswrapper[4813]: I0219 20:10:15.096701 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e963fa0b-314d-4f34-8624-a316e69590d2","Type":"ContainerStarted","Data":"6d678c622dd95b9a9ad0e683a0208aa3d31b91e466f78d379526ce11683828d6"} Feb 19 20:10:15 crc kubenswrapper[4813]: I0219 20:10:15.493457 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfa00bea-2bcd-4843-ba6a-2e2b4070928d" path="/var/lib/kubelet/pods/dfa00bea-2bcd-4843-ba6a-2e2b4070928d/volumes" Feb 19 20:10:17 crc kubenswrapper[4813]: I0219 20:10:17.040820 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-x6wmz"] Feb 19 20:10:17 crc kubenswrapper[4813]: I0219 20:10:17.055448 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-x6wmz"] Feb 19 20:10:17 crc kubenswrapper[4813]: I0219 20:10:17.121486 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e963fa0b-314d-4f34-8624-a316e69590d2","Type":"ContainerStarted","Data":"3890e31bfb2ecd6a10390e44252a0321a34ec8fd75c7318c5f2f2b89014a3c36"} Feb 19 20:10:17 crc kubenswrapper[4813]: I0219 20:10:17.471508 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:10:17 crc kubenswrapper[4813]: E0219 20:10:17.471855 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:10:17 crc kubenswrapper[4813]: I0219 20:10:17.483146 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffe08647-6b48-488d-86bd-84de78e5c05c" path="/var/lib/kubelet/pods/ffe08647-6b48-488d-86bd-84de78e5c05c/volumes" Feb 19 20:10:18 crc kubenswrapper[4813]: I0219 20:10:18.137611 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e963fa0b-314d-4f34-8624-a316e69590d2","Type":"ContainerStarted","Data":"fc1a81692bb8eafdce52f73d8f98014439da57df4f130666dabe8a782bcd8cdf"} Feb 19 20:10:19 crc kubenswrapper[4813]: I0219 20:10:19.152419 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e963fa0b-314d-4f34-8624-a316e69590d2","Type":"ContainerStarted","Data":"f996239da75428bb5f4a2c0b6d8fc70199f2372a2c9bbb5d96c19e9907033248"} Feb 19 20:10:19 crc kubenswrapper[4813]: I0219 20:10:19.152871 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 20:10:19 crc kubenswrapper[4813]: I0219 20:10:19.192167 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=1.642188221 podStartE2EDuration="6.192118047s" podCreationTimestamp="2026-02-19 20:10:13 +0000 UTC" firstStartedPulling="2026-02-19 20:10:14.19083996 +0000 UTC m=+6033.416280501" lastFinishedPulling="2026-02-19 20:10:18.740769786 +0000 UTC m=+6037.966210327" observedRunningTime="2026-02-19 20:10:19.177625998 +0000 UTC m=+6038.403066539" watchObservedRunningTime="2026-02-19 20:10:19.192118047 +0000 UTC m=+6038.417558598" Feb 19 20:10:24 crc kubenswrapper[4813]: I0219 20:10:24.878929 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-jpjbw"] Feb 19 20:10:24 crc kubenswrapper[4813]: I0219 20:10:24.880748 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-jpjbw" Feb 19 20:10:24 crc kubenswrapper[4813]: I0219 20:10:24.894684 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-jpjbw"] Feb 19 20:10:24 crc kubenswrapper[4813]: I0219 20:10:24.960384 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvct2\" (UniqueName: \"kubernetes.io/projected/78d21291-624b-49a4-a8ed-5b170b065fee-kube-api-access-xvct2\") pod \"aodh-db-create-jpjbw\" (UID: \"78d21291-624b-49a4-a8ed-5b170b065fee\") " pod="openstack/aodh-db-create-jpjbw" Feb 19 20:10:24 crc kubenswrapper[4813]: I0219 20:10:24.960543 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78d21291-624b-49a4-a8ed-5b170b065fee-operator-scripts\") pod \"aodh-db-create-jpjbw\" (UID: \"78d21291-624b-49a4-a8ed-5b170b065fee\") " pod="openstack/aodh-db-create-jpjbw" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.063043 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvct2\" (UniqueName: 
\"kubernetes.io/projected/78d21291-624b-49a4-a8ed-5b170b065fee-kube-api-access-xvct2\") pod \"aodh-db-create-jpjbw\" (UID: \"78d21291-624b-49a4-a8ed-5b170b065fee\") " pod="openstack/aodh-db-create-jpjbw" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.063194 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78d21291-624b-49a4-a8ed-5b170b065fee-operator-scripts\") pod \"aodh-db-create-jpjbw\" (UID: \"78d21291-624b-49a4-a8ed-5b170b065fee\") " pod="openstack/aodh-db-create-jpjbw" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.064421 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78d21291-624b-49a4-a8ed-5b170b065fee-operator-scripts\") pod \"aodh-db-create-jpjbw\" (UID: \"78d21291-624b-49a4-a8ed-5b170b065fee\") " pod="openstack/aodh-db-create-jpjbw" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.074042 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-abf8-account-create-update-7d97w"] Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.075600 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-abf8-account-create-update-7d97w" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.077873 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.084396 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvct2\" (UniqueName: \"kubernetes.io/projected/78d21291-624b-49a4-a8ed-5b170b065fee-kube-api-access-xvct2\") pod \"aodh-db-create-jpjbw\" (UID: \"78d21291-624b-49a4-a8ed-5b170b065fee\") " pod="openstack/aodh-db-create-jpjbw" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.103803 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-abf8-account-create-update-7d97w"] Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.165591 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac343c30-c8b9-4248-8cd4-3e23e2ef42cc-operator-scripts\") pod \"aodh-abf8-account-create-update-7d97w\" (UID: \"ac343c30-c8b9-4248-8cd4-3e23e2ef42cc\") " pod="openstack/aodh-abf8-account-create-update-7d97w" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.165720 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qvxz\" (UniqueName: \"kubernetes.io/projected/ac343c30-c8b9-4248-8cd4-3e23e2ef42cc-kube-api-access-8qvxz\") pod \"aodh-abf8-account-create-update-7d97w\" (UID: \"ac343c30-c8b9-4248-8cd4-3e23e2ef42cc\") " pod="openstack/aodh-abf8-account-create-update-7d97w" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.202760 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-525bh"] Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.204787 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.213546 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-jpjbw" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.227128 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-525bh"] Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.267739 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac343c30-c8b9-4248-8cd4-3e23e2ef42cc-operator-scripts\") pod \"aodh-abf8-account-create-update-7d97w\" (UID: \"ac343c30-c8b9-4248-8cd4-3e23e2ef42cc\") " pod="openstack/aodh-abf8-account-create-update-7d97w" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.268199 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37855c01-edf5-4a59-81e1-84efd79fb00f-catalog-content\") pod \"community-operators-525bh\" (UID: \"37855c01-edf5-4a59-81e1-84efd79fb00f\") " pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.268504 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac343c30-c8b9-4248-8cd4-3e23e2ef42cc-operator-scripts\") pod \"aodh-abf8-account-create-update-7d97w\" (UID: \"ac343c30-c8b9-4248-8cd4-3e23e2ef42cc\") " pod="openstack/aodh-abf8-account-create-update-7d97w" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.268631 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qvxz\" (UniqueName: \"kubernetes.io/projected/ac343c30-c8b9-4248-8cd4-3e23e2ef42cc-kube-api-access-8qvxz\") pod \"aodh-abf8-account-create-update-7d97w\" (UID: 
\"ac343c30-c8b9-4248-8cd4-3e23e2ef42cc\") " pod="openstack/aodh-abf8-account-create-update-7d97w" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.268746 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbxc9\" (UniqueName: \"kubernetes.io/projected/37855c01-edf5-4a59-81e1-84efd79fb00f-kube-api-access-lbxc9\") pod \"community-operators-525bh\" (UID: \"37855c01-edf5-4a59-81e1-84efd79fb00f\") " pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.268840 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37855c01-edf5-4a59-81e1-84efd79fb00f-utilities\") pod \"community-operators-525bh\" (UID: \"37855c01-edf5-4a59-81e1-84efd79fb00f\") " pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.289702 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qvxz\" (UniqueName: \"kubernetes.io/projected/ac343c30-c8b9-4248-8cd4-3e23e2ef42cc-kube-api-access-8qvxz\") pod \"aodh-abf8-account-create-update-7d97w\" (UID: \"ac343c30-c8b9-4248-8cd4-3e23e2ef42cc\") " pod="openstack/aodh-abf8-account-create-update-7d97w" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.375590 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbxc9\" (UniqueName: \"kubernetes.io/projected/37855c01-edf5-4a59-81e1-84efd79fb00f-kube-api-access-lbxc9\") pod \"community-operators-525bh\" (UID: \"37855c01-edf5-4a59-81e1-84efd79fb00f\") " pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.375652 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37855c01-edf5-4a59-81e1-84efd79fb00f-utilities\") 
pod \"community-operators-525bh\" (UID: \"37855c01-edf5-4a59-81e1-84efd79fb00f\") " pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.375907 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37855c01-edf5-4a59-81e1-84efd79fb00f-catalog-content\") pod \"community-operators-525bh\" (UID: \"37855c01-edf5-4a59-81e1-84efd79fb00f\") " pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.376383 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37855c01-edf5-4a59-81e1-84efd79fb00f-catalog-content\") pod \"community-operators-525bh\" (UID: \"37855c01-edf5-4a59-81e1-84efd79fb00f\") " pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.376846 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37855c01-edf5-4a59-81e1-84efd79fb00f-utilities\") pod \"community-operators-525bh\" (UID: \"37855c01-edf5-4a59-81e1-84efd79fb00f\") " pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.400504 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbxc9\" (UniqueName: \"kubernetes.io/projected/37855c01-edf5-4a59-81e1-84efd79fb00f-kube-api-access-lbxc9\") pod \"community-operators-525bh\" (UID: \"37855c01-edf5-4a59-81e1-84efd79fb00f\") " pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.430410 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-abf8-account-create-update-7d97w" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.525611 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:25 crc kubenswrapper[4813]: I0219 20:10:25.776946 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-jpjbw"] Feb 19 20:10:26 crc kubenswrapper[4813]: I0219 20:10:26.079335 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-abf8-account-create-update-7d97w"] Feb 19 20:10:26 crc kubenswrapper[4813]: W0219 20:10:26.081128 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac343c30_c8b9_4248_8cd4_3e23e2ef42cc.slice/crio-01741f57fe02557091e382fbbabe6c5cfdc0374beb899247570d62b037e7819a WatchSource:0}: Error finding container 01741f57fe02557091e382fbbabe6c5cfdc0374beb899247570d62b037e7819a: Status 404 returned error can't find the container with id 01741f57fe02557091e382fbbabe6c5cfdc0374beb899247570d62b037e7819a Feb 19 20:10:26 crc kubenswrapper[4813]: I0219 20:10:26.186648 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-525bh"] Feb 19 20:10:26 crc kubenswrapper[4813]: W0219 20:10:26.201818 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37855c01_edf5_4a59_81e1_84efd79fb00f.slice/crio-415f7d04c220c00e26ed262142c2db0639ff6d1895bba9bdba0b416a782d6773 WatchSource:0}: Error finding container 415f7d04c220c00e26ed262142c2db0639ff6d1895bba9bdba0b416a782d6773: Status 404 returned error can't find the container with id 415f7d04c220c00e26ed262142c2db0639ff6d1895bba9bdba0b416a782d6773 Feb 19 20:10:26 crc kubenswrapper[4813]: I0219 20:10:26.245596 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-abf8-account-create-update-7d97w" event={"ID":"ac343c30-c8b9-4248-8cd4-3e23e2ef42cc","Type":"ContainerStarted","Data":"01741f57fe02557091e382fbbabe6c5cfdc0374beb899247570d62b037e7819a"} Feb 19 20:10:26 crc kubenswrapper[4813]: I0219 20:10:26.249711 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-jpjbw" event={"ID":"78d21291-624b-49a4-a8ed-5b170b065fee","Type":"ContainerStarted","Data":"f30234f15a207e94e0ea1c9c0bff11e6aad363a88944d29e6eda4a36546d353d"} Feb 19 20:10:26 crc kubenswrapper[4813]: I0219 20:10:26.249748 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-jpjbw" event={"ID":"78d21291-624b-49a4-a8ed-5b170b065fee","Type":"ContainerStarted","Data":"9e17890b6e93d06d08ed06cb5fafa30c769639b45e485785a9427aae69e815b5"} Feb 19 20:10:26 crc kubenswrapper[4813]: I0219 20:10:26.254804 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-525bh" event={"ID":"37855c01-edf5-4a59-81e1-84efd79fb00f","Type":"ContainerStarted","Data":"415f7d04c220c00e26ed262142c2db0639ff6d1895bba9bdba0b416a782d6773"} Feb 19 20:10:26 crc kubenswrapper[4813]: I0219 20:10:26.271929 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-jpjbw" podStartSLOduration=2.271909507 podStartE2EDuration="2.271909507s" podCreationTimestamp="2026-02-19 20:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:10:26.264375644 +0000 UTC m=+6045.489816195" watchObservedRunningTime="2026-02-19 20:10:26.271909507 +0000 UTC m=+6045.497350048" Feb 19 20:10:27 crc kubenswrapper[4813]: I0219 20:10:27.288716 4813 generic.go:334] "Generic (PLEG): container finished" podID="37855c01-edf5-4a59-81e1-84efd79fb00f" containerID="81d7539eaeb2faa06c6db01b30c14ae2b5d65b8a684b8e915c02202ef36b9b67" exitCode=0 Feb 19 20:10:27 crc 
kubenswrapper[4813]: I0219 20:10:27.289554 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-525bh" event={"ID":"37855c01-edf5-4a59-81e1-84efd79fb00f","Type":"ContainerDied","Data":"81d7539eaeb2faa06c6db01b30c14ae2b5d65b8a684b8e915c02202ef36b9b67"} Feb 19 20:10:27 crc kubenswrapper[4813]: I0219 20:10:27.291985 4813 generic.go:334] "Generic (PLEG): container finished" podID="ac343c30-c8b9-4248-8cd4-3e23e2ef42cc" containerID="6de5575d7b79d989e8f4a0ea5a0f4b01d549a50b2dfc6fe16d9b697f0d483e80" exitCode=0 Feb 19 20:10:27 crc kubenswrapper[4813]: I0219 20:10:27.292052 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-abf8-account-create-update-7d97w" event={"ID":"ac343c30-c8b9-4248-8cd4-3e23e2ef42cc","Type":"ContainerDied","Data":"6de5575d7b79d989e8f4a0ea5a0f4b01d549a50b2dfc6fe16d9b697f0d483e80"} Feb 19 20:10:27 crc kubenswrapper[4813]: I0219 20:10:27.297560 4813 generic.go:334] "Generic (PLEG): container finished" podID="78d21291-624b-49a4-a8ed-5b170b065fee" containerID="f30234f15a207e94e0ea1c9c0bff11e6aad363a88944d29e6eda4a36546d353d" exitCode=0 Feb 19 20:10:27 crc kubenswrapper[4813]: I0219 20:10:27.297623 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-jpjbw" event={"ID":"78d21291-624b-49a4-a8ed-5b170b065fee","Type":"ContainerDied","Data":"f30234f15a207e94e0ea1c9c0bff11e6aad363a88944d29e6eda4a36546d353d"} Feb 19 20:10:28 crc kubenswrapper[4813]: I0219 20:10:28.775346 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-abf8-account-create-update-7d97w" Feb 19 20:10:28 crc kubenswrapper[4813]: I0219 20:10:28.781839 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-jpjbw" Feb 19 20:10:28 crc kubenswrapper[4813]: I0219 20:10:28.853928 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78d21291-624b-49a4-a8ed-5b170b065fee-operator-scripts\") pod \"78d21291-624b-49a4-a8ed-5b170b065fee\" (UID: \"78d21291-624b-49a4-a8ed-5b170b065fee\") " Feb 19 20:10:28 crc kubenswrapper[4813]: I0219 20:10:28.854227 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qvxz\" (UniqueName: \"kubernetes.io/projected/ac343c30-c8b9-4248-8cd4-3e23e2ef42cc-kube-api-access-8qvxz\") pod \"ac343c30-c8b9-4248-8cd4-3e23e2ef42cc\" (UID: \"ac343c30-c8b9-4248-8cd4-3e23e2ef42cc\") " Feb 19 20:10:28 crc kubenswrapper[4813]: I0219 20:10:28.854280 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvct2\" (UniqueName: \"kubernetes.io/projected/78d21291-624b-49a4-a8ed-5b170b065fee-kube-api-access-xvct2\") pod \"78d21291-624b-49a4-a8ed-5b170b065fee\" (UID: \"78d21291-624b-49a4-a8ed-5b170b065fee\") " Feb 19 20:10:28 crc kubenswrapper[4813]: I0219 20:10:28.854364 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac343c30-c8b9-4248-8cd4-3e23e2ef42cc-operator-scripts\") pod \"ac343c30-c8b9-4248-8cd4-3e23e2ef42cc\" (UID: \"ac343c30-c8b9-4248-8cd4-3e23e2ef42cc\") " Feb 19 20:10:28 crc kubenswrapper[4813]: I0219 20:10:28.855628 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac343c30-c8b9-4248-8cd4-3e23e2ef42cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac343c30-c8b9-4248-8cd4-3e23e2ef42cc" (UID: "ac343c30-c8b9-4248-8cd4-3e23e2ef42cc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:10:28 crc kubenswrapper[4813]: I0219 20:10:28.856089 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d21291-624b-49a4-a8ed-5b170b065fee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78d21291-624b-49a4-a8ed-5b170b065fee" (UID: "78d21291-624b-49a4-a8ed-5b170b065fee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:10:28 crc kubenswrapper[4813]: I0219 20:10:28.862198 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac343c30-c8b9-4248-8cd4-3e23e2ef42cc-kube-api-access-8qvxz" (OuterVolumeSpecName: "kube-api-access-8qvxz") pod "ac343c30-c8b9-4248-8cd4-3e23e2ef42cc" (UID: "ac343c30-c8b9-4248-8cd4-3e23e2ef42cc"). InnerVolumeSpecName "kube-api-access-8qvxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:10:28 crc kubenswrapper[4813]: I0219 20:10:28.863357 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d21291-624b-49a4-a8ed-5b170b065fee-kube-api-access-xvct2" (OuterVolumeSpecName: "kube-api-access-xvct2") pod "78d21291-624b-49a4-a8ed-5b170b065fee" (UID: "78d21291-624b-49a4-a8ed-5b170b065fee"). InnerVolumeSpecName "kube-api-access-xvct2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:10:28 crc kubenswrapper[4813]: I0219 20:10:28.956730 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78d21291-624b-49a4-a8ed-5b170b065fee-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:28 crc kubenswrapper[4813]: I0219 20:10:28.956766 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qvxz\" (UniqueName: \"kubernetes.io/projected/ac343c30-c8b9-4248-8cd4-3e23e2ef42cc-kube-api-access-8qvxz\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:28 crc kubenswrapper[4813]: I0219 20:10:28.956776 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvct2\" (UniqueName: \"kubernetes.io/projected/78d21291-624b-49a4-a8ed-5b170b065fee-kube-api-access-xvct2\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:28 crc kubenswrapper[4813]: I0219 20:10:28.956784 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac343c30-c8b9-4248-8cd4-3e23e2ef42cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:29 crc kubenswrapper[4813]: I0219 20:10:29.032795 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-287ns"] Feb 19 20:10:29 crc kubenswrapper[4813]: I0219 20:10:29.043744 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-287ns"] Feb 19 20:10:29 crc kubenswrapper[4813]: I0219 20:10:29.316675 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-abf8-account-create-update-7d97w" Feb 19 20:10:29 crc kubenswrapper[4813]: I0219 20:10:29.316915 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-abf8-account-create-update-7d97w" event={"ID":"ac343c30-c8b9-4248-8cd4-3e23e2ef42cc","Type":"ContainerDied","Data":"01741f57fe02557091e382fbbabe6c5cfdc0374beb899247570d62b037e7819a"} Feb 19 20:10:29 crc kubenswrapper[4813]: I0219 20:10:29.317329 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01741f57fe02557091e382fbbabe6c5cfdc0374beb899247570d62b037e7819a" Feb 19 20:10:29 crc kubenswrapper[4813]: I0219 20:10:29.318353 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-jpjbw" event={"ID":"78d21291-624b-49a4-a8ed-5b170b065fee","Type":"ContainerDied","Data":"9e17890b6e93d06d08ed06cb5fafa30c769639b45e485785a9427aae69e815b5"} Feb 19 20:10:29 crc kubenswrapper[4813]: I0219 20:10:29.318393 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e17890b6e93d06d08ed06cb5fafa30c769639b45e485785a9427aae69e815b5" Feb 19 20:10:29 crc kubenswrapper[4813]: I0219 20:10:29.318443 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-jpjbw" Feb 19 20:10:29 crc kubenswrapper[4813]: I0219 20:10:29.320589 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-525bh" event={"ID":"37855c01-edf5-4a59-81e1-84efd79fb00f","Type":"ContainerStarted","Data":"39dc7ad9f10e861799060f20aa27906770174863e719ab8c4c29020d318aab0a"} Feb 19 20:10:29 crc kubenswrapper[4813]: I0219 20:10:29.482596 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="488329c3-4802-41f2-afbf-66ff5389c1a2" path="/var/lib/kubelet/pods/488329c3-4802-41f2-afbf-66ff5389c1a2/volumes" Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.481759 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.975730 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-k2mns"] Feb 19 20:10:30 crc kubenswrapper[4813]: E0219 20:10:30.977320 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac343c30-c8b9-4248-8cd4-3e23e2ef42cc" containerName="mariadb-account-create-update" Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.977350 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac343c30-c8b9-4248-8cd4-3e23e2ef42cc" containerName="mariadb-account-create-update" Feb 19 20:10:30 crc kubenswrapper[4813]: E0219 20:10:30.977385 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d21291-624b-49a4-a8ed-5b170b065fee" containerName="mariadb-database-create" Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.977396 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d21291-624b-49a4-a8ed-5b170b065fee" containerName="mariadb-database-create" Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.977730 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d21291-624b-49a4-a8ed-5b170b065fee" 
containerName="mariadb-database-create" Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.977772 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac343c30-c8b9-4248-8cd4-3e23e2ef42cc" containerName="mariadb-account-create-update" Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.979288 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.981616 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.982087 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-7nnz5" Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.982501 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.982685 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.994629 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-k2mns"] Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.997877 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-combined-ca-bundle\") pod \"aodh-db-sync-k2mns\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.998144 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-config-data\") pod \"aodh-db-sync-k2mns\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " 
pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.998228 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-scripts\") pod \"aodh-db-sync-k2mns\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:30 crc kubenswrapper[4813]: I0219 20:10:30.998353 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwxrp\" (UniqueName: \"kubernetes.io/projected/576b8eec-382c-47e9-b746-10c527fb9e85-kube-api-access-dwxrp\") pod \"aodh-db-sync-k2mns\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:31 crc kubenswrapper[4813]: I0219 20:10:31.100712 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwxrp\" (UniqueName: \"kubernetes.io/projected/576b8eec-382c-47e9-b746-10c527fb9e85-kube-api-access-dwxrp\") pod \"aodh-db-sync-k2mns\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:31 crc kubenswrapper[4813]: I0219 20:10:31.100799 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-combined-ca-bundle\") pod \"aodh-db-sync-k2mns\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:31 crc kubenswrapper[4813]: I0219 20:10:31.100938 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-config-data\") pod \"aodh-db-sync-k2mns\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:31 crc kubenswrapper[4813]: I0219 20:10:31.101051 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-scripts\") pod \"aodh-db-sync-k2mns\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:31 crc kubenswrapper[4813]: I0219 20:10:31.106306 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-scripts\") pod \"aodh-db-sync-k2mns\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:31 crc kubenswrapper[4813]: I0219 20:10:31.106562 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-combined-ca-bundle\") pod \"aodh-db-sync-k2mns\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:31 crc kubenswrapper[4813]: I0219 20:10:31.107757 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-config-data\") pod \"aodh-db-sync-k2mns\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:31 crc kubenswrapper[4813]: I0219 20:10:31.122269 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwxrp\" (UniqueName: \"kubernetes.io/projected/576b8eec-382c-47e9-b746-10c527fb9e85-kube-api-access-dwxrp\") pod \"aodh-db-sync-k2mns\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:31 crc kubenswrapper[4813]: I0219 20:10:31.303878 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:31 crc kubenswrapper[4813]: I0219 20:10:31.341583 4813 generic.go:334] "Generic (PLEG): container finished" podID="37855c01-edf5-4a59-81e1-84efd79fb00f" containerID="39dc7ad9f10e861799060f20aa27906770174863e719ab8c4c29020d318aab0a" exitCode=0 Feb 19 20:10:31 crc kubenswrapper[4813]: I0219 20:10:31.341662 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-525bh" event={"ID":"37855c01-edf5-4a59-81e1-84efd79fb00f","Type":"ContainerDied","Data":"39dc7ad9f10e861799060f20aa27906770174863e719ab8c4c29020d318aab0a"} Feb 19 20:10:31 crc kubenswrapper[4813]: I0219 20:10:31.347404 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"b0d24274d72b671d1bb0829fa5b282a7b445e76453ddbdce75e0526585757520"} Feb 19 20:10:31 crc kubenswrapper[4813]: I0219 20:10:31.865945 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-k2mns"] Feb 19 20:10:32 crc kubenswrapper[4813]: I0219 20:10:32.359640 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-525bh" event={"ID":"37855c01-edf5-4a59-81e1-84efd79fb00f","Type":"ContainerStarted","Data":"f79b4441574c2854efff3c0006fa89f2d45112514fb93e4dedb6c1cf9054c79e"} Feb 19 20:10:32 crc kubenswrapper[4813]: I0219 20:10:32.360934 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-k2mns" event={"ID":"576b8eec-382c-47e9-b746-10c527fb9e85","Type":"ContainerStarted","Data":"bb67d992967347378fa38de6ac4dd0dc54cb6aa47d270ede975c411e9312abc5"} Feb 19 20:10:32 crc kubenswrapper[4813]: I0219 20:10:32.379267 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-525bh" podStartSLOduration=2.849213155 
podStartE2EDuration="7.379248837s" podCreationTimestamp="2026-02-19 20:10:25 +0000 UTC" firstStartedPulling="2026-02-19 20:10:27.29169145 +0000 UTC m=+6046.517131991" lastFinishedPulling="2026-02-19 20:10:31.821727132 +0000 UTC m=+6051.047167673" observedRunningTime="2026-02-19 20:10:32.378027869 +0000 UTC m=+6051.603468420" watchObservedRunningTime="2026-02-19 20:10:32.379248837 +0000 UTC m=+6051.604689378" Feb 19 20:10:35 crc kubenswrapper[4813]: I0219 20:10:35.526258 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:35 crc kubenswrapper[4813]: I0219 20:10:35.526844 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:36 crc kubenswrapper[4813]: I0219 20:10:36.587406 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-525bh" podUID="37855c01-edf5-4a59-81e1-84efd79fb00f" containerName="registry-server" probeResult="failure" output=< Feb 19 20:10:36 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Feb 19 20:10:36 crc kubenswrapper[4813]: > Feb 19 20:10:37 crc kubenswrapper[4813]: I0219 20:10:37.420203 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-k2mns" event={"ID":"576b8eec-382c-47e9-b746-10c527fb9e85","Type":"ContainerStarted","Data":"d3ec7383d15ad5c45513d4da634586cb70b8dc2e5cf5f3f81784371d1a6bdf49"} Feb 19 20:10:37 crc kubenswrapper[4813]: I0219 20:10:37.441710 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-k2mns" podStartSLOduration=3.111511536 podStartE2EDuration="7.441686807s" podCreationTimestamp="2026-02-19 20:10:30 +0000 UTC" firstStartedPulling="2026-02-19 20:10:31.873673888 +0000 UTC m=+6051.099114429" lastFinishedPulling="2026-02-19 20:10:36.203849159 +0000 UTC m=+6055.429289700" observedRunningTime="2026-02-19 
20:10:37.436764274 +0000 UTC m=+6056.662204845" watchObservedRunningTime="2026-02-19 20:10:37.441686807 +0000 UTC m=+6056.667127348" Feb 19 20:10:39 crc kubenswrapper[4813]: I0219 20:10:39.440335 4813 generic.go:334] "Generic (PLEG): container finished" podID="576b8eec-382c-47e9-b746-10c527fb9e85" containerID="d3ec7383d15ad5c45513d4da634586cb70b8dc2e5cf5f3f81784371d1a6bdf49" exitCode=0 Feb 19 20:10:39 crc kubenswrapper[4813]: I0219 20:10:39.440449 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-k2mns" event={"ID":"576b8eec-382c-47e9-b746-10c527fb9e85","Type":"ContainerDied","Data":"d3ec7383d15ad5c45513d4da634586cb70b8dc2e5cf5f3f81784371d1a6bdf49"} Feb 19 20:10:40 crc kubenswrapper[4813]: I0219 20:10:40.900847 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:40 crc kubenswrapper[4813]: I0219 20:10:40.931726 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-config-data\") pod \"576b8eec-382c-47e9-b746-10c527fb9e85\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " Feb 19 20:10:40 crc kubenswrapper[4813]: I0219 20:10:40.932176 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-scripts\") pod \"576b8eec-382c-47e9-b746-10c527fb9e85\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " Feb 19 20:10:40 crc kubenswrapper[4813]: I0219 20:10:40.932279 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwxrp\" (UniqueName: \"kubernetes.io/projected/576b8eec-382c-47e9-b746-10c527fb9e85-kube-api-access-dwxrp\") pod \"576b8eec-382c-47e9-b746-10c527fb9e85\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " Feb 19 20:10:40 crc kubenswrapper[4813]: I0219 20:10:40.932524 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-combined-ca-bundle\") pod \"576b8eec-382c-47e9-b746-10c527fb9e85\" (UID: \"576b8eec-382c-47e9-b746-10c527fb9e85\") " Feb 19 20:10:40 crc kubenswrapper[4813]: I0219 20:10:40.938480 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-scripts" (OuterVolumeSpecName: "scripts") pod "576b8eec-382c-47e9-b746-10c527fb9e85" (UID: "576b8eec-382c-47e9-b746-10c527fb9e85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:10:40 crc kubenswrapper[4813]: I0219 20:10:40.953335 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576b8eec-382c-47e9-b746-10c527fb9e85-kube-api-access-dwxrp" (OuterVolumeSpecName: "kube-api-access-dwxrp") pod "576b8eec-382c-47e9-b746-10c527fb9e85" (UID: "576b8eec-382c-47e9-b746-10c527fb9e85"). InnerVolumeSpecName "kube-api-access-dwxrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:10:40 crc kubenswrapper[4813]: I0219 20:10:40.981883 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-config-data" (OuterVolumeSpecName: "config-data") pod "576b8eec-382c-47e9-b746-10c527fb9e85" (UID: "576b8eec-382c-47e9-b746-10c527fb9e85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:10:40 crc kubenswrapper[4813]: I0219 20:10:40.988462 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "576b8eec-382c-47e9-b746-10c527fb9e85" (UID: "576b8eec-382c-47e9-b746-10c527fb9e85"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:10:41 crc kubenswrapper[4813]: I0219 20:10:41.036415 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:41 crc kubenswrapper[4813]: I0219 20:10:41.036444 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:41 crc kubenswrapper[4813]: I0219 20:10:41.036454 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwxrp\" (UniqueName: \"kubernetes.io/projected/576b8eec-382c-47e9-b746-10c527fb9e85-kube-api-access-dwxrp\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:41 crc kubenswrapper[4813]: I0219 20:10:41.036464 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/576b8eec-382c-47e9-b746-10c527fb9e85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:41 crc kubenswrapper[4813]: I0219 20:10:41.467987 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-k2mns" event={"ID":"576b8eec-382c-47e9-b746-10c527fb9e85","Type":"ContainerDied","Data":"bb67d992967347378fa38de6ac4dd0dc54cb6aa47d270ede975c411e9312abc5"} Feb 19 20:10:41 crc kubenswrapper[4813]: I0219 20:10:41.468032 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb67d992967347378fa38de6ac4dd0dc54cb6aa47d270ede975c411e9312abc5" Feb 19 20:10:41 crc kubenswrapper[4813]: I0219 20:10:41.468031 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-k2mns" Feb 19 20:10:43 crc kubenswrapper[4813]: I0219 20:10:43.698219 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 20:10:45 crc kubenswrapper[4813]: I0219 20:10:45.569972 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:45 crc kubenswrapper[4813]: I0219 20:10:45.616569 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:45 crc kubenswrapper[4813]: I0219 20:10:45.806418 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-525bh"] Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.066681 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 19 20:10:46 crc kubenswrapper[4813]: E0219 20:10:46.067103 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576b8eec-382c-47e9-b746-10c527fb9e85" containerName="aodh-db-sync" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.067119 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="576b8eec-382c-47e9-b746-10c527fb9e85" containerName="aodh-db-sync" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.067314 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="576b8eec-382c-47e9-b746-10c527fb9e85" containerName="aodh-db-sync" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.069315 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.071098 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.071525 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-7nnz5" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.072744 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.086039 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.165763 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469-combined-ca-bundle\") pod \"aodh-0\" (UID: \"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469\") " pod="openstack/aodh-0" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.165910 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469-scripts\") pod \"aodh-0\" (UID: \"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469\") " pod="openstack/aodh-0" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.165933 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s9hl\" (UniqueName: \"kubernetes.io/projected/bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469-kube-api-access-9s9hl\") pod \"aodh-0\" (UID: \"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469\") " pod="openstack/aodh-0" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.166031 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469-config-data\") pod \"aodh-0\" (UID: \"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469\") " pod="openstack/aodh-0" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.268021 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469-scripts\") pod \"aodh-0\" (UID: \"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469\") " pod="openstack/aodh-0" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.268055 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s9hl\" (UniqueName: \"kubernetes.io/projected/bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469-kube-api-access-9s9hl\") pod \"aodh-0\" (UID: \"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469\") " pod="openstack/aodh-0" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.268124 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469-config-data\") pod \"aodh-0\" (UID: \"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469\") " pod="openstack/aodh-0" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.268234 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469-combined-ca-bundle\") pod \"aodh-0\" (UID: \"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469\") " pod="openstack/aodh-0" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.275690 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469-combined-ca-bundle\") pod \"aodh-0\" (UID: \"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469\") " pod="openstack/aodh-0" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.275864 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469-scripts\") pod \"aodh-0\" (UID: \"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469\") " pod="openstack/aodh-0" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.277137 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469-config-data\") pod \"aodh-0\" (UID: \"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469\") " pod="openstack/aodh-0" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.287298 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s9hl\" (UniqueName: \"kubernetes.io/projected/bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469-kube-api-access-9s9hl\") pod \"aodh-0\" (UID: \"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469\") " pod="openstack/aodh-0" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.393706 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 19 20:10:46 crc kubenswrapper[4813]: I0219 20:10:46.978006 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 20:10:47 crc kubenswrapper[4813]: I0219 20:10:47.546210 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469","Type":"ContainerStarted","Data":"828632a036072af77382a16f3f32199d359c8640c9bc6c4bbe8eef09aa1af1b9"} Feb 19 20:10:47 crc kubenswrapper[4813]: I0219 20:10:47.546394 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-525bh" podUID="37855c01-edf5-4a59-81e1-84efd79fb00f" containerName="registry-server" containerID="cri-o://f79b4441574c2854efff3c0006fa89f2d45112514fb93e4dedb6c1cf9054c79e" gracePeriod=2 Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.035619 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.119427 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbxc9\" (UniqueName: \"kubernetes.io/projected/37855c01-edf5-4a59-81e1-84efd79fb00f-kube-api-access-lbxc9\") pod \"37855c01-edf5-4a59-81e1-84efd79fb00f\" (UID: \"37855c01-edf5-4a59-81e1-84efd79fb00f\") " Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.119736 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37855c01-edf5-4a59-81e1-84efd79fb00f-utilities\") pod \"37855c01-edf5-4a59-81e1-84efd79fb00f\" (UID: \"37855c01-edf5-4a59-81e1-84efd79fb00f\") " Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.119860 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37855c01-edf5-4a59-81e1-84efd79fb00f-catalog-content\") pod \"37855c01-edf5-4a59-81e1-84efd79fb00f\" (UID: \"37855c01-edf5-4a59-81e1-84efd79fb00f\") " Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.120721 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37855c01-edf5-4a59-81e1-84efd79fb00f-utilities" (OuterVolumeSpecName: "utilities") pod "37855c01-edf5-4a59-81e1-84efd79fb00f" (UID: "37855c01-edf5-4a59-81e1-84efd79fb00f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.125244 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37855c01-edf5-4a59-81e1-84efd79fb00f-kube-api-access-lbxc9" (OuterVolumeSpecName: "kube-api-access-lbxc9") pod "37855c01-edf5-4a59-81e1-84efd79fb00f" (UID: "37855c01-edf5-4a59-81e1-84efd79fb00f"). InnerVolumeSpecName "kube-api-access-lbxc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.166338 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37855c01-edf5-4a59-81e1-84efd79fb00f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37855c01-edf5-4a59-81e1-84efd79fb00f" (UID: "37855c01-edf5-4a59-81e1-84efd79fb00f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.222513 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37855c01-edf5-4a59-81e1-84efd79fb00f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.222570 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbxc9\" (UniqueName: \"kubernetes.io/projected/37855c01-edf5-4a59-81e1-84efd79fb00f-kube-api-access-lbxc9\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.222587 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37855c01-edf5-4a59-81e1-84efd79fb00f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.559425 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469","Type":"ContainerStarted","Data":"2c1f29dc836ae17b1adc76e4e31488b60a03c1ad6c5d2cb88abd511707511efb"} Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.562680 4813 generic.go:334] "Generic (PLEG): container finished" podID="37855c01-edf5-4a59-81e1-84efd79fb00f" containerID="f79b4441574c2854efff3c0006fa89f2d45112514fb93e4dedb6c1cf9054c79e" exitCode=0 Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.562720 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-525bh" event={"ID":"37855c01-edf5-4a59-81e1-84efd79fb00f","Type":"ContainerDied","Data":"f79b4441574c2854efff3c0006fa89f2d45112514fb93e4dedb6c1cf9054c79e"} Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.562771 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-525bh" event={"ID":"37855c01-edf5-4a59-81e1-84efd79fb00f","Type":"ContainerDied","Data":"415f7d04c220c00e26ed262142c2db0639ff6d1895bba9bdba0b416a782d6773"} Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.562783 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-525bh" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.562809 4813 scope.go:117] "RemoveContainer" containerID="f79b4441574c2854efff3c0006fa89f2d45112514fb93e4dedb6c1cf9054c79e" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.611513 4813 scope.go:117] "RemoveContainer" containerID="39dc7ad9f10e861799060f20aa27906770174863e719ab8c4c29020d318aab0a" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.611895 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.612178 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="ceilometer-central-agent" containerID="cri-o://217b46aec38f7eade99a483c12e094164905443b216e9d9cc4df231cd3643ab2" gracePeriod=30 Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.612429 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="proxy-httpd" containerID="cri-o://f996239da75428bb5f4a2c0b6d8fc70199f2372a2c9bbb5d96c19e9907033248" gracePeriod=30 Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.612484 4813 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="ceilometer-notification-agent" containerID="cri-o://3890e31bfb2ecd6a10390e44252a0321a34ec8fd75c7318c5f2f2b89014a3c36" gracePeriod=30 Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.612521 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="sg-core" containerID="cri-o://fc1a81692bb8eafdce52f73d8f98014439da57df4f130666dabe8a782bcd8cdf" gracePeriod=30 Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.624268 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-525bh"] Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.644621 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-525bh"] Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.647878 4813 scope.go:117] "RemoveContainer" containerID="81d7539eaeb2faa06c6db01b30c14ae2b5d65b8a684b8e915c02202ef36b9b67" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.965186 4813 scope.go:117] "RemoveContainer" containerID="f79b4441574c2854efff3c0006fa89f2d45112514fb93e4dedb6c1cf9054c79e" Feb 19 20:10:48 crc kubenswrapper[4813]: E0219 20:10:48.965826 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f79b4441574c2854efff3c0006fa89f2d45112514fb93e4dedb6c1cf9054c79e\": container with ID starting with f79b4441574c2854efff3c0006fa89f2d45112514fb93e4dedb6c1cf9054c79e not found: ID does not exist" containerID="f79b4441574c2854efff3c0006fa89f2d45112514fb93e4dedb6c1cf9054c79e" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.965862 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f79b4441574c2854efff3c0006fa89f2d45112514fb93e4dedb6c1cf9054c79e"} err="failed to get container status \"f79b4441574c2854efff3c0006fa89f2d45112514fb93e4dedb6c1cf9054c79e\": rpc error: code = NotFound desc = could not find container \"f79b4441574c2854efff3c0006fa89f2d45112514fb93e4dedb6c1cf9054c79e\": container with ID starting with f79b4441574c2854efff3c0006fa89f2d45112514fb93e4dedb6c1cf9054c79e not found: ID does not exist" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.965886 4813 scope.go:117] "RemoveContainer" containerID="39dc7ad9f10e861799060f20aa27906770174863e719ab8c4c29020d318aab0a" Feb 19 20:10:48 crc kubenswrapper[4813]: E0219 20:10:48.966335 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39dc7ad9f10e861799060f20aa27906770174863e719ab8c4c29020d318aab0a\": container with ID starting with 39dc7ad9f10e861799060f20aa27906770174863e719ab8c4c29020d318aab0a not found: ID does not exist" containerID="39dc7ad9f10e861799060f20aa27906770174863e719ab8c4c29020d318aab0a" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.966378 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39dc7ad9f10e861799060f20aa27906770174863e719ab8c4c29020d318aab0a"} err="failed to get container status \"39dc7ad9f10e861799060f20aa27906770174863e719ab8c4c29020d318aab0a\": rpc error: code = NotFound desc = could not find container \"39dc7ad9f10e861799060f20aa27906770174863e719ab8c4c29020d318aab0a\": container with ID starting with 39dc7ad9f10e861799060f20aa27906770174863e719ab8c4c29020d318aab0a not found: ID does not exist" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.966408 4813 scope.go:117] "RemoveContainer" containerID="81d7539eaeb2faa06c6db01b30c14ae2b5d65b8a684b8e915c02202ef36b9b67" Feb 19 20:10:48 crc kubenswrapper[4813]: E0219 20:10:48.966748 4813 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"81d7539eaeb2faa06c6db01b30c14ae2b5d65b8a684b8e915c02202ef36b9b67\": container with ID starting with 81d7539eaeb2faa06c6db01b30c14ae2b5d65b8a684b8e915c02202ef36b9b67 not found: ID does not exist" containerID="81d7539eaeb2faa06c6db01b30c14ae2b5d65b8a684b8e915c02202ef36b9b67" Feb 19 20:10:48 crc kubenswrapper[4813]: I0219 20:10:48.966793 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81d7539eaeb2faa06c6db01b30c14ae2b5d65b8a684b8e915c02202ef36b9b67"} err="failed to get container status \"81d7539eaeb2faa06c6db01b30c14ae2b5d65b8a684b8e915c02202ef36b9b67\": rpc error: code = NotFound desc = could not find container \"81d7539eaeb2faa06c6db01b30c14ae2b5d65b8a684b8e915c02202ef36b9b67\": container with ID starting with 81d7539eaeb2faa06c6db01b30c14ae2b5d65b8a684b8e915c02202ef36b9b67 not found: ID does not exist" Feb 19 20:10:49 crc kubenswrapper[4813]: I0219 20:10:49.482788 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37855c01-edf5-4a59-81e1-84efd79fb00f" path="/var/lib/kubelet/pods/37855c01-edf5-4a59-81e1-84efd79fb00f/volumes" Feb 19 20:10:49 crc kubenswrapper[4813]: I0219 20:10:49.581842 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469","Type":"ContainerStarted","Data":"d9b2cdfa3dad3a075429edf7c28b33537d8ecedfaf2a1cd939ba43a65b696544"} Feb 19 20:10:49 crc kubenswrapper[4813]: I0219 20:10:49.586076 4813 generic.go:334] "Generic (PLEG): container finished" podID="e963fa0b-314d-4f34-8624-a316e69590d2" containerID="f996239da75428bb5f4a2c0b6d8fc70199f2372a2c9bbb5d96c19e9907033248" exitCode=0 Feb 19 20:10:49 crc kubenswrapper[4813]: I0219 20:10:49.586115 4813 generic.go:334] "Generic (PLEG): container finished" podID="e963fa0b-314d-4f34-8624-a316e69590d2" containerID="fc1a81692bb8eafdce52f73d8f98014439da57df4f130666dabe8a782bcd8cdf" exitCode=2 Feb 
19 20:10:49 crc kubenswrapper[4813]: I0219 20:10:49.586125 4813 generic.go:334] "Generic (PLEG): container finished" podID="e963fa0b-314d-4f34-8624-a316e69590d2" containerID="217b46aec38f7eade99a483c12e094164905443b216e9d9cc4df231cd3643ab2" exitCode=0 Feb 19 20:10:49 crc kubenswrapper[4813]: I0219 20:10:49.586160 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e963fa0b-314d-4f34-8624-a316e69590d2","Type":"ContainerDied","Data":"f996239da75428bb5f4a2c0b6d8fc70199f2372a2c9bbb5d96c19e9907033248"} Feb 19 20:10:49 crc kubenswrapper[4813]: I0219 20:10:49.586195 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e963fa0b-314d-4f34-8624-a316e69590d2","Type":"ContainerDied","Data":"fc1a81692bb8eafdce52f73d8f98014439da57df4f130666dabe8a782bcd8cdf"} Feb 19 20:10:49 crc kubenswrapper[4813]: I0219 20:10:49.586208 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e963fa0b-314d-4f34-8624-a316e69590d2","Type":"ContainerDied","Data":"217b46aec38f7eade99a483c12e094164905443b216e9d9cc4df231cd3643ab2"} Feb 19 20:10:51 crc kubenswrapper[4813]: I0219 20:10:51.613432 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469","Type":"ContainerStarted","Data":"43d04d5697a25a20e93c78e760fa285d513a2f4208d7cd64bcffc9dc5cbb2b75"} Feb 19 20:10:52 crc kubenswrapper[4813]: I0219 20:10:52.638904 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469","Type":"ContainerStarted","Data":"ddc67bda3bdaa986af9b959ed72bd81729ed7b5a9342b2a32a4c5ada5485e388"} Feb 19 20:10:52 crc kubenswrapper[4813]: I0219 20:10:52.678222 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.832193764 podStartE2EDuration="6.678200679s" podCreationTimestamp="2026-02-19 20:10:46 +0000 
UTC" firstStartedPulling="2026-02-19 20:10:46.99085575 +0000 UTC m=+6066.216296291" lastFinishedPulling="2026-02-19 20:10:51.836862665 +0000 UTC m=+6071.062303206" observedRunningTime="2026-02-19 20:10:52.673196324 +0000 UTC m=+6071.898636925" watchObservedRunningTime="2026-02-19 20:10:52.678200679 +0000 UTC m=+6071.903641230" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.514506 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.642759 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e963fa0b-314d-4f34-8624-a316e69590d2-run-httpd\") pod \"e963fa0b-314d-4f34-8624-a316e69590d2\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.642810 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-config-data\") pod \"e963fa0b-314d-4f34-8624-a316e69590d2\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.642843 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-scripts\") pod \"e963fa0b-314d-4f34-8624-a316e69590d2\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.642868 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-combined-ca-bundle\") pod \"e963fa0b-314d-4f34-8624-a316e69590d2\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.642930 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e963fa0b-314d-4f34-8624-a316e69590d2-log-httpd\") pod \"e963fa0b-314d-4f34-8624-a316e69590d2\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.643031 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8cvg\" (UniqueName: \"kubernetes.io/projected/e963fa0b-314d-4f34-8624-a316e69590d2-kube-api-access-z8cvg\") pod \"e963fa0b-314d-4f34-8624-a316e69590d2\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.643219 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-sg-core-conf-yaml\") pod \"e963fa0b-314d-4f34-8624-a316e69590d2\" (UID: \"e963fa0b-314d-4f34-8624-a316e69590d2\") " Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.643210 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e963fa0b-314d-4f34-8624-a316e69590d2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e963fa0b-314d-4f34-8624-a316e69590d2" (UID: "e963fa0b-314d-4f34-8624-a316e69590d2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.643484 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e963fa0b-314d-4f34-8624-a316e69590d2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e963fa0b-314d-4f34-8624-a316e69590d2" (UID: "e963fa0b-314d-4f34-8624-a316e69590d2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.644462 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e963fa0b-314d-4f34-8624-a316e69590d2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.644487 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e963fa0b-314d-4f34-8624-a316e69590d2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.649120 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-scripts" (OuterVolumeSpecName: "scripts") pod "e963fa0b-314d-4f34-8624-a316e69590d2" (UID: "e963fa0b-314d-4f34-8624-a316e69590d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.654629 4813 generic.go:334] "Generic (PLEG): container finished" podID="e963fa0b-314d-4f34-8624-a316e69590d2" containerID="3890e31bfb2ecd6a10390e44252a0321a34ec8fd75c7318c5f2f2b89014a3c36" exitCode=0 Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.655777 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e963fa0b-314d-4f34-8624-a316e69590d2","Type":"ContainerDied","Data":"3890e31bfb2ecd6a10390e44252a0321a34ec8fd75c7318c5f2f2b89014a3c36"} Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.655844 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.655863 4813 scope.go:117] "RemoveContainer" containerID="f996239da75428bb5f4a2c0b6d8fc70199f2372a2c9bbb5d96c19e9907033248" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.655847 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e963fa0b-314d-4f34-8624-a316e69590d2","Type":"ContainerDied","Data":"6d678c622dd95b9a9ad0e683a0208aa3d31b91e466f78d379526ce11683828d6"} Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.670210 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e963fa0b-314d-4f34-8624-a316e69590d2-kube-api-access-z8cvg" (OuterVolumeSpecName: "kube-api-access-z8cvg") pod "e963fa0b-314d-4f34-8624-a316e69590d2" (UID: "e963fa0b-314d-4f34-8624-a316e69590d2"). InnerVolumeSpecName "kube-api-access-z8cvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.691087 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e963fa0b-314d-4f34-8624-a316e69590d2" (UID: "e963fa0b-314d-4f34-8624-a316e69590d2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.746326 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.746365 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8cvg\" (UniqueName: \"kubernetes.io/projected/e963fa0b-314d-4f34-8624-a316e69590d2-kube-api-access-z8cvg\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.746380 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.762786 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e963fa0b-314d-4f34-8624-a316e69590d2" (UID: "e963fa0b-314d-4f34-8624-a316e69590d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.816179 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-config-data" (OuterVolumeSpecName: "config-data") pod "e963fa0b-314d-4f34-8624-a316e69590d2" (UID: "e963fa0b-314d-4f34-8624-a316e69590d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.849216 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.849255 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e963fa0b-314d-4f34-8624-a316e69590d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.868311 4813 scope.go:117] "RemoveContainer" containerID="fc1a81692bb8eafdce52f73d8f98014439da57df4f130666dabe8a782bcd8cdf" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.908680 4813 scope.go:117] "RemoveContainer" containerID="3890e31bfb2ecd6a10390e44252a0321a34ec8fd75c7318c5f2f2b89014a3c36" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.939316 4813 scope.go:117] "RemoveContainer" containerID="217b46aec38f7eade99a483c12e094164905443b216e9d9cc4df231cd3643ab2" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.979851 4813 scope.go:117] "RemoveContainer" containerID="f996239da75428bb5f4a2c0b6d8fc70199f2372a2c9bbb5d96c19e9907033248" Feb 19 20:10:53 crc kubenswrapper[4813]: E0219 20:10:53.980297 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f996239da75428bb5f4a2c0b6d8fc70199f2372a2c9bbb5d96c19e9907033248\": container with ID starting with f996239da75428bb5f4a2c0b6d8fc70199f2372a2c9bbb5d96c19e9907033248 not found: ID does not exist" containerID="f996239da75428bb5f4a2c0b6d8fc70199f2372a2c9bbb5d96c19e9907033248" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.980341 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f996239da75428bb5f4a2c0b6d8fc70199f2372a2c9bbb5d96c19e9907033248"} 
err="failed to get container status \"f996239da75428bb5f4a2c0b6d8fc70199f2372a2c9bbb5d96c19e9907033248\": rpc error: code = NotFound desc = could not find container \"f996239da75428bb5f4a2c0b6d8fc70199f2372a2c9bbb5d96c19e9907033248\": container with ID starting with f996239da75428bb5f4a2c0b6d8fc70199f2372a2c9bbb5d96c19e9907033248 not found: ID does not exist" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.980361 4813 scope.go:117] "RemoveContainer" containerID="fc1a81692bb8eafdce52f73d8f98014439da57df4f130666dabe8a782bcd8cdf" Feb 19 20:10:53 crc kubenswrapper[4813]: E0219 20:10:53.980557 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc1a81692bb8eafdce52f73d8f98014439da57df4f130666dabe8a782bcd8cdf\": container with ID starting with fc1a81692bb8eafdce52f73d8f98014439da57df4f130666dabe8a782bcd8cdf not found: ID does not exist" containerID="fc1a81692bb8eafdce52f73d8f98014439da57df4f130666dabe8a782bcd8cdf" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.980580 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1a81692bb8eafdce52f73d8f98014439da57df4f130666dabe8a782bcd8cdf"} err="failed to get container status \"fc1a81692bb8eafdce52f73d8f98014439da57df4f130666dabe8a782bcd8cdf\": rpc error: code = NotFound desc = could not find container \"fc1a81692bb8eafdce52f73d8f98014439da57df4f130666dabe8a782bcd8cdf\": container with ID starting with fc1a81692bb8eafdce52f73d8f98014439da57df4f130666dabe8a782bcd8cdf not found: ID does not exist" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.980609 4813 scope.go:117] "RemoveContainer" containerID="3890e31bfb2ecd6a10390e44252a0321a34ec8fd75c7318c5f2f2b89014a3c36" Feb 19 20:10:53 crc kubenswrapper[4813]: E0219 20:10:53.980780 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3890e31bfb2ecd6a10390e44252a0321a34ec8fd75c7318c5f2f2b89014a3c36\": container with ID starting with 3890e31bfb2ecd6a10390e44252a0321a34ec8fd75c7318c5f2f2b89014a3c36 not found: ID does not exist" containerID="3890e31bfb2ecd6a10390e44252a0321a34ec8fd75c7318c5f2f2b89014a3c36" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.980799 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3890e31bfb2ecd6a10390e44252a0321a34ec8fd75c7318c5f2f2b89014a3c36"} err="failed to get container status \"3890e31bfb2ecd6a10390e44252a0321a34ec8fd75c7318c5f2f2b89014a3c36\": rpc error: code = NotFound desc = could not find container \"3890e31bfb2ecd6a10390e44252a0321a34ec8fd75c7318c5f2f2b89014a3c36\": container with ID starting with 3890e31bfb2ecd6a10390e44252a0321a34ec8fd75c7318c5f2f2b89014a3c36 not found: ID does not exist" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.980828 4813 scope.go:117] "RemoveContainer" containerID="217b46aec38f7eade99a483c12e094164905443b216e9d9cc4df231cd3643ab2" Feb 19 20:10:53 crc kubenswrapper[4813]: E0219 20:10:53.981736 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"217b46aec38f7eade99a483c12e094164905443b216e9d9cc4df231cd3643ab2\": container with ID starting with 217b46aec38f7eade99a483c12e094164905443b216e9d9cc4df231cd3643ab2 not found: ID does not exist" containerID="217b46aec38f7eade99a483c12e094164905443b216e9d9cc4df231cd3643ab2" Feb 19 20:10:53 crc kubenswrapper[4813]: I0219 20:10:53.981757 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"217b46aec38f7eade99a483c12e094164905443b216e9d9cc4df231cd3643ab2"} err="failed to get container status \"217b46aec38f7eade99a483c12e094164905443b216e9d9cc4df231cd3643ab2\": rpc error: code = NotFound desc = could not find container \"217b46aec38f7eade99a483c12e094164905443b216e9d9cc4df231cd3643ab2\": container with ID 
starting with 217b46aec38f7eade99a483c12e094164905443b216e9d9cc4df231cd3643ab2 not found: ID does not exist" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.064690 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.085754 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.103055 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:10:54 crc kubenswrapper[4813]: E0219 20:10:54.103557 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="ceilometer-notification-agent" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.103578 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="ceilometer-notification-agent" Feb 19 20:10:54 crc kubenswrapper[4813]: E0219 20:10:54.103604 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="sg-core" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.103612 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="sg-core" Feb 19 20:10:54 crc kubenswrapper[4813]: E0219 20:10:54.103625 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="ceilometer-central-agent" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.103631 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="ceilometer-central-agent" Feb 19 20:10:54 crc kubenswrapper[4813]: E0219 20:10:54.103650 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37855c01-edf5-4a59-81e1-84efd79fb00f" containerName="registry-server" Feb 19 20:10:54 crc 
kubenswrapper[4813]: I0219 20:10:54.103656 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="37855c01-edf5-4a59-81e1-84efd79fb00f" containerName="registry-server" Feb 19 20:10:54 crc kubenswrapper[4813]: E0219 20:10:54.103672 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="proxy-httpd" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.103680 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="proxy-httpd" Feb 19 20:10:54 crc kubenswrapper[4813]: E0219 20:10:54.103701 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37855c01-edf5-4a59-81e1-84efd79fb00f" containerName="extract-utilities" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.103707 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="37855c01-edf5-4a59-81e1-84efd79fb00f" containerName="extract-utilities" Feb 19 20:10:54 crc kubenswrapper[4813]: E0219 20:10:54.103723 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37855c01-edf5-4a59-81e1-84efd79fb00f" containerName="extract-content" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.103729 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="37855c01-edf5-4a59-81e1-84efd79fb00f" containerName="extract-content" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.103932 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="sg-core" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.103945 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="ceilometer-notification-agent" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.103980 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="37855c01-edf5-4a59-81e1-84efd79fb00f" containerName="registry-server" Feb 19 20:10:54 crc 
kubenswrapper[4813]: I0219 20:10:54.104000 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="proxy-httpd" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.104032 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" containerName="ceilometer-central-agent" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.106255 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.112691 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.112915 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.119456 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.263180 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-scripts\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.263279 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.263338 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.263465 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-run-httpd\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.263608 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-config-data\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.263672 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-log-httpd\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.263966 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwnnq\" (UniqueName: \"kubernetes.io/projected/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-kube-api-access-vwnnq\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.371819 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-log-httpd\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " 
pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.372153 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwnnq\" (UniqueName: \"kubernetes.io/projected/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-kube-api-access-vwnnq\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.372292 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-scripts\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.372372 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.372447 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.372494 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-run-httpd\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.372572 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-config-data\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.377676 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-log-httpd\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.378581 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-run-httpd\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.384781 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-scripts\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.390720 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.395595 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0" Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.401080 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-config-data\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0"
Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.404843 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwnnq\" (UniqueName: \"kubernetes.io/projected/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-kube-api-access-vwnnq\") pod \"ceilometer-0\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " pod="openstack/ceilometer-0"
Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.427597 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 20:10:54 crc kubenswrapper[4813]: I0219 20:10:54.899252 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 20:10:54 crc kubenswrapper[4813]: W0219 20:10:54.911591 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ece3b6f_f7a2_4329_9ea0_b991941b9da1.slice/crio-6113eb0362b356aefd2ba4dd0c4c6adc06f13519a36ae116a33c1d1bedf9a4ab WatchSource:0}: Error finding container 6113eb0362b356aefd2ba4dd0c4c6adc06f13519a36ae116a33c1d1bedf9a4ab: Status 404 returned error can't find the container with id 6113eb0362b356aefd2ba4dd0c4c6adc06f13519a36ae116a33c1d1bedf9a4ab
Feb 19 20:10:55 crc kubenswrapper[4813]: I0219 20:10:55.111640 4813 scope.go:117] "RemoveContainer" containerID="14abec32d3c100619311126e80ac1248a5ffebf3248ea0153b1cce13a5dec395"
Feb 19 20:10:55 crc kubenswrapper[4813]: I0219 20:10:55.162297 4813 scope.go:117] "RemoveContainer" containerID="8bb58da7cf44d5538bb3196418fc4065e50905582a5d1d9cb75f6f62d99c018b"
Feb 19 20:10:55 crc kubenswrapper[4813]: I0219 20:10:55.224872 4813 scope.go:117] "RemoveContainer" containerID="7afb80387c7b3f66bae5dec261f74209b4f4c2ed4d1056202f64e21a7b083774"
Feb 19 20:10:55 crc kubenswrapper[4813]: I0219 20:10:55.265159 4813 scope.go:117] "RemoveContainer" containerID="c6ae10887f4836597b3bdd2d93672a6fdee2315446f3bed0ad1aebb8df86609c"
Feb 19 20:10:55 crc kubenswrapper[4813]: I0219 20:10:55.318654 4813 scope.go:117] "RemoveContainer" containerID="e2aa76984ade9e60fc90274fd883b225a87c0c20c0556bfa465274c66c399a76"
Feb 19 20:10:55 crc kubenswrapper[4813]: I0219 20:10:55.358123 4813 scope.go:117] "RemoveContainer" containerID="c1d959e0038654d7c2ced3e92eede4bf9e67b754f1b4b0f4b620bf613955e28f"
Feb 19 20:10:55 crc kubenswrapper[4813]: I0219 20:10:55.484986 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e963fa0b-314d-4f34-8624-a316e69590d2" path="/var/lib/kubelet/pods/e963fa0b-314d-4f34-8624-a316e69590d2/volumes"
Feb 19 20:10:55 crc kubenswrapper[4813]: I0219 20:10:55.685384 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ece3b6f-f7a2-4329-9ea0-b991941b9da1","Type":"ContainerStarted","Data":"6113eb0362b356aefd2ba4dd0c4c6adc06f13519a36ae116a33c1d1bedf9a4ab"}
Feb 19 20:10:56 crc kubenswrapper[4813]: I0219 20:10:56.699681 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ece3b6f-f7a2-4329-9ea0-b991941b9da1","Type":"ContainerStarted","Data":"e9151e03f171f27464c28aefba34e9a30036b369dd7aa5a1c58e439c3882d65a"}
Feb 19 20:10:56 crc kubenswrapper[4813]: I0219 20:10:56.700190 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ece3b6f-f7a2-4329-9ea0-b991941b9da1","Type":"ContainerStarted","Data":"0b2be6ce7f5df923e89306a55d5a62a7b39f7855152875dddf2ac94818eeec53"}
Feb 19 20:10:57 crc kubenswrapper[4813]: I0219 20:10:57.711523 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ece3b6f-f7a2-4329-9ea0-b991941b9da1","Type":"ContainerStarted","Data":"05cc210cea1a3ea95db35d367b8a465b2fa3f52072fbe5710267e486ac5b20d7"}
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.508879 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-ts7jb"]
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.511176 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-ts7jb"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.565517 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-ts7jb"]
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.613004 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-c60d-account-create-update-p7csk"]
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.614490 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-c60d-account-create-update-p7csk"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.616591 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.622017 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-c60d-account-create-update-p7csk"]
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.667515 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2qs5\" (UniqueName: \"kubernetes.io/projected/ed08568f-fdab-4315-a5b7-413e88d4edac-kube-api-access-l2qs5\") pod \"manila-db-create-ts7jb\" (UID: \"ed08568f-fdab-4315-a5b7-413e88d4edac\") " pod="openstack/manila-db-create-ts7jb"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.667584 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed08568f-fdab-4315-a5b7-413e88d4edac-operator-scripts\") pod \"manila-db-create-ts7jb\" (UID: \"ed08568f-fdab-4315-a5b7-413e88d4edac\") " pod="openstack/manila-db-create-ts7jb"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.724901 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ece3b6f-f7a2-4329-9ea0-b991941b9da1","Type":"ContainerStarted","Data":"eed6e79062684ad879223a3be56bf0d582ba151d6962b0350c0365883006847c"}
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.726137 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.749740 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.367788072 podStartE2EDuration="4.749723341s" podCreationTimestamp="2026-02-19 20:10:54 +0000 UTC" firstStartedPulling="2026-02-19 20:10:54.920455905 +0000 UTC m=+6074.145896446" lastFinishedPulling="2026-02-19 20:10:58.302391164 +0000 UTC m=+6077.527831715" observedRunningTime="2026-02-19 20:10:58.742115695 +0000 UTC m=+6077.967556236" watchObservedRunningTime="2026-02-19 20:10:58.749723341 +0000 UTC m=+6077.975163882"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.770300 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz7tg\" (UniqueName: \"kubernetes.io/projected/e3b8e3a6-c6e6-4b33-9dba-1b43041daf22-kube-api-access-fz7tg\") pod \"manila-c60d-account-create-update-p7csk\" (UID: \"e3b8e3a6-c6e6-4b33-9dba-1b43041daf22\") " pod="openstack/manila-c60d-account-create-update-p7csk"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.770741 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3b8e3a6-c6e6-4b33-9dba-1b43041daf22-operator-scripts\") pod \"manila-c60d-account-create-update-p7csk\" (UID: \"e3b8e3a6-c6e6-4b33-9dba-1b43041daf22\") " pod="openstack/manila-c60d-account-create-update-p7csk"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.771123 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2qs5\" (UniqueName: \"kubernetes.io/projected/ed08568f-fdab-4315-a5b7-413e88d4edac-kube-api-access-l2qs5\") pod \"manila-db-create-ts7jb\" (UID: \"ed08568f-fdab-4315-a5b7-413e88d4edac\") " pod="openstack/manila-db-create-ts7jb"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.771247 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed08568f-fdab-4315-a5b7-413e88d4edac-operator-scripts\") pod \"manila-db-create-ts7jb\" (UID: \"ed08568f-fdab-4315-a5b7-413e88d4edac\") " pod="openstack/manila-db-create-ts7jb"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.772314 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed08568f-fdab-4315-a5b7-413e88d4edac-operator-scripts\") pod \"manila-db-create-ts7jb\" (UID: \"ed08568f-fdab-4315-a5b7-413e88d4edac\") " pod="openstack/manila-db-create-ts7jb"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.793544 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2qs5\" (UniqueName: \"kubernetes.io/projected/ed08568f-fdab-4315-a5b7-413e88d4edac-kube-api-access-l2qs5\") pod \"manila-db-create-ts7jb\" (UID: \"ed08568f-fdab-4315-a5b7-413e88d4edac\") " pod="openstack/manila-db-create-ts7jb"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.862147 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-ts7jb"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.872913 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz7tg\" (UniqueName: \"kubernetes.io/projected/e3b8e3a6-c6e6-4b33-9dba-1b43041daf22-kube-api-access-fz7tg\") pod \"manila-c60d-account-create-update-p7csk\" (UID: \"e3b8e3a6-c6e6-4b33-9dba-1b43041daf22\") " pod="openstack/manila-c60d-account-create-update-p7csk"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.873065 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3b8e3a6-c6e6-4b33-9dba-1b43041daf22-operator-scripts\") pod \"manila-c60d-account-create-update-p7csk\" (UID: \"e3b8e3a6-c6e6-4b33-9dba-1b43041daf22\") " pod="openstack/manila-c60d-account-create-update-p7csk"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.873752 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3b8e3a6-c6e6-4b33-9dba-1b43041daf22-operator-scripts\") pod \"manila-c60d-account-create-update-p7csk\" (UID: \"e3b8e3a6-c6e6-4b33-9dba-1b43041daf22\") " pod="openstack/manila-c60d-account-create-update-p7csk"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.891913 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz7tg\" (UniqueName: \"kubernetes.io/projected/e3b8e3a6-c6e6-4b33-9dba-1b43041daf22-kube-api-access-fz7tg\") pod \"manila-c60d-account-create-update-p7csk\" (UID: \"e3b8e3a6-c6e6-4b33-9dba-1b43041daf22\") " pod="openstack/manila-c60d-account-create-update-p7csk"
Feb 19 20:10:58 crc kubenswrapper[4813]: I0219 20:10:58.943384 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-c60d-account-create-update-p7csk"
Feb 19 20:10:59 crc kubenswrapper[4813]: I0219 20:10:59.370123 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-ts7jb"]
Feb 19 20:10:59 crc kubenswrapper[4813]: I0219 20:10:59.455245 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-c60d-account-create-update-p7csk"]
Feb 19 20:10:59 crc kubenswrapper[4813]: W0219 20:10:59.463645 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3b8e3a6_c6e6_4b33_9dba_1b43041daf22.slice/crio-9cb7f8df2e19b7acb634ecf0a79307fab12f03d3ba247432c966ef9f030d2821 WatchSource:0}: Error finding container 9cb7f8df2e19b7acb634ecf0a79307fab12f03d3ba247432c966ef9f030d2821: Status 404 returned error can't find the container with id 9cb7f8df2e19b7acb634ecf0a79307fab12f03d3ba247432c966ef9f030d2821
Feb 19 20:10:59 crc kubenswrapper[4813]: I0219 20:10:59.740618 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-c60d-account-create-update-p7csk" event={"ID":"e3b8e3a6-c6e6-4b33-9dba-1b43041daf22","Type":"ContainerStarted","Data":"012e97a9028f8779020c9cb805dc8fdd1493c440e2c302305b73f4393722bde4"}
Feb 19 20:10:59 crc kubenswrapper[4813]: I0219 20:10:59.740975 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-c60d-account-create-update-p7csk" event={"ID":"e3b8e3a6-c6e6-4b33-9dba-1b43041daf22","Type":"ContainerStarted","Data":"9cb7f8df2e19b7acb634ecf0a79307fab12f03d3ba247432c966ef9f030d2821"}
Feb 19 20:10:59 crc kubenswrapper[4813]: I0219 20:10:59.747310 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-ts7jb" event={"ID":"ed08568f-fdab-4315-a5b7-413e88d4edac","Type":"ContainerStarted","Data":"bf3389dd71739f5143f3a982a66a234f41741e9e9864c206e57c1b014c9eb0cf"}
Feb 19 20:10:59 crc kubenswrapper[4813]: I0219 20:10:59.747370 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-ts7jb" event={"ID":"ed08568f-fdab-4315-a5b7-413e88d4edac","Type":"ContainerStarted","Data":"ef6037e28234d73e5cab1a9c4203ed53ef4b7ff38c8c7faa219bb51e497727be"}
Feb 19 20:10:59 crc kubenswrapper[4813]: I0219 20:10:59.766704 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-c60d-account-create-update-p7csk" podStartSLOduration=1.766679267 podStartE2EDuration="1.766679267s" podCreationTimestamp="2026-02-19 20:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:10:59.757191133 +0000 UTC m=+6078.982631674" watchObservedRunningTime="2026-02-19 20:10:59.766679267 +0000 UTC m=+6078.992119808"
Feb 19 20:10:59 crc kubenswrapper[4813]: I0219 20:10:59.781280 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-create-ts7jb" podStartSLOduration=1.7812614679999998 podStartE2EDuration="1.781261468s" podCreationTimestamp="2026-02-19 20:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:10:59.774026564 +0000 UTC m=+6078.999467095" watchObservedRunningTime="2026-02-19 20:10:59.781261468 +0000 UTC m=+6079.006702009"
Feb 19 20:11:00 crc kubenswrapper[4813]: I0219 20:11:00.755654 4813 generic.go:334] "Generic (PLEG): container finished" podID="ed08568f-fdab-4315-a5b7-413e88d4edac" containerID="bf3389dd71739f5143f3a982a66a234f41741e9e9864c206e57c1b014c9eb0cf" exitCode=0
Feb 19 20:11:00 crc kubenswrapper[4813]: I0219 20:11:00.755709 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-ts7jb" event={"ID":"ed08568f-fdab-4315-a5b7-413e88d4edac","Type":"ContainerDied","Data":"bf3389dd71739f5143f3a982a66a234f41741e9e9864c206e57c1b014c9eb0cf"}
Feb 19 20:11:00 crc kubenswrapper[4813]: I0219 20:11:00.758385 4813 generic.go:334] "Generic (PLEG): container finished" podID="e3b8e3a6-c6e6-4b33-9dba-1b43041daf22" containerID="012e97a9028f8779020c9cb805dc8fdd1493c440e2c302305b73f4393722bde4" exitCode=0
Feb 19 20:11:00 crc kubenswrapper[4813]: I0219 20:11:00.758487 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-c60d-account-create-update-p7csk" event={"ID":"e3b8e3a6-c6e6-4b33-9dba-1b43041daf22","Type":"ContainerDied","Data":"012e97a9028f8779020c9cb805dc8fdd1493c440e2c302305b73f4393722bde4"}
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.262369 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-c60d-account-create-update-p7csk"
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.265517 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-ts7jb"
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.345230 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2qs5\" (UniqueName: \"kubernetes.io/projected/ed08568f-fdab-4315-a5b7-413e88d4edac-kube-api-access-l2qs5\") pod \"ed08568f-fdab-4315-a5b7-413e88d4edac\" (UID: \"ed08568f-fdab-4315-a5b7-413e88d4edac\") "
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.345698 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz7tg\" (UniqueName: \"kubernetes.io/projected/e3b8e3a6-c6e6-4b33-9dba-1b43041daf22-kube-api-access-fz7tg\") pod \"e3b8e3a6-c6e6-4b33-9dba-1b43041daf22\" (UID: \"e3b8e3a6-c6e6-4b33-9dba-1b43041daf22\") "
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.345855 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed08568f-fdab-4315-a5b7-413e88d4edac-operator-scripts\") pod \"ed08568f-fdab-4315-a5b7-413e88d4edac\" (UID: \"ed08568f-fdab-4315-a5b7-413e88d4edac\") "
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.345885 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3b8e3a6-c6e6-4b33-9dba-1b43041daf22-operator-scripts\") pod \"e3b8e3a6-c6e6-4b33-9dba-1b43041daf22\" (UID: \"e3b8e3a6-c6e6-4b33-9dba-1b43041daf22\") "
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.346474 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed08568f-fdab-4315-a5b7-413e88d4edac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed08568f-fdab-4315-a5b7-413e88d4edac" (UID: "ed08568f-fdab-4315-a5b7-413e88d4edac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.346503 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3b8e3a6-c6e6-4b33-9dba-1b43041daf22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3b8e3a6-c6e6-4b33-9dba-1b43041daf22" (UID: "e3b8e3a6-c6e6-4b33-9dba-1b43041daf22"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.351053 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b8e3a6-c6e6-4b33-9dba-1b43041daf22-kube-api-access-fz7tg" (OuterVolumeSpecName: "kube-api-access-fz7tg") pod "e3b8e3a6-c6e6-4b33-9dba-1b43041daf22" (UID: "e3b8e3a6-c6e6-4b33-9dba-1b43041daf22"). InnerVolumeSpecName "kube-api-access-fz7tg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.351487 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed08568f-fdab-4315-a5b7-413e88d4edac-kube-api-access-l2qs5" (OuterVolumeSpecName: "kube-api-access-l2qs5") pod "ed08568f-fdab-4315-a5b7-413e88d4edac" (UID: "ed08568f-fdab-4315-a5b7-413e88d4edac"). InnerVolumeSpecName "kube-api-access-l2qs5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.448548 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2qs5\" (UniqueName: \"kubernetes.io/projected/ed08568f-fdab-4315-a5b7-413e88d4edac-kube-api-access-l2qs5\") on node \"crc\" DevicePath \"\""
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.448612 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz7tg\" (UniqueName: \"kubernetes.io/projected/e3b8e3a6-c6e6-4b33-9dba-1b43041daf22-kube-api-access-fz7tg\") on node \"crc\" DevicePath \"\""
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.448633 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed08568f-fdab-4315-a5b7-413e88d4edac-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.448652 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3b8e3a6-c6e6-4b33-9dba-1b43041daf22-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.782121 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-ts7jb" event={"ID":"ed08568f-fdab-4315-a5b7-413e88d4edac","Type":"ContainerDied","Data":"ef6037e28234d73e5cab1a9c4203ed53ef4b7ff38c8c7faa219bb51e497727be"}
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.782162 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-ts7jb"
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.782178 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef6037e28234d73e5cab1a9c4203ed53ef4b7ff38c8c7faa219bb51e497727be"
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.784515 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-c60d-account-create-update-p7csk" event={"ID":"e3b8e3a6-c6e6-4b33-9dba-1b43041daf22","Type":"ContainerDied","Data":"9cb7f8df2e19b7acb634ecf0a79307fab12f03d3ba247432c966ef9f030d2821"}
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.784563 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cb7f8df2e19b7acb634ecf0a79307fab12f03d3ba247432c966ef9f030d2821"
Feb 19 20:11:02 crc kubenswrapper[4813]: I0219 20:11:02.784601 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-c60d-account-create-update-p7csk"
Feb 19 20:11:03 crc kubenswrapper[4813]: I0219 20:11:03.895156 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-kb2ww"]
Feb 19 20:11:03 crc kubenswrapper[4813]: E0219 20:11:03.895888 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b8e3a6-c6e6-4b33-9dba-1b43041daf22" containerName="mariadb-account-create-update"
Feb 19 20:11:03 crc kubenswrapper[4813]: I0219 20:11:03.895903 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b8e3a6-c6e6-4b33-9dba-1b43041daf22" containerName="mariadb-account-create-update"
Feb 19 20:11:03 crc kubenswrapper[4813]: E0219 20:11:03.895929 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed08568f-fdab-4315-a5b7-413e88d4edac" containerName="mariadb-database-create"
Feb 19 20:11:03 crc kubenswrapper[4813]: I0219 20:11:03.895939 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed08568f-fdab-4315-a5b7-413e88d4edac" containerName="mariadb-database-create"
Feb 19 20:11:03 crc kubenswrapper[4813]: I0219 20:11:03.896203 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed08568f-fdab-4315-a5b7-413e88d4edac" containerName="mariadb-database-create"
Feb 19 20:11:03 crc kubenswrapper[4813]: I0219 20:11:03.896437 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b8e3a6-c6e6-4b33-9dba-1b43041daf22" containerName="mariadb-account-create-update"
Feb 19 20:11:03 crc kubenswrapper[4813]: I0219 20:11:03.897310 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:03 crc kubenswrapper[4813]: I0219 20:11:03.901269 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-l5gc7"
Feb 19 20:11:03 crc kubenswrapper[4813]: I0219 20:11:03.901498 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Feb 19 20:11:03 crc kubenswrapper[4813]: I0219 20:11:03.913770 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-kb2ww"]
Feb 19 20:11:03 crc kubenswrapper[4813]: I0219 20:11:03.983875 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-config-data\") pod \"manila-db-sync-kb2ww\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") " pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:03 crc kubenswrapper[4813]: I0219 20:11:03.983929 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-combined-ca-bundle\") pod \"manila-db-sync-kb2ww\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") " pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:03 crc kubenswrapper[4813]: I0219 20:11:03.984050 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-job-config-data\") pod \"manila-db-sync-kb2ww\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") " pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:03 crc kubenswrapper[4813]: I0219 20:11:03.984224 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqvrt\" (UniqueName: \"kubernetes.io/projected/229a6f2b-587e-4008-a4d7-9f0d26e3e446-kube-api-access-sqvrt\") pod \"manila-db-sync-kb2ww\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") " pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:04 crc kubenswrapper[4813]: I0219 20:11:04.086267 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqvrt\" (UniqueName: \"kubernetes.io/projected/229a6f2b-587e-4008-a4d7-9f0d26e3e446-kube-api-access-sqvrt\") pod \"manila-db-sync-kb2ww\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") " pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:04 crc kubenswrapper[4813]: I0219 20:11:04.086874 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-config-data\") pod \"manila-db-sync-kb2ww\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") " pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:04 crc kubenswrapper[4813]: I0219 20:11:04.086927 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-combined-ca-bundle\") pod \"manila-db-sync-kb2ww\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") " pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:04 crc kubenswrapper[4813]: I0219 20:11:04.087039 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-job-config-data\") pod \"manila-db-sync-kb2ww\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") " pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:04 crc kubenswrapper[4813]: I0219 20:11:04.093536 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-combined-ca-bundle\") pod \"manila-db-sync-kb2ww\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") " pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:04 crc kubenswrapper[4813]: I0219 20:11:04.094869 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-job-config-data\") pod \"manila-db-sync-kb2ww\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") " pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:04 crc kubenswrapper[4813]: I0219 20:11:04.095371 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-config-data\") pod \"manila-db-sync-kb2ww\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") " pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:04 crc kubenswrapper[4813]: I0219 20:11:04.105833 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqvrt\" (UniqueName: \"kubernetes.io/projected/229a6f2b-587e-4008-a4d7-9f0d26e3e446-kube-api-access-sqvrt\") pod \"manila-db-sync-kb2ww\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") " pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:04 crc kubenswrapper[4813]: I0219 20:11:04.226763 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:04 crc kubenswrapper[4813]: I0219 20:11:04.864924 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-kb2ww"]
Feb 19 20:11:05 crc kubenswrapper[4813]: I0219 20:11:05.833686 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-kb2ww" event={"ID":"229a6f2b-587e-4008-a4d7-9f0d26e3e446","Type":"ContainerStarted","Data":"c833ff8d1750dec405f7efe99fa4847cfad2835fe1d24ced8573cf4324197545"}
Feb 19 20:11:09 crc kubenswrapper[4813]: I0219 20:11:09.905458 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-kb2ww" event={"ID":"229a6f2b-587e-4008-a4d7-9f0d26e3e446","Type":"ContainerStarted","Data":"7f4d8edd4b142cfb5985b5a0e7535978f47995e545005d5de7500b0ba6ec4c6b"}
Feb 19 20:11:09 crc kubenswrapper[4813]: I0219 20:11:09.928836 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-kb2ww" podStartSLOduration=2.526712744 podStartE2EDuration="6.928783138s" podCreationTimestamp="2026-02-19 20:11:03 +0000 UTC" firstStartedPulling="2026-02-19 20:11:04.878248816 +0000 UTC m=+6084.103689357" lastFinishedPulling="2026-02-19 20:11:09.28031921 +0000 UTC m=+6088.505759751" observedRunningTime="2026-02-19 20:11:09.918054526 +0000 UTC m=+6089.143495077" watchObservedRunningTime="2026-02-19 20:11:09.928783138 +0000 UTC m=+6089.154223689"
Feb 19 20:11:11 crc kubenswrapper[4813]: I0219 20:11:11.932583 4813 generic.go:334] "Generic (PLEG): container finished" podID="229a6f2b-587e-4008-a4d7-9f0d26e3e446" containerID="7f4d8edd4b142cfb5985b5a0e7535978f47995e545005d5de7500b0ba6ec4c6b" exitCode=0
Feb 19 20:11:11 crc kubenswrapper[4813]: I0219 20:11:11.932629 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-kb2ww" event={"ID":"229a6f2b-587e-4008-a4d7-9f0d26e3e446","Type":"ContainerDied","Data":"7f4d8edd4b142cfb5985b5a0e7535978f47995e545005d5de7500b0ba6ec4c6b"}
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.071637 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-jmjlq"]
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.084275 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-jmjlq"]
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.094575 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-340d-account-create-update-d2bdn"]
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.103016 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-340d-account-create-update-d2bdn"]
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.485037 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="198bf5a5-5e44-4440-9d40-9dd8fb723007" path="/var/lib/kubelet/pods/198bf5a5-5e44-4440-9d40-9dd8fb723007/volumes"
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.486340 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="580be06f-69cc-4178-9781-86215efaffd0" path="/var/lib/kubelet/pods/580be06f-69cc-4178-9781-86215efaffd0/volumes"
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.492367 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.646282 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-combined-ca-bundle\") pod \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") "
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.646352 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqvrt\" (UniqueName: \"kubernetes.io/projected/229a6f2b-587e-4008-a4d7-9f0d26e3e446-kube-api-access-sqvrt\") pod \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") "
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.646681 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-job-config-data\") pod \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") "
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.646826 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-config-data\") pod \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\" (UID: \"229a6f2b-587e-4008-a4d7-9f0d26e3e446\") "
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.652569 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "229a6f2b-587e-4008-a4d7-9f0d26e3e446" (UID: "229a6f2b-587e-4008-a4d7-9f0d26e3e446"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.653034 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/229a6f2b-587e-4008-a4d7-9f0d26e3e446-kube-api-access-sqvrt" (OuterVolumeSpecName: "kube-api-access-sqvrt") pod "229a6f2b-587e-4008-a4d7-9f0d26e3e446" (UID: "229a6f2b-587e-4008-a4d7-9f0d26e3e446"). InnerVolumeSpecName "kube-api-access-sqvrt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.656490 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-config-data" (OuterVolumeSpecName: "config-data") pod "229a6f2b-587e-4008-a4d7-9f0d26e3e446" (UID: "229a6f2b-587e-4008-a4d7-9f0d26e3e446"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.686136 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "229a6f2b-587e-4008-a4d7-9f0d26e3e446" (UID: "229a6f2b-587e-4008-a4d7-9f0d26e3e446"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.750138 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.750390 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.750522 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqvrt\" (UniqueName: \"kubernetes.io/projected/229a6f2b-587e-4008-a4d7-9f0d26e3e446-kube-api-access-sqvrt\") on node \"crc\" DevicePath \"\""
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.750641 4813 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/229a6f2b-587e-4008-a4d7-9f0d26e3e446-job-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.958623 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-kb2ww" event={"ID":"229a6f2b-587e-4008-a4d7-9f0d26e3e446","Type":"ContainerDied","Data":"c833ff8d1750dec405f7efe99fa4847cfad2835fe1d24ced8573cf4324197545"}
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.958689 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c833ff8d1750dec405f7efe99fa4847cfad2835fe1d24ced8573cf4324197545"
Feb 19 20:11:13 crc kubenswrapper[4813]: I0219 20:11:13.958700 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-kb2ww"
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.314203 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Feb 19 20:11:14 crc kubenswrapper[4813]: E0219 20:11:14.315060 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="229a6f2b-587e-4008-a4d7-9f0d26e3e446" containerName="manila-db-sync"
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.315080 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="229a6f2b-587e-4008-a4d7-9f0d26e3e446" containerName="manila-db-sync"
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.315343 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="229a6f2b-587e-4008-a4d7-9f0d26e3e446" containerName="manila-db-sync"
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.316869 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.324319 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.324323 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts"
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.324476 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-l5gc7"
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.324599 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.331906 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.337163 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.343601 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.364382 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.379113 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.452051 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7496cf9857-rsxjk"]
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.453786 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7496cf9857-rsxjk"
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.467586 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2883ddae-0938-4828-95c8-46934ade5fdd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0"
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.467619 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37e81308-2a91-47be-a816-37664edd2530-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0"
Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.467645 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e81308-2a91-47be-a816-37664edd2530-combined-ca-bundle\") pod \"manila-scheduler-0\"
(UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.467666 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2883ddae-0938-4828-95c8-46934ade5fdd-config-data\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.467702 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlrlb\" (UniqueName: \"kubernetes.io/projected/37e81308-2a91-47be-a816-37664edd2530-kube-api-access-vlrlb\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.467749 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37e81308-2a91-47be-a816-37664edd2530-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.467782 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2883ddae-0938-4828-95c8-46934ade5fdd-ceph\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.467817 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37e81308-2a91-47be-a816-37664edd2530-scripts\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " 
pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.467887 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2883ddae-0938-4828-95c8-46934ade5fdd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.467940 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2883ddae-0938-4828-95c8-46934ade5fdd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.467993 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/2883ddae-0938-4828-95c8-46934ade5fdd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.468218 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b99g8\" (UniqueName: \"kubernetes.io/projected/2883ddae-0938-4828-95c8-46934ade5fdd-kube-api-access-b99g8\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.468245 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2883ddae-0938-4828-95c8-46934ade5fdd-scripts\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " 
pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.468272 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37e81308-2a91-47be-a816-37664edd2530-config-data\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.470477 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7496cf9857-rsxjk"] Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.569743 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37e81308-2a91-47be-a816-37664edd2530-scripts\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.569816 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2883ddae-0938-4828-95c8-46934ade5fdd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.569848 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-config\") pod \"dnsmasq-dns-7496cf9857-rsxjk\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.569883 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-ovsdbserver-nb\") pod 
\"dnsmasq-dns-7496cf9857-rsxjk\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.569905 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2883ddae-0938-4828-95c8-46934ade5fdd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.569930 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/2883ddae-0938-4828-95c8-46934ade5fdd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.569988 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b99g8\" (UniqueName: \"kubernetes.io/projected/2883ddae-0938-4828-95c8-46934ade5fdd-kube-api-access-b99g8\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.570006 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2883ddae-0938-4828-95c8-46934ade5fdd-scripts\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.570030 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37e81308-2a91-47be-a816-37664edd2530-config-data\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 
20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.570049 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-dns-svc\") pod \"dnsmasq-dns-7496cf9857-rsxjk\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.570105 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2883ddae-0938-4828-95c8-46934ade5fdd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.570119 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37e81308-2a91-47be-a816-37664edd2530-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.570139 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e81308-2a91-47be-a816-37664edd2530-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.570156 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2883ddae-0938-4828-95c8-46934ade5fdd-config-data\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.570176 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pmmf\" (UniqueName: \"kubernetes.io/projected/0d70ae37-904c-4784-870e-a75d7042dd3e-kube-api-access-6pmmf\") pod \"dnsmasq-dns-7496cf9857-rsxjk\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.570209 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlrlb\" (UniqueName: \"kubernetes.io/projected/37e81308-2a91-47be-a816-37664edd2530-kube-api-access-vlrlb\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.570239 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-ovsdbserver-sb\") pod \"dnsmasq-dns-7496cf9857-rsxjk\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.570254 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37e81308-2a91-47be-a816-37664edd2530-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.570277 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2883ddae-0938-4828-95c8-46934ade5fdd-ceph\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.571075 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/2883ddae-0938-4828-95c8-46934ade5fdd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.577795 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2883ddae-0938-4828-95c8-46934ade5fdd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.577934 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/2883ddae-0938-4828-95c8-46934ade5fdd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.580673 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37e81308-2a91-47be-a816-37664edd2530-scripts\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.582210 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37e81308-2a91-47be-a816-37664edd2530-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.582383 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2883ddae-0938-4828-95c8-46934ade5fdd-scripts\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " 
pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.585868 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2883ddae-0938-4828-95c8-46934ade5fdd-ceph\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.587230 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2883ddae-0938-4828-95c8-46934ade5fdd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.588983 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37e81308-2a91-47be-a816-37664edd2530-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.592342 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2883ddae-0938-4828-95c8-46934ade5fdd-config-data\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.600823 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37e81308-2a91-47be-a816-37664edd2530-config-data\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.606356 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/37e81308-2a91-47be-a816-37664edd2530-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.612321 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b99g8\" (UniqueName: \"kubernetes.io/projected/2883ddae-0938-4828-95c8-46934ade5fdd-kube-api-access-b99g8\") pod \"manila-share-share1-0\" (UID: \"2883ddae-0938-4828-95c8-46934ade5fdd\") " pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.612666 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlrlb\" (UniqueName: \"kubernetes.io/projected/37e81308-2a91-47be-a816-37664edd2530-kube-api-access-vlrlb\") pod \"manila-scheduler-0\" (UID: \"37e81308-2a91-47be-a816-37664edd2530\") " pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.655540 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.671516 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-config\") pod \"dnsmasq-dns-7496cf9857-rsxjk\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.671567 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-ovsdbserver-nb\") pod \"dnsmasq-dns-7496cf9857-rsxjk\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.671645 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-dns-svc\") pod \"dnsmasq-dns-7496cf9857-rsxjk\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.671715 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pmmf\" (UniqueName: \"kubernetes.io/projected/0d70ae37-904c-4784-870e-a75d7042dd3e-kube-api-access-6pmmf\") pod \"dnsmasq-dns-7496cf9857-rsxjk\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.671754 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-ovsdbserver-sb\") pod \"dnsmasq-dns-7496cf9857-rsxjk\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 
20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.672569 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-ovsdbserver-sb\") pod \"dnsmasq-dns-7496cf9857-rsxjk\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.673069 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-config\") pod \"dnsmasq-dns-7496cf9857-rsxjk\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.673321 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.674320 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-ovsdbserver-nb\") pod \"dnsmasq-dns-7496cf9857-rsxjk\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.674886 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-dns-svc\") pod \"dnsmasq-dns-7496cf9857-rsxjk\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.675901 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.686526 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 19 20:11:14 crc kubenswrapper[4813]: 
I0219 20:11:14.686619 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.689000 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.689515 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pmmf\" (UniqueName: \"kubernetes.io/projected/0d70ae37-904c-4784-870e-a75d7042dd3e-kube-api-access-6pmmf\") pod \"dnsmasq-dns-7496cf9857-rsxjk\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.773546 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-logs\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.773939 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-scripts\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.774046 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qsjg\" (UniqueName: \"kubernetes.io/projected/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-kube-api-access-5qsjg\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.774123 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-config-data-custom\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.774200 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-etc-machine-id\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.774231 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-config-data\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.774286 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.778888 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.879354 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-logs\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.879397 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-scripts\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.879452 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qsjg\" (UniqueName: \"kubernetes.io/projected/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-kube-api-access-5qsjg\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.879502 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-config-data-custom\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.879555 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-etc-machine-id\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.879573 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-config-data\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.879611 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.880251 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-logs\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.882009 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-etc-machine-id\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.886859 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-config-data-custom\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.888727 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-scripts\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.891535 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.900501 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qsjg\" (UniqueName: \"kubernetes.io/projected/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-kube-api-access-5qsjg\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:14 crc kubenswrapper[4813]: I0219 20:11:14.910412 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f27dba-b24f-41f7-ac4e-75e1f3dfdc39-config-data\") pod \"manila-api-0\" (UID: \"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39\") " pod="openstack/manila-api-0" Feb 19 20:11:15 crc kubenswrapper[4813]: I0219 20:11:15.140022 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 19 20:11:15 crc kubenswrapper[4813]: I0219 20:11:15.448078 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 19 20:11:15 crc kubenswrapper[4813]: I0219 20:11:15.524861 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 19 20:11:15 crc kubenswrapper[4813]: I0219 20:11:15.595426 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7496cf9857-rsxjk"] Feb 19 20:11:15 crc kubenswrapper[4813]: W0219 20:11:15.620637 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d70ae37_904c_4784_870e_a75d7042dd3e.slice/crio-a07ab10c961e187a51216cc5171b8aa92886fe4a006f69d032c5dffbb5e84697 WatchSource:0}: Error finding container a07ab10c961e187a51216cc5171b8aa92886fe4a006f69d032c5dffbb5e84697: Status 404 returned error can't find the container with id a07ab10c961e187a51216cc5171b8aa92886fe4a006f69d032c5dffbb5e84697 Feb 19 20:11:15 crc kubenswrapper[4813]: I0219 20:11:15.952594 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 19 20:11:15 crc kubenswrapper[4813]: W0219 20:11:15.983353 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59f27dba_b24f_41f7_ac4e_75e1f3dfdc39.slice/crio-12acfd58f37be3e7efa51af5dae787cddfed1ec530ba2a25bf4d3a7a2bbecd0c WatchSource:0}: Error finding container 12acfd58f37be3e7efa51af5dae787cddfed1ec530ba2a25bf4d3a7a2bbecd0c: Status 404 returned error can't find the container with id 12acfd58f37be3e7efa51af5dae787cddfed1ec530ba2a25bf4d3a7a2bbecd0c Feb 19 20:11:15 crc kubenswrapper[4813]: I0219 20:11:15.989330 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"37e81308-2a91-47be-a816-37664edd2530","Type":"ContainerStarted","Data":"0d75996847224df2b6b95318fb602e1d1787d36ea39ae4888235baca5ba25a27"} Feb 19 20:11:15 crc kubenswrapper[4813]: I0219 20:11:15.991343 4813 generic.go:334] "Generic (PLEG): container finished" podID="0d70ae37-904c-4784-870e-a75d7042dd3e" containerID="82d6644b346ddadee286529f9014922a9862fb7b1724d143becf34d1b77f8a4b" exitCode=0 Feb 19 20:11:15 crc kubenswrapper[4813]: I0219 20:11:15.991383 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" event={"ID":"0d70ae37-904c-4784-870e-a75d7042dd3e","Type":"ContainerDied","Data":"82d6644b346ddadee286529f9014922a9862fb7b1724d143becf34d1b77f8a4b"} Feb 19 20:11:15 crc kubenswrapper[4813]: I0219 20:11:15.991399 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" event={"ID":"0d70ae37-904c-4784-870e-a75d7042dd3e","Type":"ContainerStarted","Data":"a07ab10c961e187a51216cc5171b8aa92886fe4a006f69d032c5dffbb5e84697"} Feb 19 20:11:15 crc kubenswrapper[4813]: I0219 20:11:15.994140 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"2883ddae-0938-4828-95c8-46934ade5fdd","Type":"ContainerStarted","Data":"3573d6b53fbc9f86acd75880256892f5990344f4ee34268cd7c971232c281b2c"} Feb 19 20:11:17 crc kubenswrapper[4813]: I0219 20:11:17.010807 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"37e81308-2a91-47be-a816-37664edd2530","Type":"ContainerStarted","Data":"938612c29e778768e5266a3e3c3e9cab026fab6855fe4ba4e7bcffd66a0de23d"} Feb 19 20:11:17 crc kubenswrapper[4813]: I0219 20:11:17.017197 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" event={"ID":"0d70ae37-904c-4784-870e-a75d7042dd3e","Type":"ContainerStarted","Data":"a98cb5800d9939ceb7f61f9247324bd9ff192ce05069d012221d58af5501b3c8"} Feb 19 20:11:17 crc 
kubenswrapper[4813]: I0219 20:11:17.019453 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:17 crc kubenswrapper[4813]: I0219 20:11:17.026068 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39","Type":"ContainerStarted","Data":"732fa9c778109f2157446fdcde8bd3822f5a4ca435f09cf9f078e30fae8ddda1"} Feb 19 20:11:17 crc kubenswrapper[4813]: I0219 20:11:17.026109 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39","Type":"ContainerStarted","Data":"12acfd58f37be3e7efa51af5dae787cddfed1ec530ba2a25bf4d3a7a2bbecd0c"} Feb 19 20:11:17 crc kubenswrapper[4813]: I0219 20:11:17.055826 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" podStartSLOduration=3.055807668 podStartE2EDuration="3.055807668s" podCreationTimestamp="2026-02-19 20:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:11:17.040440803 +0000 UTC m=+6096.265881344" watchObservedRunningTime="2026-02-19 20:11:17.055807668 +0000 UTC m=+6096.281248209" Feb 19 20:11:18 crc kubenswrapper[4813]: I0219 20:11:18.037102 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"37e81308-2a91-47be-a816-37664edd2530","Type":"ContainerStarted","Data":"ff65580175a83b22366f2a1527862671ac4d55dbf148455b721df6af62632e96"} Feb 19 20:11:18 crc kubenswrapper[4813]: I0219 20:11:18.039704 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"59f27dba-b24f-41f7-ac4e-75e1f3dfdc39","Type":"ContainerStarted","Data":"8a9ca4d01608eb72bbd88d5af85358b002de931dca4442a3d1548c7bce84e17e"} Feb 19 20:11:18 crc kubenswrapper[4813]: I0219 20:11:18.060967 4813 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.353890018 podStartE2EDuration="4.060940969s" podCreationTimestamp="2026-02-19 20:11:14 +0000 UTC" firstStartedPulling="2026-02-19 20:11:15.460386959 +0000 UTC m=+6094.685827500" lastFinishedPulling="2026-02-19 20:11:16.16743791 +0000 UTC m=+6095.392878451" observedRunningTime="2026-02-19 20:11:18.052979433 +0000 UTC m=+6097.278419974" watchObservedRunningTime="2026-02-19 20:11:18.060940969 +0000 UTC m=+6097.286381510" Feb 19 20:11:18 crc kubenswrapper[4813]: I0219 20:11:18.071966 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.071945559 podStartE2EDuration="4.071945559s" podCreationTimestamp="2026-02-19 20:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:11:18.069968148 +0000 UTC m=+6097.295408689" watchObservedRunningTime="2026-02-19 20:11:18.071945559 +0000 UTC m=+6097.297386100" Feb 19 20:11:19 crc kubenswrapper[4813]: I0219 20:11:19.051666 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 19 20:11:21 crc kubenswrapper[4813]: I0219 20:11:21.026370 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qhx27"] Feb 19 20:11:21 crc kubenswrapper[4813]: I0219 20:11:21.049067 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qhx27"] Feb 19 20:11:21 crc kubenswrapper[4813]: I0219 20:11:21.487352 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e7a363-18a4-4d92-ac76-7cb3eb644a55" path="/var/lib/kubelet/pods/29e7a363-18a4-4d92-ac76-7cb3eb644a55/volumes" Feb 19 20:11:23 crc kubenswrapper[4813]: I0219 20:11:23.101797 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" 
event={"ID":"2883ddae-0938-4828-95c8-46934ade5fdd","Type":"ContainerStarted","Data":"d719f8eb913ee569424d6d1732f428d61c44736d947f6b317f2d6747e64ab652"} Feb 19 20:11:23 crc kubenswrapper[4813]: I0219 20:11:23.102553 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"2883ddae-0938-4828-95c8-46934ade5fdd","Type":"ContainerStarted","Data":"b4654fe3eacfa2a41fbddb52a597d15f628c39828e4479fc8061d3181655e170"} Feb 19 20:11:23 crc kubenswrapper[4813]: I0219 20:11:23.154924 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.771732581 podStartE2EDuration="9.154900783s" podCreationTimestamp="2026-02-19 20:11:14 +0000 UTC" firstStartedPulling="2026-02-19 20:11:15.586089437 +0000 UTC m=+6094.811529978" lastFinishedPulling="2026-02-19 20:11:21.969257629 +0000 UTC m=+6101.194698180" observedRunningTime="2026-02-19 20:11:23.123009647 +0000 UTC m=+6102.348450208" watchObservedRunningTime="2026-02-19 20:11:23.154900783 +0000 UTC m=+6102.380341324" Feb 19 20:11:24 crc kubenswrapper[4813]: I0219 20:11:24.433491 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 20:11:24 crc kubenswrapper[4813]: I0219 20:11:24.656093 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 19 20:11:24 crc kubenswrapper[4813]: I0219 20:11:24.675803 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 19 20:11:24 crc kubenswrapper[4813]: I0219 20:11:24.781235 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:11:24 crc kubenswrapper[4813]: I0219 20:11:24.880428 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-765d77db77-p5vgn"] Feb 19 20:11:24 crc kubenswrapper[4813]: I0219 
20:11:24.880888 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-765d77db77-p5vgn" podUID="d4626039-bf53-4d57-b2a5-c6201bc3f776" containerName="dnsmasq-dns" containerID="cri-o://a808dfe40aceef27b28870fe173c50690c23b5c2a68f33a9d3ccbb82b75365a5" gracePeriod=10 Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.159160 4813 generic.go:334] "Generic (PLEG): container finished" podID="d4626039-bf53-4d57-b2a5-c6201bc3f776" containerID="a808dfe40aceef27b28870fe173c50690c23b5c2a68f33a9d3ccbb82b75365a5" exitCode=0 Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.159253 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-765d77db77-p5vgn" event={"ID":"d4626039-bf53-4d57-b2a5-c6201bc3f776","Type":"ContainerDied","Data":"a808dfe40aceef27b28870fe173c50690c23b5c2a68f33a9d3ccbb82b75365a5"} Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.405477 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-765d77db77-p5vgn" Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.493481 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-ovsdbserver-nb\") pod \"d4626039-bf53-4d57-b2a5-c6201bc3f776\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.493526 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsb6g\" (UniqueName: \"kubernetes.io/projected/d4626039-bf53-4d57-b2a5-c6201bc3f776-kube-api-access-dsb6g\") pod \"d4626039-bf53-4d57-b2a5-c6201bc3f776\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.493763 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-config\") pod \"d4626039-bf53-4d57-b2a5-c6201bc3f776\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.493844 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-ovsdbserver-sb\") pod \"d4626039-bf53-4d57-b2a5-c6201bc3f776\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.493894 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-dns-svc\") pod \"d4626039-bf53-4d57-b2a5-c6201bc3f776\" (UID: \"d4626039-bf53-4d57-b2a5-c6201bc3f776\") " Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.536638 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4626039-bf53-4d57-b2a5-c6201bc3f776-kube-api-access-dsb6g" (OuterVolumeSpecName: "kube-api-access-dsb6g") pod "d4626039-bf53-4d57-b2a5-c6201bc3f776" (UID: "d4626039-bf53-4d57-b2a5-c6201bc3f776"). InnerVolumeSpecName "kube-api-access-dsb6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.576453 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d4626039-bf53-4d57-b2a5-c6201bc3f776" (UID: "d4626039-bf53-4d57-b2a5-c6201bc3f776"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.587064 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d4626039-bf53-4d57-b2a5-c6201bc3f776" (UID: "d4626039-bf53-4d57-b2a5-c6201bc3f776"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.595835 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.595860 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.595869 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsb6g\" (UniqueName: \"kubernetes.io/projected/d4626039-bf53-4d57-b2a5-c6201bc3f776-kube-api-access-dsb6g\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.596557 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4626039-bf53-4d57-b2a5-c6201bc3f776" (UID: "d4626039-bf53-4d57-b2a5-c6201bc3f776"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.604388 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-config" (OuterVolumeSpecName: "config") pod "d4626039-bf53-4d57-b2a5-c6201bc3f776" (UID: "d4626039-bf53-4d57-b2a5-c6201bc3f776"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.697869 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:25 crc kubenswrapper[4813]: I0219 20:11:25.698190 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4626039-bf53-4d57-b2a5-c6201bc3f776-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:26 crc kubenswrapper[4813]: I0219 20:11:26.177018 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-765d77db77-p5vgn" event={"ID":"d4626039-bf53-4d57-b2a5-c6201bc3f776","Type":"ContainerDied","Data":"c06c0083897e0583da6a7dbe7adc7ee2e5d82b3697d98640567a3abf2de3bd03"} Feb 19 20:11:26 crc kubenswrapper[4813]: I0219 20:11:26.177068 4813 scope.go:117] "RemoveContainer" containerID="a808dfe40aceef27b28870fe173c50690c23b5c2a68f33a9d3ccbb82b75365a5" Feb 19 20:11:26 crc kubenswrapper[4813]: I0219 20:11:26.177079 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-765d77db77-p5vgn" Feb 19 20:11:26 crc kubenswrapper[4813]: I0219 20:11:26.205981 4813 scope.go:117] "RemoveContainer" containerID="efe4a472b96468bd335fd0427fab2c8896f70f07b28fb19b613718a3b6192f79" Feb 19 20:11:26 crc kubenswrapper[4813]: I0219 20:11:26.219216 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-765d77db77-p5vgn"] Feb 19 20:11:26 crc kubenswrapper[4813]: I0219 20:11:26.231953 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-765d77db77-p5vgn"] Feb 19 20:11:26 crc kubenswrapper[4813]: I0219 20:11:26.503040 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:11:26 crc kubenswrapper[4813]: I0219 20:11:26.503324 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerName="ceilometer-central-agent" containerID="cri-o://0b2be6ce7f5df923e89306a55d5a62a7b39f7855152875dddf2ac94818eeec53" gracePeriod=30 Feb 19 20:11:26 crc kubenswrapper[4813]: I0219 20:11:26.503390 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerName="proxy-httpd" containerID="cri-o://eed6e79062684ad879223a3be56bf0d582ba151d6962b0350c0365883006847c" gracePeriod=30 Feb 19 20:11:26 crc kubenswrapper[4813]: I0219 20:11:26.503450 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerName="sg-core" containerID="cri-o://05cc210cea1a3ea95db35d367b8a465b2fa3f52072fbe5710267e486ac5b20d7" gracePeriod=30 Feb 19 20:11:26 crc kubenswrapper[4813]: I0219 20:11:26.503499 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" 
containerName="ceilometer-notification-agent" containerID="cri-o://e9151e03f171f27464c28aefba34e9a30036b369dd7aa5a1c58e439c3882d65a" gracePeriod=30 Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.192703 4813 generic.go:334] "Generic (PLEG): container finished" podID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerID="eed6e79062684ad879223a3be56bf0d582ba151d6962b0350c0365883006847c" exitCode=0 Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.193047 4813 generic.go:334] "Generic (PLEG): container finished" podID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerID="05cc210cea1a3ea95db35d367b8a465b2fa3f52072fbe5710267e486ac5b20d7" exitCode=2 Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.193059 4813 generic.go:334] "Generic (PLEG): container finished" podID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerID="0b2be6ce7f5df923e89306a55d5a62a7b39f7855152875dddf2ac94818eeec53" exitCode=0 Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.192751 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ece3b6f-f7a2-4329-9ea0-b991941b9da1","Type":"ContainerDied","Data":"eed6e79062684ad879223a3be56bf0d582ba151d6962b0350c0365883006847c"} Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.193113 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ece3b6f-f7a2-4329-9ea0-b991941b9da1","Type":"ContainerDied","Data":"05cc210cea1a3ea95db35d367b8a465b2fa3f52072fbe5710267e486ac5b20d7"} Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.193129 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ece3b6f-f7a2-4329-9ea0-b991941b9da1","Type":"ContainerDied","Data":"0b2be6ce7f5df923e89306a55d5a62a7b39f7855152875dddf2ac94818eeec53"} Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.490690 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4626039-bf53-4d57-b2a5-c6201bc3f776" 
path="/var/lib/kubelet/pods/d4626039-bf53-4d57-b2a5-c6201bc3f776/volumes" Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.725224 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.840752 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwnnq\" (UniqueName: \"kubernetes.io/projected/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-kube-api-access-vwnnq\") pod \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.841214 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-run-httpd\") pod \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.841253 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-config-data\") pod \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.841328 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-scripts\") pod \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.841444 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-combined-ca-bundle\") pod \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") 
" Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.841485 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-sg-core-conf-yaml\") pod \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.841560 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-log-httpd\") pod \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\" (UID: \"0ece3b6f-f7a2-4329-9ea0-b991941b9da1\") " Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.842622 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0ece3b6f-f7a2-4329-9ea0-b991941b9da1" (UID: "0ece3b6f-f7a2-4329-9ea0-b991941b9da1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.843660 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0ece3b6f-f7a2-4329-9ea0-b991941b9da1" (UID: "0ece3b6f-f7a2-4329-9ea0-b991941b9da1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.847984 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-scripts" (OuterVolumeSpecName: "scripts") pod "0ece3b6f-f7a2-4329-9ea0-b991941b9da1" (UID: "0ece3b6f-f7a2-4329-9ea0-b991941b9da1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.849781 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-kube-api-access-vwnnq" (OuterVolumeSpecName: "kube-api-access-vwnnq") pod "0ece3b6f-f7a2-4329-9ea0-b991941b9da1" (UID: "0ece3b6f-f7a2-4329-9ea0-b991941b9da1"). InnerVolumeSpecName "kube-api-access-vwnnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.878358 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0ece3b6f-f7a2-4329-9ea0-b991941b9da1" (UID: "0ece3b6f-f7a2-4329-9ea0-b991941b9da1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.944452 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.944485 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.944496 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwnnq\" (UniqueName: \"kubernetes.io/projected/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-kube-api-access-vwnnq\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.944504 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-run-httpd\") on 
node \"crc\" DevicePath \"\"" Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.944514 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:27 crc kubenswrapper[4813]: I0219 20:11:27.961958 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ece3b6f-f7a2-4329-9ea0-b991941b9da1" (UID: "0ece3b6f-f7a2-4329-9ea0-b991941b9da1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.020644 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-config-data" (OuterVolumeSpecName: "config-data") pod "0ece3b6f-f7a2-4329-9ea0-b991941b9da1" (UID: "0ece3b6f-f7a2-4329-9ea0-b991941b9da1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.046461 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.046498 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ece3b6f-f7a2-4329-9ea0-b991941b9da1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.222793 4813 generic.go:334] "Generic (PLEG): container finished" podID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerID="e9151e03f171f27464c28aefba34e9a30036b369dd7aa5a1c58e439c3882d65a" exitCode=0 Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.222830 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ece3b6f-f7a2-4329-9ea0-b991941b9da1","Type":"ContainerDied","Data":"e9151e03f171f27464c28aefba34e9a30036b369dd7aa5a1c58e439c3882d65a"} Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.222857 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ece3b6f-f7a2-4329-9ea0-b991941b9da1","Type":"ContainerDied","Data":"6113eb0362b356aefd2ba4dd0c4c6adc06f13519a36ae116a33c1d1bedf9a4ab"} Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.222874 4813 scope.go:117] "RemoveContainer" containerID="eed6e79062684ad879223a3be56bf0d582ba151d6962b0350c0365883006847c" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.222988 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.266139 4813 scope.go:117] "RemoveContainer" containerID="05cc210cea1a3ea95db35d367b8a465b2fa3f52072fbe5710267e486ac5b20d7" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.289413 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.294094 4813 scope.go:117] "RemoveContainer" containerID="e9151e03f171f27464c28aefba34e9a30036b369dd7aa5a1c58e439c3882d65a" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.317234 4813 scope.go:117] "RemoveContainer" containerID="0b2be6ce7f5df923e89306a55d5a62a7b39f7855152875dddf2ac94818eeec53" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.337020 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.345404 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:11:28 crc kubenswrapper[4813]: E0219 20:11:28.345970 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerName="sg-core" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.345992 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerName="sg-core" Feb 19 20:11:28 crc kubenswrapper[4813]: E0219 20:11:28.346008 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4626039-bf53-4d57-b2a5-c6201bc3f776" containerName="dnsmasq-dns" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.346015 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4626039-bf53-4d57-b2a5-c6201bc3f776" containerName="dnsmasq-dns" Feb 19 20:11:28 crc kubenswrapper[4813]: E0219 20:11:28.346026 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4626039-bf53-4d57-b2a5-c6201bc3f776" containerName="init" Feb 
19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.346032 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4626039-bf53-4d57-b2a5-c6201bc3f776" containerName="init" Feb 19 20:11:28 crc kubenswrapper[4813]: E0219 20:11:28.346043 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerName="proxy-httpd" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.346048 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerName="proxy-httpd" Feb 19 20:11:28 crc kubenswrapper[4813]: E0219 20:11:28.346073 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerName="ceilometer-notification-agent" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.346081 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerName="ceilometer-notification-agent" Feb 19 20:11:28 crc kubenswrapper[4813]: E0219 20:11:28.346092 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerName="ceilometer-central-agent" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.346099 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerName="ceilometer-central-agent" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.346273 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerName="ceilometer-notification-agent" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.346292 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerName="proxy-httpd" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.346308 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" 
containerName="sg-core" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.346320 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4626039-bf53-4d57-b2a5-c6201bc3f776" containerName="dnsmasq-dns" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.346335 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" containerName="ceilometer-central-agent" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.348326 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.351910 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.352092 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.358899 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.381797 4813 scope.go:117] "RemoveContainer" containerID="eed6e79062684ad879223a3be56bf0d582ba151d6962b0350c0365883006847c" Feb 19 20:11:28 crc kubenswrapper[4813]: E0219 20:11:28.384046 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eed6e79062684ad879223a3be56bf0d582ba151d6962b0350c0365883006847c\": container with ID starting with eed6e79062684ad879223a3be56bf0d582ba151d6962b0350c0365883006847c not found: ID does not exist" containerID="eed6e79062684ad879223a3be56bf0d582ba151d6962b0350c0365883006847c" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.384086 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eed6e79062684ad879223a3be56bf0d582ba151d6962b0350c0365883006847c"} err="failed to get container status 
\"eed6e79062684ad879223a3be56bf0d582ba151d6962b0350c0365883006847c\": rpc error: code = NotFound desc = could not find container \"eed6e79062684ad879223a3be56bf0d582ba151d6962b0350c0365883006847c\": container with ID starting with eed6e79062684ad879223a3be56bf0d582ba151d6962b0350c0365883006847c not found: ID does not exist" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.384111 4813 scope.go:117] "RemoveContainer" containerID="05cc210cea1a3ea95db35d367b8a465b2fa3f52072fbe5710267e486ac5b20d7" Feb 19 20:11:28 crc kubenswrapper[4813]: E0219 20:11:28.384448 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05cc210cea1a3ea95db35d367b8a465b2fa3f52072fbe5710267e486ac5b20d7\": container with ID starting with 05cc210cea1a3ea95db35d367b8a465b2fa3f52072fbe5710267e486ac5b20d7 not found: ID does not exist" containerID="05cc210cea1a3ea95db35d367b8a465b2fa3f52072fbe5710267e486ac5b20d7" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.384491 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05cc210cea1a3ea95db35d367b8a465b2fa3f52072fbe5710267e486ac5b20d7"} err="failed to get container status \"05cc210cea1a3ea95db35d367b8a465b2fa3f52072fbe5710267e486ac5b20d7\": rpc error: code = NotFound desc = could not find container \"05cc210cea1a3ea95db35d367b8a465b2fa3f52072fbe5710267e486ac5b20d7\": container with ID starting with 05cc210cea1a3ea95db35d367b8a465b2fa3f52072fbe5710267e486ac5b20d7 not found: ID does not exist" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.384515 4813 scope.go:117] "RemoveContainer" containerID="e9151e03f171f27464c28aefba34e9a30036b369dd7aa5a1c58e439c3882d65a" Feb 19 20:11:28 crc kubenswrapper[4813]: E0219 20:11:28.384807 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e9151e03f171f27464c28aefba34e9a30036b369dd7aa5a1c58e439c3882d65a\": container with ID starting with e9151e03f171f27464c28aefba34e9a30036b369dd7aa5a1c58e439c3882d65a not found: ID does not exist" containerID="e9151e03f171f27464c28aefba34e9a30036b369dd7aa5a1c58e439c3882d65a" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.384834 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9151e03f171f27464c28aefba34e9a30036b369dd7aa5a1c58e439c3882d65a"} err="failed to get container status \"e9151e03f171f27464c28aefba34e9a30036b369dd7aa5a1c58e439c3882d65a\": rpc error: code = NotFound desc = could not find container \"e9151e03f171f27464c28aefba34e9a30036b369dd7aa5a1c58e439c3882d65a\": container with ID starting with e9151e03f171f27464c28aefba34e9a30036b369dd7aa5a1c58e439c3882d65a not found: ID does not exist" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.384848 4813 scope.go:117] "RemoveContainer" containerID="0b2be6ce7f5df923e89306a55d5a62a7b39f7855152875dddf2ac94818eeec53" Feb 19 20:11:28 crc kubenswrapper[4813]: E0219 20:11:28.385055 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b2be6ce7f5df923e89306a55d5a62a7b39f7855152875dddf2ac94818eeec53\": container with ID starting with 0b2be6ce7f5df923e89306a55d5a62a7b39f7855152875dddf2ac94818eeec53 not found: ID does not exist" containerID="0b2be6ce7f5df923e89306a55d5a62a7b39f7855152875dddf2ac94818eeec53" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.385076 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2be6ce7f5df923e89306a55d5a62a7b39f7855152875dddf2ac94818eeec53"} err="failed to get container status \"0b2be6ce7f5df923e89306a55d5a62a7b39f7855152875dddf2ac94818eeec53\": rpc error: code = NotFound desc = could not find container \"0b2be6ce7f5df923e89306a55d5a62a7b39f7855152875dddf2ac94818eeec53\": container with ID 
starting with 0b2be6ce7f5df923e89306a55d5a62a7b39f7855152875dddf2ac94818eeec53 not found: ID does not exist" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.455587 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9xnm\" (UniqueName: \"kubernetes.io/projected/04f3db0a-bec0-4b42-8574-2c9d178ef929-kube-api-access-g9xnm\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.455644 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.455675 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-scripts\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.456130 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f3db0a-bec0-4b42-8574-2c9d178ef929-run-httpd\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.456478 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f3db0a-bec0-4b42-8574-2c9d178ef929-log-httpd\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc 
kubenswrapper[4813]: I0219 20:11:28.456541 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.456758 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-config-data\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.558218 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f3db0a-bec0-4b42-8574-2c9d178ef929-run-httpd\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.558459 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f3db0a-bec0-4b42-8574-2c9d178ef929-log-httpd\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.558494 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.558549 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-config-data\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.558608 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9xnm\" (UniqueName: \"kubernetes.io/projected/04f3db0a-bec0-4b42-8574-2c9d178ef929-kube-api-access-g9xnm\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.558647 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.558672 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-scripts\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.559462 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f3db0a-bec0-4b42-8574-2c9d178ef929-log-httpd\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.559604 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f3db0a-bec0-4b42-8574-2c9d178ef929-run-httpd\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.563034 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.563464 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-scripts\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.563681 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-config-data\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.570748 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.579826 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9xnm\" (UniqueName: \"kubernetes.io/projected/04f3db0a-bec0-4b42-8574-2c9d178ef929-kube-api-access-g9xnm\") pod \"ceilometer-0\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.685170 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:11:28 crc kubenswrapper[4813]: I0219 20:11:28.746600 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:11:29 crc kubenswrapper[4813]: I0219 20:11:29.165647 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:11:29 crc kubenswrapper[4813]: W0219 20:11:29.171889 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04f3db0a_bec0_4b42_8574_2c9d178ef929.slice/crio-7fa2ae0de6bebe26894e291ee9e9481151d5317d1515dd99bbd71948d198a9b0 WatchSource:0}: Error finding container 7fa2ae0de6bebe26894e291ee9e9481151d5317d1515dd99bbd71948d198a9b0: Status 404 returned error can't find the container with id 7fa2ae0de6bebe26894e291ee9e9481151d5317d1515dd99bbd71948d198a9b0 Feb 19 20:11:29 crc kubenswrapper[4813]: I0219 20:11:29.235877 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f3db0a-bec0-4b42-8574-2c9d178ef929","Type":"ContainerStarted","Data":"7fa2ae0de6bebe26894e291ee9e9481151d5317d1515dd99bbd71948d198a9b0"} Feb 19 20:11:29 crc kubenswrapper[4813]: I0219 20:11:29.484496 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ece3b6f-f7a2-4329-9ea0-b991941b9da1" path="/var/lib/kubelet/pods/0ece3b6f-f7a2-4329-9ea0-b991941b9da1/volumes" Feb 19 20:11:30 crc kubenswrapper[4813]: I0219 20:11:30.248596 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f3db0a-bec0-4b42-8574-2c9d178ef929","Type":"ContainerStarted","Data":"c1c4ed58c1c5504e994bdae40fa758b5bd0afda63f78dd2a30b2fc6cefb9fb11"} Feb 19 20:11:31 crc kubenswrapper[4813]: I0219 20:11:31.262914 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"04f3db0a-bec0-4b42-8574-2c9d178ef929","Type":"ContainerStarted","Data":"22557023d02dace0c7850f7bbf7ea640505bae7ae8fabe2353467e43a2df6cc1"} Feb 19 20:11:31 crc kubenswrapper[4813]: I0219 20:11:31.263681 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f3db0a-bec0-4b42-8574-2c9d178ef929","Type":"ContainerStarted","Data":"518dd1dd6e0a6e13017d3729277144575eea3038e7722ff3e563a8db79fc70eb"} Feb 19 20:11:33 crc kubenswrapper[4813]: I0219 20:11:33.284166 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f3db0a-bec0-4b42-8574-2c9d178ef929","Type":"ContainerStarted","Data":"e001fa9e7bab28f6125e9a2e0a0ecbeb6b842764b881ce1a6d065c370f2066cf"} Feb 19 20:11:33 crc kubenswrapper[4813]: I0219 20:11:33.284532 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="ceilometer-central-agent" containerID="cri-o://c1c4ed58c1c5504e994bdae40fa758b5bd0afda63f78dd2a30b2fc6cefb9fb11" gracePeriod=30 Feb 19 20:11:33 crc kubenswrapper[4813]: I0219 20:11:33.284764 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 20:11:33 crc kubenswrapper[4813]: I0219 20:11:33.284771 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="proxy-httpd" containerID="cri-o://e001fa9e7bab28f6125e9a2e0a0ecbeb6b842764b881ce1a6d065c370f2066cf" gracePeriod=30 Feb 19 20:11:33 crc kubenswrapper[4813]: I0219 20:11:33.284903 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="sg-core" containerID="cri-o://22557023d02dace0c7850f7bbf7ea640505bae7ae8fabe2353467e43a2df6cc1" gracePeriod=30 Feb 19 20:11:33 crc kubenswrapper[4813]: I0219 
20:11:33.285034 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="ceilometer-notification-agent" containerID="cri-o://518dd1dd6e0a6e13017d3729277144575eea3038e7722ff3e563a8db79fc70eb" gracePeriod=30 Feb 19 20:11:33 crc kubenswrapper[4813]: I0219 20:11:33.317345 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.662901727 podStartE2EDuration="5.317331764s" podCreationTimestamp="2026-02-19 20:11:28 +0000 UTC" firstStartedPulling="2026-02-19 20:11:29.174673165 +0000 UTC m=+6108.400113726" lastFinishedPulling="2026-02-19 20:11:32.829103202 +0000 UTC m=+6112.054543763" observedRunningTime="2026-02-19 20:11:33.316520039 +0000 UTC m=+6112.541960590" watchObservedRunningTime="2026-02-19 20:11:33.317331764 +0000 UTC m=+6112.542772305" Feb 19 20:11:34 crc kubenswrapper[4813]: I0219 20:11:34.297770 4813 generic.go:334] "Generic (PLEG): container finished" podID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerID="e001fa9e7bab28f6125e9a2e0a0ecbeb6b842764b881ce1a6d065c370f2066cf" exitCode=0 Feb 19 20:11:34 crc kubenswrapper[4813]: I0219 20:11:34.298164 4813 generic.go:334] "Generic (PLEG): container finished" podID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerID="22557023d02dace0c7850f7bbf7ea640505bae7ae8fabe2353467e43a2df6cc1" exitCode=2 Feb 19 20:11:34 crc kubenswrapper[4813]: I0219 20:11:34.298181 4813 generic.go:334] "Generic (PLEG): container finished" podID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerID="518dd1dd6e0a6e13017d3729277144575eea3038e7722ff3e563a8db79fc70eb" exitCode=0 Feb 19 20:11:34 crc kubenswrapper[4813]: I0219 20:11:34.297999 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f3db0a-bec0-4b42-8574-2c9d178ef929","Type":"ContainerDied","Data":"e001fa9e7bab28f6125e9a2e0a0ecbeb6b842764b881ce1a6d065c370f2066cf"} Feb 19 
20:11:34 crc kubenswrapper[4813]: I0219 20:11:34.298220 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f3db0a-bec0-4b42-8574-2c9d178ef929","Type":"ContainerDied","Data":"22557023d02dace0c7850f7bbf7ea640505bae7ae8fabe2353467e43a2df6cc1"} Feb 19 20:11:34 crc kubenswrapper[4813]: I0219 20:11:34.298240 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f3db0a-bec0-4b42-8574-2c9d178ef929","Type":"ContainerDied","Data":"518dd1dd6e0a6e13017d3729277144575eea3038e7722ff3e563a8db79fc70eb"} Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.312935 4813 generic.go:334] "Generic (PLEG): container finished" podID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerID="c1c4ed58c1c5504e994bdae40fa758b5bd0afda63f78dd2a30b2fc6cefb9fb11" exitCode=0 Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.313278 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f3db0a-bec0-4b42-8574-2c9d178ef929","Type":"ContainerDied","Data":"c1c4ed58c1c5504e994bdae40fa758b5bd0afda63f78dd2a30b2fc6cefb9fb11"} Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.629175 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.768547 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-config-data\") pod \"04f3db0a-bec0-4b42-8574-2c9d178ef929\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.768685 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9xnm\" (UniqueName: \"kubernetes.io/projected/04f3db0a-bec0-4b42-8574-2c9d178ef929-kube-api-access-g9xnm\") pod \"04f3db0a-bec0-4b42-8574-2c9d178ef929\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.768794 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f3db0a-bec0-4b42-8574-2c9d178ef929-log-httpd\") pod \"04f3db0a-bec0-4b42-8574-2c9d178ef929\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.768863 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-scripts\") pod \"04f3db0a-bec0-4b42-8574-2c9d178ef929\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.768886 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-combined-ca-bundle\") pod \"04f3db0a-bec0-4b42-8574-2c9d178ef929\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.768934 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/04f3db0a-bec0-4b42-8574-2c9d178ef929-run-httpd\") pod \"04f3db0a-bec0-4b42-8574-2c9d178ef929\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.768966 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-sg-core-conf-yaml\") pod \"04f3db0a-bec0-4b42-8574-2c9d178ef929\" (UID: \"04f3db0a-bec0-4b42-8574-2c9d178ef929\") " Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.771221 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04f3db0a-bec0-4b42-8574-2c9d178ef929-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "04f3db0a-bec0-4b42-8574-2c9d178ef929" (UID: "04f3db0a-bec0-4b42-8574-2c9d178ef929"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.771467 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04f3db0a-bec0-4b42-8574-2c9d178ef929-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "04f3db0a-bec0-4b42-8574-2c9d178ef929" (UID: "04f3db0a-bec0-4b42-8574-2c9d178ef929"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.777062 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04f3db0a-bec0-4b42-8574-2c9d178ef929-kube-api-access-g9xnm" (OuterVolumeSpecName: "kube-api-access-g9xnm") pod "04f3db0a-bec0-4b42-8574-2c9d178ef929" (UID: "04f3db0a-bec0-4b42-8574-2c9d178ef929"). InnerVolumeSpecName "kube-api-access-g9xnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.778169 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-scripts" (OuterVolumeSpecName: "scripts") pod "04f3db0a-bec0-4b42-8574-2c9d178ef929" (UID: "04f3db0a-bec0-4b42-8574-2c9d178ef929"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.802650 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "04f3db0a-bec0-4b42-8574-2c9d178ef929" (UID: "04f3db0a-bec0-4b42-8574-2c9d178ef929"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.858234 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04f3db0a-bec0-4b42-8574-2c9d178ef929" (UID: "04f3db0a-bec0-4b42-8574-2c9d178ef929"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.871077 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f3db0a-bec0-4b42-8574-2c9d178ef929-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.871106 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.871117 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9xnm\" (UniqueName: \"kubernetes.io/projected/04f3db0a-bec0-4b42-8574-2c9d178ef929-kube-api-access-g9xnm\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.871127 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04f3db0a-bec0-4b42-8574-2c9d178ef929-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.871136 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.871143 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.878321 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-config-data" (OuterVolumeSpecName: "config-data") pod "04f3db0a-bec0-4b42-8574-2c9d178ef929" (UID: "04f3db0a-bec0-4b42-8574-2c9d178ef929"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:11:35 crc kubenswrapper[4813]: I0219 20:11:35.973181 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f3db0a-bec0-4b42-8574-2c9d178ef929-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.149298 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.323568 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04f3db0a-bec0-4b42-8574-2c9d178ef929","Type":"ContainerDied","Data":"7fa2ae0de6bebe26894e291ee9e9481151d5317d1515dd99bbd71948d198a9b0"} Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.323630 4813 scope.go:117] "RemoveContainer" containerID="e001fa9e7bab28f6125e9a2e0a0ecbeb6b842764b881ce1a6d065c370f2066cf" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.323650 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.326708 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.345818 4813 scope.go:117] "RemoveContainer" containerID="22557023d02dace0c7850f7bbf7ea640505bae7ae8fabe2353467e43a2df6cc1" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.379083 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.386198 4813 scope.go:117] "RemoveContainer" containerID="518dd1dd6e0a6e13017d3729277144575eea3038e7722ff3e563a8db79fc70eb" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.400261 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.415787 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:11:36 crc kubenswrapper[4813]: E0219 20:11:36.416403 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="ceilometer-central-agent" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.416431 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="ceilometer-central-agent" Feb 19 20:11:36 crc kubenswrapper[4813]: E0219 20:11:36.416452 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="ceilometer-notification-agent" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.416461 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="ceilometer-notification-agent" Feb 19 20:11:36 crc kubenswrapper[4813]: E0219 20:11:36.416499 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="proxy-httpd" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.416507 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="proxy-httpd" Feb 19 20:11:36 crc kubenswrapper[4813]: E0219 20:11:36.416529 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="sg-core" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.416538 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="sg-core" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.416786 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="ceilometer-central-agent" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.416817 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="ceilometer-notification-agent" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.416837 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="proxy-httpd" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.416846 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" containerName="sg-core" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.419250 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.419377 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.426203 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.426500 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.426710 4813 scope.go:117] "RemoveContainer" containerID="c1c4ed58c1c5504e994bdae40fa758b5bd0afda63f78dd2a30b2fc6cefb9fb11" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.444646 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.487492 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2fc0f1f-d05a-4280-93f3-672cbd77af00-log-httpd\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.487562 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2fc0f1f-d05a-4280-93f3-672cbd77af00-scripts\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.487631 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2fc0f1f-d05a-4280-93f3-672cbd77af00-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.487664 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2fc0f1f-d05a-4280-93f3-672cbd77af00-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.487768 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2fc0f1f-d05a-4280-93f3-672cbd77af00-run-httpd\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.487815 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2fc0f1f-d05a-4280-93f3-672cbd77af00-config-data\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.487833 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2885c\" (UniqueName: \"kubernetes.io/projected/c2fc0f1f-d05a-4280-93f3-672cbd77af00-kube-api-access-2885c\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.589910 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2fc0f1f-d05a-4280-93f3-672cbd77af00-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.590246 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2fc0f1f-d05a-4280-93f3-672cbd77af00-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.590370 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2fc0f1f-d05a-4280-93f3-672cbd77af00-run-httpd\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.590421 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2fc0f1f-d05a-4280-93f3-672cbd77af00-config-data\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.590441 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2885c\" (UniqueName: \"kubernetes.io/projected/c2fc0f1f-d05a-4280-93f3-672cbd77af00-kube-api-access-2885c\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.590542 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2fc0f1f-d05a-4280-93f3-672cbd77af00-log-httpd\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.590584 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2fc0f1f-d05a-4280-93f3-672cbd77af00-scripts\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.590907 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c2fc0f1f-d05a-4280-93f3-672cbd77af00-log-httpd\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.591521 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2fc0f1f-d05a-4280-93f3-672cbd77af00-run-httpd\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.594533 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2fc0f1f-d05a-4280-93f3-672cbd77af00-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.594775 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2fc0f1f-d05a-4280-93f3-672cbd77af00-scripts\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.596928 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2fc0f1f-d05a-4280-93f3-672cbd77af00-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.598410 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2fc0f1f-d05a-4280-93f3-672cbd77af00-config-data\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.609056 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2885c\" (UniqueName: \"kubernetes.io/projected/c2fc0f1f-d05a-4280-93f3-672cbd77af00-kube-api-access-2885c\") pod \"ceilometer-0\" (UID: \"c2fc0f1f-d05a-4280-93f3-672cbd77af00\") " pod="openstack/ceilometer-0" Feb 19 20:11:36 crc kubenswrapper[4813]: I0219 20:11:36.752736 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:11:37 crc kubenswrapper[4813]: I0219 20:11:37.264885 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:11:37 crc kubenswrapper[4813]: I0219 20:11:37.357101 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2fc0f1f-d05a-4280-93f3-672cbd77af00","Type":"ContainerStarted","Data":"59cf6913c9997b13c2254e9927ffc41793e1fd929e80d068cc465aece666782e"} Feb 19 20:11:37 crc kubenswrapper[4813]: I0219 20:11:37.483890 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04f3db0a-bec0-4b42-8574-2c9d178ef929" path="/var/lib/kubelet/pods/04f3db0a-bec0-4b42-8574-2c9d178ef929/volumes" Feb 19 20:11:38 crc kubenswrapper[4813]: I0219 20:11:38.367005 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2fc0f1f-d05a-4280-93f3-672cbd77af00","Type":"ContainerStarted","Data":"b030e15a873e6d65b5b1a0c27843a9758a8ec6d4c72c04120fe462f513729d73"} Feb 19 20:11:39 crc kubenswrapper[4813]: I0219 20:11:39.378331 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2fc0f1f-d05a-4280-93f3-672cbd77af00","Type":"ContainerStarted","Data":"e623b42b485b20f225153c2b484d0236dbc1c954874ed273923672eed7920eda"} Feb 19 20:11:39 crc kubenswrapper[4813]: I0219 20:11:39.379205 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c2fc0f1f-d05a-4280-93f3-672cbd77af00","Type":"ContainerStarted","Data":"d61ae0dfa35ea867b5494f8632bd453cf9b683e886b90b6042bf37b02c33c1b1"} Feb 19 20:11:41 crc kubenswrapper[4813]: I0219 20:11:41.399552 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2fc0f1f-d05a-4280-93f3-672cbd77af00","Type":"ContainerStarted","Data":"bc956994e2865516dce6465ef166e336a7bc9c5b0e0d9e92b61db8890aa4a834"} Feb 19 20:11:41 crc kubenswrapper[4813]: I0219 20:11:41.400210 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 20:11:41 crc kubenswrapper[4813]: I0219 20:11:41.427457 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7372535409999998 podStartE2EDuration="5.427441104s" podCreationTimestamp="2026-02-19 20:11:36 +0000 UTC" firstStartedPulling="2026-02-19 20:11:37.289100258 +0000 UTC m=+6116.514540799" lastFinishedPulling="2026-02-19 20:11:40.979287821 +0000 UTC m=+6120.204728362" observedRunningTime="2026-02-19 20:11:41.4244266 +0000 UTC m=+6120.649867131" watchObservedRunningTime="2026-02-19 20:11:41.427441104 +0000 UTC m=+6120.652881645" Feb 19 20:11:55 crc kubenswrapper[4813]: I0219 20:11:55.576975 4813 scope.go:117] "RemoveContainer" containerID="0d710f74bfd3a2aad96505f2af271775858cab9f852f5a4ba74b0cdb6c8a3421" Feb 19 20:11:55 crc kubenswrapper[4813]: I0219 20:11:55.614496 4813 scope.go:117] "RemoveContainer" containerID="5aa21d01f81a58c443cd77b9320acdede47f279102a11762b6b443b9107d8065" Feb 19 20:11:55 crc kubenswrapper[4813]: I0219 20:11:55.675607 4813 scope.go:117] "RemoveContainer" containerID="ac7ef514e2f5dae96c9d4fb572b6d519a1041bfceaaf8e17c16e320b29f179ab" Feb 19 20:12:06 crc kubenswrapper[4813]: I0219 20:12:06.760250 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.178191 4813 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7486c8b5ff-pnjl4"] Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.180381 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.182608 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.193363 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7486c8b5ff-pnjl4"] Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.352681 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-openstack-cell1\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.353533 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-config\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.353625 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr6gz\" (UniqueName: \"kubernetes.io/projected/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-kube-api-access-fr6gz\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.353655 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-ovsdbserver-sb\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.353867 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-ovsdbserver-nb\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.353943 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-dns-svc\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.455926 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr6gz\" (UniqueName: \"kubernetes.io/projected/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-kube-api-access-fr6gz\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.456247 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-ovsdbserver-sb\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.456420 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-ovsdbserver-nb\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.456539 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-dns-svc\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.456704 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-openstack-cell1\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.456866 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-config\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.458003 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-config\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.458045 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-ovsdbserver-nb\") 
pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.458361 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-ovsdbserver-sb\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.458864 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-openstack-cell1\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.458870 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-dns-svc\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.495195 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr6gz\" (UniqueName: \"kubernetes.io/projected/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-kube-api-access-fr6gz\") pod \"dnsmasq-dns-7486c8b5ff-pnjl4\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:15 crc kubenswrapper[4813]: I0219 20:12:15.513026 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:16 crc kubenswrapper[4813]: I0219 20:12:16.091940 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7486c8b5ff-pnjl4"] Feb 19 20:12:16 crc kubenswrapper[4813]: I0219 20:12:16.830126 4813 generic.go:334] "Generic (PLEG): container finished" podID="fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" containerID="ac4eeca767101a84f0a5925d8b0285e9d3241c0edcfa0a6196eb5e585b2e3609" exitCode=0 Feb 19 20:12:16 crc kubenswrapper[4813]: I0219 20:12:16.830187 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" event={"ID":"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c","Type":"ContainerDied","Data":"ac4eeca767101a84f0a5925d8b0285e9d3241c0edcfa0a6196eb5e585b2e3609"} Feb 19 20:12:16 crc kubenswrapper[4813]: I0219 20:12:16.830480 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" event={"ID":"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c","Type":"ContainerStarted","Data":"cf67a1a333ab2c14caaa07cf6a2e267abd0307a0d5020efddf0b44325e3f3df5"} Feb 19 20:12:17 crc kubenswrapper[4813]: I0219 20:12:17.857736 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" event={"ID":"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c","Type":"ContainerStarted","Data":"128b9b2f2d14f4d23cf74bd9f16e49157b6279c180e31da4057b90bea9fad876"} Feb 19 20:12:17 crc kubenswrapper[4813]: I0219 20:12:17.858230 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:17 crc kubenswrapper[4813]: I0219 20:12:17.882180 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" podStartSLOduration=2.882160468 podStartE2EDuration="2.882160468s" podCreationTimestamp="2026-02-19 20:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:12:17.879158175 +0000 UTC m=+6157.104598736" watchObservedRunningTime="2026-02-19 20:12:17.882160468 +0000 UTC m=+6157.107601009" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.515212 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.592188 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7496cf9857-rsxjk"] Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.592562 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" podUID="0d70ae37-904c-4784-870e-a75d7042dd3e" containerName="dnsmasq-dns" containerID="cri-o://a98cb5800d9939ceb7f61f9247324bd9ff192ce05069d012221d58af5501b3c8" gracePeriod=10 Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.726522 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx"] Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.728893 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.770405 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx"] Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.824290 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/387e8461-3709-4da1-a6b4-120a4ae6fc34-dns-svc\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.824456 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/387e8461-3709-4da1-a6b4-120a4ae6fc34-config\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.824505 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/387e8461-3709-4da1-a6b4-120a4ae6fc34-ovsdbserver-nb\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.824844 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/387e8461-3709-4da1-a6b4-120a4ae6fc34-ovsdbserver-sb\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.824973 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-scmhc\" (UniqueName: \"kubernetes.io/projected/387e8461-3709-4da1-a6b4-120a4ae6fc34-kube-api-access-scmhc\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.825022 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/387e8461-3709-4da1-a6b4-120a4ae6fc34-openstack-cell1\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.926896 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scmhc\" (UniqueName: \"kubernetes.io/projected/387e8461-3709-4da1-a6b4-120a4ae6fc34-kube-api-access-scmhc\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.926977 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/387e8461-3709-4da1-a6b4-120a4ae6fc34-openstack-cell1\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.927100 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/387e8461-3709-4da1-a6b4-120a4ae6fc34-dns-svc\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.927148 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/387e8461-3709-4da1-a6b4-120a4ae6fc34-config\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.927180 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/387e8461-3709-4da1-a6b4-120a4ae6fc34-ovsdbserver-nb\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.927268 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/387e8461-3709-4da1-a6b4-120a4ae6fc34-ovsdbserver-sb\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.928230 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/387e8461-3709-4da1-a6b4-120a4ae6fc34-ovsdbserver-sb\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.929102 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/387e8461-3709-4da1-a6b4-120a4ae6fc34-openstack-cell1\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.929342 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/387e8461-3709-4da1-a6b4-120a4ae6fc34-dns-svc\") pod 
\"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.929506 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/387e8461-3709-4da1-a6b4-120a4ae6fc34-ovsdbserver-nb\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.929625 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/387e8461-3709-4da1-a6b4-120a4ae6fc34-config\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.965857 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scmhc\" (UniqueName: \"kubernetes.io/projected/387e8461-3709-4da1-a6b4-120a4ae6fc34-kube-api-access-scmhc\") pod \"dnsmasq-dns-7f4c4f5bd7-n9jxx\" (UID: \"387e8461-3709-4da1-a6b4-120a4ae6fc34\") " pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.997007 4813 generic.go:334] "Generic (PLEG): container finished" podID="0d70ae37-904c-4784-870e-a75d7042dd3e" containerID="a98cb5800d9939ceb7f61f9247324bd9ff192ce05069d012221d58af5501b3c8" exitCode=0 Feb 19 20:12:25 crc kubenswrapper[4813]: I0219 20:12:25.997072 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" event={"ID":"0d70ae37-904c-4784-870e-a75d7042dd3e","Type":"ContainerDied","Data":"a98cb5800d9939ceb7f61f9247324bd9ff192ce05069d012221d58af5501b3c8"} Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.069146 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.187738 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.334362 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-dns-svc\") pod \"0d70ae37-904c-4784-870e-a75d7042dd3e\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.334432 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pmmf\" (UniqueName: \"kubernetes.io/projected/0d70ae37-904c-4784-870e-a75d7042dd3e-kube-api-access-6pmmf\") pod \"0d70ae37-904c-4784-870e-a75d7042dd3e\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.334493 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-ovsdbserver-nb\") pod \"0d70ae37-904c-4784-870e-a75d7042dd3e\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.334546 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-ovsdbserver-sb\") pod \"0d70ae37-904c-4784-870e-a75d7042dd3e\" (UID: \"0d70ae37-904c-4784-870e-a75d7042dd3e\") " Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.334621 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-config\") pod \"0d70ae37-904c-4784-870e-a75d7042dd3e\" (UID: 
\"0d70ae37-904c-4784-870e-a75d7042dd3e\") " Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.340534 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d70ae37-904c-4784-870e-a75d7042dd3e-kube-api-access-6pmmf" (OuterVolumeSpecName: "kube-api-access-6pmmf") pod "0d70ae37-904c-4784-870e-a75d7042dd3e" (UID: "0d70ae37-904c-4784-870e-a75d7042dd3e"). InnerVolumeSpecName "kube-api-access-6pmmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.393593 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0d70ae37-904c-4784-870e-a75d7042dd3e" (UID: "0d70ae37-904c-4784-870e-a75d7042dd3e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.395738 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0d70ae37-904c-4784-870e-a75d7042dd3e" (UID: "0d70ae37-904c-4784-870e-a75d7042dd3e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.405426 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-config" (OuterVolumeSpecName: "config") pod "0d70ae37-904c-4784-870e-a75d7042dd3e" (UID: "0d70ae37-904c-4784-870e-a75d7042dd3e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.413536 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0d70ae37-904c-4784-870e-a75d7042dd3e" (UID: "0d70ae37-904c-4784-870e-a75d7042dd3e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.437757 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.437789 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pmmf\" (UniqueName: \"kubernetes.io/projected/0d70ae37-904c-4784-870e-a75d7042dd3e-kube-api-access-6pmmf\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.437800 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.437809 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.437817 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d70ae37-904c-4784-870e-a75d7042dd3e-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:26 crc kubenswrapper[4813]: I0219 20:12:26.523081 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx"] Feb 19 20:12:27 crc 
kubenswrapper[4813]: I0219 20:12:27.010024 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" event={"ID":"0d70ae37-904c-4784-870e-a75d7042dd3e","Type":"ContainerDied","Data":"a07ab10c961e187a51216cc5171b8aa92886fe4a006f69d032c5dffbb5e84697"} Feb 19 20:12:27 crc kubenswrapper[4813]: I0219 20:12:27.010323 4813 scope.go:117] "RemoveContainer" containerID="a98cb5800d9939ceb7f61f9247324bd9ff192ce05069d012221d58af5501b3c8" Feb 19 20:12:27 crc kubenswrapper[4813]: I0219 20:12:27.010055 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7496cf9857-rsxjk" Feb 19 20:12:27 crc kubenswrapper[4813]: I0219 20:12:27.013048 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" event={"ID":"387e8461-3709-4da1-a6b4-120a4ae6fc34","Type":"ContainerDied","Data":"3e0ae34d1da9f7bdb5c27aba11ed7e693757659ebf964c72c84f5703f0f2426b"} Feb 19 20:12:27 crc kubenswrapper[4813]: I0219 20:12:27.012257 4813 generic.go:334] "Generic (PLEG): container finished" podID="387e8461-3709-4da1-a6b4-120a4ae6fc34" containerID="3e0ae34d1da9f7bdb5c27aba11ed7e693757659ebf964c72c84f5703f0f2426b" exitCode=0 Feb 19 20:12:27 crc kubenswrapper[4813]: I0219 20:12:27.013326 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" event={"ID":"387e8461-3709-4da1-a6b4-120a4ae6fc34","Type":"ContainerStarted","Data":"ca562074ae202627f6109dfe97dd096fa06a8912d817e170f4efd57ca64cfd3d"} Feb 19 20:12:27 crc kubenswrapper[4813]: I0219 20:12:27.061302 4813 scope.go:117] "RemoveContainer" containerID="82d6644b346ddadee286529f9014922a9862fb7b1724d143becf34d1b77f8a4b" Feb 19 20:12:27 crc kubenswrapper[4813]: I0219 20:12:27.152510 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7496cf9857-rsxjk"] Feb 19 20:12:27 crc kubenswrapper[4813]: I0219 20:12:27.171929 4813 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-7496cf9857-rsxjk"] Feb 19 20:12:27 crc kubenswrapper[4813]: I0219 20:12:27.481840 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d70ae37-904c-4784-870e-a75d7042dd3e" path="/var/lib/kubelet/pods/0d70ae37-904c-4784-870e-a75d7042dd3e/volumes" Feb 19 20:12:28 crc kubenswrapper[4813]: I0219 20:12:28.025242 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" event={"ID":"387e8461-3709-4da1-a6b4-120a4ae6fc34","Type":"ContainerStarted","Data":"1db54278cc2167333f7b4d482906c22bfce5560cdd71468ab22b1978613800e5"} Feb 19 20:12:28 crc kubenswrapper[4813]: I0219 20:12:28.025520 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:28 crc kubenswrapper[4813]: I0219 20:12:28.051056 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" podStartSLOduration=3.051034469 podStartE2EDuration="3.051034469s" podCreationTimestamp="2026-02-19 20:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:12:28.050020448 +0000 UTC m=+6167.275461059" watchObservedRunningTime="2026-02-19 20:12:28.051034469 +0000 UTC m=+6167.276475020" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.329981 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.330901 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.560029 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zk9bt"] Feb 19 20:12:30 crc kubenswrapper[4813]: E0219 20:12:30.560640 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d70ae37-904c-4784-870e-a75d7042dd3e" containerName="init" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.560661 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d70ae37-904c-4784-870e-a75d7042dd3e" containerName="init" Feb 19 20:12:30 crc kubenswrapper[4813]: E0219 20:12:30.560678 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d70ae37-904c-4784-870e-a75d7042dd3e" containerName="dnsmasq-dns" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.560687 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d70ae37-904c-4784-870e-a75d7042dd3e" containerName="dnsmasq-dns" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.560979 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d70ae37-904c-4784-870e-a75d7042dd3e" containerName="dnsmasq-dns" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.563126 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.578936 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zk9bt"] Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.634915 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b95d376-cc11-4d8c-a140-e10c92b17b4f-catalog-content\") pod \"redhat-marketplace-zk9bt\" (UID: \"5b95d376-cc11-4d8c-a140-e10c92b17b4f\") " pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.635173 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b95d376-cc11-4d8c-a140-e10c92b17b4f-utilities\") pod \"redhat-marketplace-zk9bt\" (UID: \"5b95d376-cc11-4d8c-a140-e10c92b17b4f\") " pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.635218 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g44j\" (UniqueName: \"kubernetes.io/projected/5b95d376-cc11-4d8c-a140-e10c92b17b4f-kube-api-access-2g44j\") pod \"redhat-marketplace-zk9bt\" (UID: \"5b95d376-cc11-4d8c-a140-e10c92b17b4f\") " pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.736418 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b95d376-cc11-4d8c-a140-e10c92b17b4f-utilities\") pod \"redhat-marketplace-zk9bt\" (UID: \"5b95d376-cc11-4d8c-a140-e10c92b17b4f\") " pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.736478 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-2g44j\" (UniqueName: \"kubernetes.io/projected/5b95d376-cc11-4d8c-a140-e10c92b17b4f-kube-api-access-2g44j\") pod \"redhat-marketplace-zk9bt\" (UID: \"5b95d376-cc11-4d8c-a140-e10c92b17b4f\") " pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.736665 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b95d376-cc11-4d8c-a140-e10c92b17b4f-catalog-content\") pod \"redhat-marketplace-zk9bt\" (UID: \"5b95d376-cc11-4d8c-a140-e10c92b17b4f\") " pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.737063 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b95d376-cc11-4d8c-a140-e10c92b17b4f-utilities\") pod \"redhat-marketplace-zk9bt\" (UID: \"5b95d376-cc11-4d8c-a140-e10c92b17b4f\") " pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.737151 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b95d376-cc11-4d8c-a140-e10c92b17b4f-catalog-content\") pod \"redhat-marketplace-zk9bt\" (UID: \"5b95d376-cc11-4d8c-a140-e10c92b17b4f\") " pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.765937 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g44j\" (UniqueName: \"kubernetes.io/projected/5b95d376-cc11-4d8c-a140-e10c92b17b4f-kube-api-access-2g44j\") pod \"redhat-marketplace-zk9bt\" (UID: \"5b95d376-cc11-4d8c-a140-e10c92b17b4f\") " pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:30 crc kubenswrapper[4813]: I0219 20:12:30.889510 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:31 crc kubenswrapper[4813]: I0219 20:12:31.355890 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zk9bt"] Feb 19 20:12:31 crc kubenswrapper[4813]: W0219 20:12:31.360384 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b95d376_cc11_4d8c_a140_e10c92b17b4f.slice/crio-665ed088d8c66c3e10db9a66be4fb3195915b772ce8c01e53679292168d4a96e WatchSource:0}: Error finding container 665ed088d8c66c3e10db9a66be4fb3195915b772ce8c01e53679292168d4a96e: Status 404 returned error can't find the container with id 665ed088d8c66c3e10db9a66be4fb3195915b772ce8c01e53679292168d4a96e Feb 19 20:12:32 crc kubenswrapper[4813]: I0219 20:12:32.082389 4813 generic.go:334] "Generic (PLEG): container finished" podID="5b95d376-cc11-4d8c-a140-e10c92b17b4f" containerID="015b15072f7a703c0f0fc8ad847cf09670bb269ea65db3749e2a5806b84e204a" exitCode=0 Feb 19 20:12:32 crc kubenswrapper[4813]: I0219 20:12:32.082473 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zk9bt" event={"ID":"5b95d376-cc11-4d8c-a140-e10c92b17b4f","Type":"ContainerDied","Data":"015b15072f7a703c0f0fc8ad847cf09670bb269ea65db3749e2a5806b84e204a"} Feb 19 20:12:32 crc kubenswrapper[4813]: I0219 20:12:32.082703 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zk9bt" event={"ID":"5b95d376-cc11-4d8c-a140-e10c92b17b4f","Type":"ContainerStarted","Data":"665ed088d8c66c3e10db9a66be4fb3195915b772ce8c01e53679292168d4a96e"} Feb 19 20:12:34 crc kubenswrapper[4813]: I0219 20:12:34.107398 4813 generic.go:334] "Generic (PLEG): container finished" podID="5b95d376-cc11-4d8c-a140-e10c92b17b4f" containerID="64c9764d6fc348df55255dd626468d01143aa2b710ad9b847980a64fa6b3b0e2" exitCode=0 Feb 19 20:12:34 crc kubenswrapper[4813]: I0219 
20:12:34.107496 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zk9bt" event={"ID":"5b95d376-cc11-4d8c-a140-e10c92b17b4f","Type":"ContainerDied","Data":"64c9764d6fc348df55255dd626468d01143aa2b710ad9b847980a64fa6b3b0e2"} Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.121590 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zk9bt" event={"ID":"5b95d376-cc11-4d8c-a140-e10c92b17b4f","Type":"ContainerStarted","Data":"3d7da86087b1dcf7c3de1d99e3fd698e6ed4e6d15afaa657760fbba1da29a80d"} Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.149520 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zk9bt" podStartSLOduration=2.607332243 podStartE2EDuration="5.149500127s" podCreationTimestamp="2026-02-19 20:12:30 +0000 UTC" firstStartedPulling="2026-02-19 20:12:32.084217232 +0000 UTC m=+6171.309657773" lastFinishedPulling="2026-02-19 20:12:34.626385106 +0000 UTC m=+6173.851825657" observedRunningTime="2026-02-19 20:12:35.139757545 +0000 UTC m=+6174.365198116" watchObservedRunningTime="2026-02-19 20:12:35.149500127 +0000 UTC m=+6174.374940668" Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.732927 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m6fhq"] Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.736314 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.757130 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m6fhq"] Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.769128 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-catalog-content\") pod \"redhat-operators-m6fhq\" (UID: \"7809c79f-0a8d-46ab-8679-8c90aef6e5ab\") " pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.769323 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-utilities\") pod \"redhat-operators-m6fhq\" (UID: \"7809c79f-0a8d-46ab-8679-8c90aef6e5ab\") " pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.769568 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltsm5\" (UniqueName: \"kubernetes.io/projected/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-kube-api-access-ltsm5\") pod \"redhat-operators-m6fhq\" (UID: \"7809c79f-0a8d-46ab-8679-8c90aef6e5ab\") " pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.872483 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-utilities\") pod \"redhat-operators-m6fhq\" (UID: \"7809c79f-0a8d-46ab-8679-8c90aef6e5ab\") " pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.872639 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ltsm5\" (UniqueName: \"kubernetes.io/projected/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-kube-api-access-ltsm5\") pod \"redhat-operators-m6fhq\" (UID: \"7809c79f-0a8d-46ab-8679-8c90aef6e5ab\") " pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.873065 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-utilities\") pod \"redhat-operators-m6fhq\" (UID: \"7809c79f-0a8d-46ab-8679-8c90aef6e5ab\") " pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.873280 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-catalog-content\") pod \"redhat-operators-m6fhq\" (UID: \"7809c79f-0a8d-46ab-8679-8c90aef6e5ab\") " pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.873712 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-catalog-content\") pod \"redhat-operators-m6fhq\" (UID: \"7809c79f-0a8d-46ab-8679-8c90aef6e5ab\") " pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.896843 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltsm5\" (UniqueName: \"kubernetes.io/projected/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-kube-api-access-ltsm5\") pod \"redhat-operators-m6fhq\" (UID: \"7809c79f-0a8d-46ab-8679-8c90aef6e5ab\") " pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.918653 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s25gg"] Feb 19 20:12:35 crc 
kubenswrapper[4813]: I0219 20:12:35.927783 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.930231 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s25gg"] Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.975061 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkkv8\" (UniqueName: \"kubernetes.io/projected/1303c253-fd20-41e7-9aab-deda5cd4093b-kube-api-access-tkkv8\") pod \"certified-operators-s25gg\" (UID: \"1303c253-fd20-41e7-9aab-deda5cd4093b\") " pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.975262 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1303c253-fd20-41e7-9aab-deda5cd4093b-catalog-content\") pod \"certified-operators-s25gg\" (UID: \"1303c253-fd20-41e7-9aab-deda5cd4093b\") " pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:35 crc kubenswrapper[4813]: I0219 20:12:35.975359 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1303c253-fd20-41e7-9aab-deda5cd4093b-utilities\") pod \"certified-operators-s25gg\" (UID: \"1303c253-fd20-41e7-9aab-deda5cd4093b\") " pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.067999 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.071485 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f4c4f5bd7-n9jxx" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.083814 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1303c253-fd20-41e7-9aab-deda5cd4093b-catalog-content\") pod \"certified-operators-s25gg\" (UID: \"1303c253-fd20-41e7-9aab-deda5cd4093b\") " pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.084068 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1303c253-fd20-41e7-9aab-deda5cd4093b-utilities\") pod \"certified-operators-s25gg\" (UID: \"1303c253-fd20-41e7-9aab-deda5cd4093b\") " pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.084202 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkkv8\" (UniqueName: \"kubernetes.io/projected/1303c253-fd20-41e7-9aab-deda5cd4093b-kube-api-access-tkkv8\") pod \"certified-operators-s25gg\" (UID: \"1303c253-fd20-41e7-9aab-deda5cd4093b\") " pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.084430 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1303c253-fd20-41e7-9aab-deda5cd4093b-catalog-content\") pod \"certified-operators-s25gg\" (UID: \"1303c253-fd20-41e7-9aab-deda5cd4093b\") " pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.084657 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/1303c253-fd20-41e7-9aab-deda5cd4093b-utilities\") pod \"certified-operators-s25gg\" (UID: \"1303c253-fd20-41e7-9aab-deda5cd4093b\") " pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.123837 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkkv8\" (UniqueName: \"kubernetes.io/projected/1303c253-fd20-41e7-9aab-deda5cd4093b-kube-api-access-tkkv8\") pod \"certified-operators-s25gg\" (UID: \"1303c253-fd20-41e7-9aab-deda5cd4093b\") " pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.154111 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7486c8b5ff-pnjl4"] Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.154366 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" podUID="fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" containerName="dnsmasq-dns" containerID="cri-o://128b9b2f2d14f4d23cf74bd9f16e49157b6279c180e31da4057b90bea9fad876" gracePeriod=10 Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.280765 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.643555 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m6fhq"] Feb 19 20:12:36 crc kubenswrapper[4813]: W0219 20:12:36.654882 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7809c79f_0a8d_46ab_8679_8c90aef6e5ab.slice/crio-94cffd516e3baeba27df890af4dd59b43e985548dc0020fc10ea3d9282b060b8 WatchSource:0}: Error finding container 94cffd516e3baeba27df890af4dd59b43e985548dc0020fc10ea3d9282b060b8: Status 404 returned error can't find the container with id 94cffd516e3baeba27df890af4dd59b43e985548dc0020fc10ea3d9282b060b8 Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.750332 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.811044 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-ovsdbserver-sb\") pod \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.811093 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-config\") pod \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.811114 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-openstack-cell1\") pod \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\" (UID: 
\"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.811145 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr6gz\" (UniqueName: \"kubernetes.io/projected/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-kube-api-access-fr6gz\") pod \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.811240 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-ovsdbserver-nb\") pod \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.811871 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-dns-svc\") pod \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.832861 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-kube-api-access-fr6gz" (OuterVolumeSpecName: "kube-api-access-fr6gz") pod "fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" (UID: "fd9130f7-d795-4083-b0ef-cbf28fa2ef0c"). InnerVolumeSpecName "kube-api-access-fr6gz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.888279 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s25gg"] Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.892744 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" (UID: "fd9130f7-d795-4083-b0ef-cbf28fa2ef0c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.899996 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-config" (OuterVolumeSpecName: "config") pod "fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" (UID: "fd9130f7-d795-4083-b0ef-cbf28fa2ef0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.912132 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" (UID: "fd9130f7-d795-4083-b0ef-cbf28fa2ef0c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.918197 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" (UID: "fd9130f7-d795-4083-b0ef-cbf28fa2ef0c"). InnerVolumeSpecName "openstack-cell1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.918932 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-openstack-cell1\") pod \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\" (UID: \"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c\") " Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.919667 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.919685 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.919696 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr6gz\" (UniqueName: \"kubernetes.io/projected/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-kube-api-access-fr6gz\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.919706 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:36 crc kubenswrapper[4813]: W0219 20:12:36.919772 4813 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c/volumes/kubernetes.io~configmap/openstack-cell1 Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.919783 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod 
"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" (UID: "fd9130f7-d795-4083-b0ef-cbf28fa2ef0c"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:12:36 crc kubenswrapper[4813]: I0219 20:12:36.940762 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" (UID: "fd9130f7-d795-4083-b0ef-cbf28fa2ef0c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.022184 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.022217 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.157071 4813 generic.go:334] "Generic (PLEG): container finished" podID="fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" containerID="128b9b2f2d14f4d23cf74bd9f16e49157b6279c180e31da4057b90bea9fad876" exitCode=0 Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.157132 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.157149 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" event={"ID":"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c","Type":"ContainerDied","Data":"128b9b2f2d14f4d23cf74bd9f16e49157b6279c180e31da4057b90bea9fad876"} Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.157173 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7486c8b5ff-pnjl4" event={"ID":"fd9130f7-d795-4083-b0ef-cbf28fa2ef0c","Type":"ContainerDied","Data":"cf67a1a333ab2c14caaa07cf6a2e267abd0307a0d5020efddf0b44325e3f3df5"} Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.157189 4813 scope.go:117] "RemoveContainer" containerID="128b9b2f2d14f4d23cf74bd9f16e49157b6279c180e31da4057b90bea9fad876" Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.168790 4813 generic.go:334] "Generic (PLEG): container finished" podID="1303c253-fd20-41e7-9aab-deda5cd4093b" containerID="c864a458eedca858ea9167feb7d41d50f69a1ca544ff9900836c679d5f5de996" exitCode=0 Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.168847 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s25gg" event={"ID":"1303c253-fd20-41e7-9aab-deda5cd4093b","Type":"ContainerDied","Data":"c864a458eedca858ea9167feb7d41d50f69a1ca544ff9900836c679d5f5de996"} Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.168894 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s25gg" event={"ID":"1303c253-fd20-41e7-9aab-deda5cd4093b","Type":"ContainerStarted","Data":"92eae21d1fca81cf7c6496061289ac1a29e1c3f7a24db13d6da5bd077961523c"} Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.171178 4813 generic.go:334] "Generic (PLEG): container finished" podID="7809c79f-0a8d-46ab-8679-8c90aef6e5ab" 
containerID="7eb9c50ee9214a4367ce6f5b34f97956da0b04bb2a4167720e210d497ba797c8" exitCode=0 Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.171411 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6fhq" event={"ID":"7809c79f-0a8d-46ab-8679-8c90aef6e5ab","Type":"ContainerDied","Data":"7eb9c50ee9214a4367ce6f5b34f97956da0b04bb2a4167720e210d497ba797c8"} Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.171443 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6fhq" event={"ID":"7809c79f-0a8d-46ab-8679-8c90aef6e5ab","Type":"ContainerStarted","Data":"94cffd516e3baeba27df890af4dd59b43e985548dc0020fc10ea3d9282b060b8"} Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.216030 4813 scope.go:117] "RemoveContainer" containerID="ac4eeca767101a84f0a5925d8b0285e9d3241c0edcfa0a6196eb5e585b2e3609" Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.237578 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7486c8b5ff-pnjl4"] Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.248512 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7486c8b5ff-pnjl4"] Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.252098 4813 scope.go:117] "RemoveContainer" containerID="128b9b2f2d14f4d23cf74bd9f16e49157b6279c180e31da4057b90bea9fad876" Feb 19 20:12:37 crc kubenswrapper[4813]: E0219 20:12:37.253372 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"128b9b2f2d14f4d23cf74bd9f16e49157b6279c180e31da4057b90bea9fad876\": container with ID starting with 128b9b2f2d14f4d23cf74bd9f16e49157b6279c180e31da4057b90bea9fad876 not found: ID does not exist" containerID="128b9b2f2d14f4d23cf74bd9f16e49157b6279c180e31da4057b90bea9fad876" Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.253413 4813 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"128b9b2f2d14f4d23cf74bd9f16e49157b6279c180e31da4057b90bea9fad876"} err="failed to get container status \"128b9b2f2d14f4d23cf74bd9f16e49157b6279c180e31da4057b90bea9fad876\": rpc error: code = NotFound desc = could not find container \"128b9b2f2d14f4d23cf74bd9f16e49157b6279c180e31da4057b90bea9fad876\": container with ID starting with 128b9b2f2d14f4d23cf74bd9f16e49157b6279c180e31da4057b90bea9fad876 not found: ID does not exist" Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.253448 4813 scope.go:117] "RemoveContainer" containerID="ac4eeca767101a84f0a5925d8b0285e9d3241c0edcfa0a6196eb5e585b2e3609" Feb 19 20:12:37 crc kubenswrapper[4813]: E0219 20:12:37.253814 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac4eeca767101a84f0a5925d8b0285e9d3241c0edcfa0a6196eb5e585b2e3609\": container with ID starting with ac4eeca767101a84f0a5925d8b0285e9d3241c0edcfa0a6196eb5e585b2e3609 not found: ID does not exist" containerID="ac4eeca767101a84f0a5925d8b0285e9d3241c0edcfa0a6196eb5e585b2e3609" Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.253839 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac4eeca767101a84f0a5925d8b0285e9d3241c0edcfa0a6196eb5e585b2e3609"} err="failed to get container status \"ac4eeca767101a84f0a5925d8b0285e9d3241c0edcfa0a6196eb5e585b2e3609\": rpc error: code = NotFound desc = could not find container \"ac4eeca767101a84f0a5925d8b0285e9d3241c0edcfa0a6196eb5e585b2e3609\": container with ID starting with ac4eeca767101a84f0a5925d8b0285e9d3241c0edcfa0a6196eb5e585b2e3609 not found: ID does not exist" Feb 19 20:12:37 crc kubenswrapper[4813]: I0219 20:12:37.499701 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" path="/var/lib/kubelet/pods/fd9130f7-d795-4083-b0ef-cbf28fa2ef0c/volumes" Feb 19 20:12:38 crc 
kubenswrapper[4813]: I0219 20:12:38.187070 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s25gg" event={"ID":"1303c253-fd20-41e7-9aab-deda5cd4093b","Type":"ContainerStarted","Data":"46679de4e33723e85839f3c2e9da6c7b1839e8a9eecab565c7715857c9d6c15d"} Feb 19 20:12:38 crc kubenswrapper[4813]: I0219 20:12:38.190782 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6fhq" event={"ID":"7809c79f-0a8d-46ab-8679-8c90aef6e5ab","Type":"ContainerStarted","Data":"bc6b4468289d771286a7efcc13c960c54ecad0d48898bdb1d1c82f5212ac07b7"} Feb 19 20:12:40 crc kubenswrapper[4813]: I0219 20:12:40.890044 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:40 crc kubenswrapper[4813]: I0219 20:12:40.890374 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:40 crc kubenswrapper[4813]: I0219 20:12:40.944510 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:41 crc kubenswrapper[4813]: I0219 20:12:41.225778 4813 generic.go:334] "Generic (PLEG): container finished" podID="1303c253-fd20-41e7-9aab-deda5cd4093b" containerID="46679de4e33723e85839f3c2e9da6c7b1839e8a9eecab565c7715857c9d6c15d" exitCode=0 Feb 19 20:12:41 crc kubenswrapper[4813]: I0219 20:12:41.225854 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s25gg" event={"ID":"1303c253-fd20-41e7-9aab-deda5cd4093b","Type":"ContainerDied","Data":"46679de4e33723e85839f3c2e9da6c7b1839e8a9eecab565c7715857c9d6c15d"} Feb 19 20:12:41 crc kubenswrapper[4813]: I0219 20:12:41.364626 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:42 crc kubenswrapper[4813]: 
I0219 20:12:42.238085 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s25gg" event={"ID":"1303c253-fd20-41e7-9aab-deda5cd4093b","Type":"ContainerStarted","Data":"f4215628d975183a94507b356ca9b95dd2396844e1c55cb9ac4517c1336aa5e4"} Feb 19 20:12:42 crc kubenswrapper[4813]: I0219 20:12:42.266723 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s25gg" podStartSLOduration=2.820139655 podStartE2EDuration="7.266706114s" podCreationTimestamp="2026-02-19 20:12:35 +0000 UTC" firstStartedPulling="2026-02-19 20:12:37.170752528 +0000 UTC m=+6176.396193069" lastFinishedPulling="2026-02-19 20:12:41.617318967 +0000 UTC m=+6180.842759528" observedRunningTime="2026-02-19 20:12:42.258800369 +0000 UTC m=+6181.484240910" watchObservedRunningTime="2026-02-19 20:12:42.266706114 +0000 UTC m=+6181.492146655" Feb 19 20:12:43 crc kubenswrapper[4813]: I0219 20:12:43.321699 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zk9bt"] Feb 19 20:12:43 crc kubenswrapper[4813]: I0219 20:12:43.323262 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zk9bt" podUID="5b95d376-cc11-4d8c-a140-e10c92b17b4f" containerName="registry-server" containerID="cri-o://3d7da86087b1dcf7c3de1d99e3fd698e6ed4e6d15afaa657760fbba1da29a80d" gracePeriod=2 Feb 19 20:12:43 crc kubenswrapper[4813]: I0219 20:12:43.839685 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:43 crc kubenswrapper[4813]: I0219 20:12:43.975823 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g44j\" (UniqueName: \"kubernetes.io/projected/5b95d376-cc11-4d8c-a140-e10c92b17b4f-kube-api-access-2g44j\") pod \"5b95d376-cc11-4d8c-a140-e10c92b17b4f\" (UID: \"5b95d376-cc11-4d8c-a140-e10c92b17b4f\") " Feb 19 20:12:43 crc kubenswrapper[4813]: I0219 20:12:43.976043 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b95d376-cc11-4d8c-a140-e10c92b17b4f-catalog-content\") pod \"5b95d376-cc11-4d8c-a140-e10c92b17b4f\" (UID: \"5b95d376-cc11-4d8c-a140-e10c92b17b4f\") " Feb 19 20:12:43 crc kubenswrapper[4813]: I0219 20:12:43.976208 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b95d376-cc11-4d8c-a140-e10c92b17b4f-utilities\") pod \"5b95d376-cc11-4d8c-a140-e10c92b17b4f\" (UID: \"5b95d376-cc11-4d8c-a140-e10c92b17b4f\") " Feb 19 20:12:43 crc kubenswrapper[4813]: I0219 20:12:43.976915 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b95d376-cc11-4d8c-a140-e10c92b17b4f-utilities" (OuterVolumeSpecName: "utilities") pod "5b95d376-cc11-4d8c-a140-e10c92b17b4f" (UID: "5b95d376-cc11-4d8c-a140-e10c92b17b4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:12:43 crc kubenswrapper[4813]: I0219 20:12:43.983101 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b95d376-cc11-4d8c-a140-e10c92b17b4f-kube-api-access-2g44j" (OuterVolumeSpecName: "kube-api-access-2g44j") pod "5b95d376-cc11-4d8c-a140-e10c92b17b4f" (UID: "5b95d376-cc11-4d8c-a140-e10c92b17b4f"). InnerVolumeSpecName "kube-api-access-2g44j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.005400 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b95d376-cc11-4d8c-a140-e10c92b17b4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b95d376-cc11-4d8c-a140-e10c92b17b4f" (UID: "5b95d376-cc11-4d8c-a140-e10c92b17b4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.079886 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g44j\" (UniqueName: \"kubernetes.io/projected/5b95d376-cc11-4d8c-a140-e10c92b17b4f-kube-api-access-2g44j\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.079992 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b95d376-cc11-4d8c-a140-e10c92b17b4f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.080025 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b95d376-cc11-4d8c-a140-e10c92b17b4f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.258771 4813 generic.go:334] "Generic (PLEG): container finished" podID="7809c79f-0a8d-46ab-8679-8c90aef6e5ab" containerID="bc6b4468289d771286a7efcc13c960c54ecad0d48898bdb1d1c82f5212ac07b7" exitCode=0 Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.258832 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6fhq" event={"ID":"7809c79f-0a8d-46ab-8679-8c90aef6e5ab","Type":"ContainerDied","Data":"bc6b4468289d771286a7efcc13c960c54ecad0d48898bdb1d1c82f5212ac07b7"} Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.262026 4813 generic.go:334] "Generic (PLEG): container 
finished" podID="5b95d376-cc11-4d8c-a140-e10c92b17b4f" containerID="3d7da86087b1dcf7c3de1d99e3fd698e6ed4e6d15afaa657760fbba1da29a80d" exitCode=0 Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.262078 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zk9bt" event={"ID":"5b95d376-cc11-4d8c-a140-e10c92b17b4f","Type":"ContainerDied","Data":"3d7da86087b1dcf7c3de1d99e3fd698e6ed4e6d15afaa657760fbba1da29a80d"} Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.262101 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zk9bt" Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.262111 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zk9bt" event={"ID":"5b95d376-cc11-4d8c-a140-e10c92b17b4f","Type":"ContainerDied","Data":"665ed088d8c66c3e10db9a66be4fb3195915b772ce8c01e53679292168d4a96e"} Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.262142 4813 scope.go:117] "RemoveContainer" containerID="3d7da86087b1dcf7c3de1d99e3fd698e6ed4e6d15afaa657760fbba1da29a80d" Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.294121 4813 scope.go:117] "RemoveContainer" containerID="64c9764d6fc348df55255dd626468d01143aa2b710ad9b847980a64fa6b3b0e2" Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.315255 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zk9bt"] Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.319175 4813 scope.go:117] "RemoveContainer" containerID="015b15072f7a703c0f0fc8ad847cf09670bb269ea65db3749e2a5806b84e204a" Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.326499 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zk9bt"] Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.375291 4813 scope.go:117] "RemoveContainer" 
containerID="3d7da86087b1dcf7c3de1d99e3fd698e6ed4e6d15afaa657760fbba1da29a80d" Feb 19 20:12:44 crc kubenswrapper[4813]: E0219 20:12:44.375727 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d7da86087b1dcf7c3de1d99e3fd698e6ed4e6d15afaa657760fbba1da29a80d\": container with ID starting with 3d7da86087b1dcf7c3de1d99e3fd698e6ed4e6d15afaa657760fbba1da29a80d not found: ID does not exist" containerID="3d7da86087b1dcf7c3de1d99e3fd698e6ed4e6d15afaa657760fbba1da29a80d" Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.375768 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d7da86087b1dcf7c3de1d99e3fd698e6ed4e6d15afaa657760fbba1da29a80d"} err="failed to get container status \"3d7da86087b1dcf7c3de1d99e3fd698e6ed4e6d15afaa657760fbba1da29a80d\": rpc error: code = NotFound desc = could not find container \"3d7da86087b1dcf7c3de1d99e3fd698e6ed4e6d15afaa657760fbba1da29a80d\": container with ID starting with 3d7da86087b1dcf7c3de1d99e3fd698e6ed4e6d15afaa657760fbba1da29a80d not found: ID does not exist" Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.375787 4813 scope.go:117] "RemoveContainer" containerID="64c9764d6fc348df55255dd626468d01143aa2b710ad9b847980a64fa6b3b0e2" Feb 19 20:12:44 crc kubenswrapper[4813]: E0219 20:12:44.376257 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64c9764d6fc348df55255dd626468d01143aa2b710ad9b847980a64fa6b3b0e2\": container with ID starting with 64c9764d6fc348df55255dd626468d01143aa2b710ad9b847980a64fa6b3b0e2 not found: ID does not exist" containerID="64c9764d6fc348df55255dd626468d01143aa2b710ad9b847980a64fa6b3b0e2" Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.376299 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"64c9764d6fc348df55255dd626468d01143aa2b710ad9b847980a64fa6b3b0e2"} err="failed to get container status \"64c9764d6fc348df55255dd626468d01143aa2b710ad9b847980a64fa6b3b0e2\": rpc error: code = NotFound desc = could not find container \"64c9764d6fc348df55255dd626468d01143aa2b710ad9b847980a64fa6b3b0e2\": container with ID starting with 64c9764d6fc348df55255dd626468d01143aa2b710ad9b847980a64fa6b3b0e2 not found: ID does not exist" Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.376315 4813 scope.go:117] "RemoveContainer" containerID="015b15072f7a703c0f0fc8ad847cf09670bb269ea65db3749e2a5806b84e204a" Feb 19 20:12:44 crc kubenswrapper[4813]: E0219 20:12:44.376592 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"015b15072f7a703c0f0fc8ad847cf09670bb269ea65db3749e2a5806b84e204a\": container with ID starting with 015b15072f7a703c0f0fc8ad847cf09670bb269ea65db3749e2a5806b84e204a not found: ID does not exist" containerID="015b15072f7a703c0f0fc8ad847cf09670bb269ea65db3749e2a5806b84e204a" Feb 19 20:12:44 crc kubenswrapper[4813]: I0219 20:12:44.376612 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"015b15072f7a703c0f0fc8ad847cf09670bb269ea65db3749e2a5806b84e204a"} err="failed to get container status \"015b15072f7a703c0f0fc8ad847cf09670bb269ea65db3749e2a5806b84e204a\": rpc error: code = NotFound desc = could not find container \"015b15072f7a703c0f0fc8ad847cf09670bb269ea65db3749e2a5806b84e204a\": container with ID starting with 015b15072f7a703c0f0fc8ad847cf09670bb269ea65db3749e2a5806b84e204a not found: ID does not exist" Feb 19 20:12:45 crc kubenswrapper[4813]: I0219 20:12:45.275364 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6fhq" 
event={"ID":"7809c79f-0a8d-46ab-8679-8c90aef6e5ab","Type":"ContainerStarted","Data":"a09a6705bb1b35e65e314b6e1988181a9c4e557310ff76d3598ffe899a7715ee"} Feb 19 20:12:45 crc kubenswrapper[4813]: I0219 20:12:45.301881 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m6fhq" podStartSLOduration=2.689506423 podStartE2EDuration="10.301861606s" podCreationTimestamp="2026-02-19 20:12:35 +0000 UTC" firstStartedPulling="2026-02-19 20:12:37.173360739 +0000 UTC m=+6176.398801280" lastFinishedPulling="2026-02-19 20:12:44.785715882 +0000 UTC m=+6184.011156463" observedRunningTime="2026-02-19 20:12:45.292479676 +0000 UTC m=+6184.517920217" watchObservedRunningTime="2026-02-19 20:12:45.301861606 +0000 UTC m=+6184.527302137" Feb 19 20:12:45 crc kubenswrapper[4813]: I0219 20:12:45.489747 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b95d376-cc11-4d8c-a140-e10c92b17b4f" path="/var/lib/kubelet/pods/5b95d376-cc11-4d8c-a140-e10c92b17b4f/volumes" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.068666 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.069257 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.281843 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.282195 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.345181 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:46 crc 
kubenswrapper[4813]: I0219 20:12:46.854824 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl"] Feb 19 20:12:46 crc kubenswrapper[4813]: E0219 20:12:46.856671 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" containerName="init" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.856867 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" containerName="init" Feb 19 20:12:46 crc kubenswrapper[4813]: E0219 20:12:46.857043 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b95d376-cc11-4d8c-a140-e10c92b17b4f" containerName="extract-content" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.857166 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b95d376-cc11-4d8c-a140-e10c92b17b4f" containerName="extract-content" Feb 19 20:12:46 crc kubenswrapper[4813]: E0219 20:12:46.857294 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b95d376-cc11-4d8c-a140-e10c92b17b4f" containerName="extract-utilities" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.857420 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b95d376-cc11-4d8c-a140-e10c92b17b4f" containerName="extract-utilities" Feb 19 20:12:46 crc kubenswrapper[4813]: E0219 20:12:46.857585 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b95d376-cc11-4d8c-a140-e10c92b17b4f" containerName="registry-server" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.857690 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b95d376-cc11-4d8c-a140-e10c92b17b4f" containerName="registry-server" Feb 19 20:12:46 crc kubenswrapper[4813]: E0219 20:12:46.857818 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" containerName="dnsmasq-dns" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 
20:12:46.857984 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" containerName="dnsmasq-dns" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.858531 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b95d376-cc11-4d8c-a140-e10c92b17b4f" containerName="registry-server" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.858676 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9130f7-d795-4083-b0ef-cbf28fa2ef0c" containerName="dnsmasq-dns" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.860518 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.863499 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.863708 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.863884 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.864102 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.874055 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl"] Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.947835 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc7s5\" (UniqueName: \"kubernetes.io/projected/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-kube-api-access-dc7s5\") pod 
\"pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.947884 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.947929 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.947987 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:46 crc kubenswrapper[4813]: I0219 20:12:46.948013 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-pre-adoption-validation-combined-ca-bundle\") pod 
\"pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:47 crc kubenswrapper[4813]: I0219 20:12:47.050312 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc7s5\" (UniqueName: \"kubernetes.io/projected/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-kube-api-access-dc7s5\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:47 crc kubenswrapper[4813]: I0219 20:12:47.050381 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:47 crc kubenswrapper[4813]: I0219 20:12:47.050443 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:47 crc kubenswrapper[4813]: I0219 20:12:47.050509 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " 
pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:47 crc kubenswrapper[4813]: I0219 20:12:47.050543 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:47 crc kubenswrapper[4813]: I0219 20:12:47.056898 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:47 crc kubenswrapper[4813]: I0219 20:12:47.057295 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:47 crc kubenswrapper[4813]: I0219 20:12:47.057324 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:47 crc kubenswrapper[4813]: I0219 20:12:47.058401 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:47 crc kubenswrapper[4813]: I0219 20:12:47.068159 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc7s5\" (UniqueName: \"kubernetes.io/projected/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-kube-api-access-dc7s5\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:47 crc kubenswrapper[4813]: I0219 20:12:47.124221 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m6fhq" podUID="7809c79f-0a8d-46ab-8679-8c90aef6e5ab" containerName="registry-server" probeResult="failure" output=< Feb 19 20:12:47 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Feb 19 20:12:47 crc kubenswrapper[4813]: > Feb 19 20:12:47 crc kubenswrapper[4813]: I0219 20:12:47.185741 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:12:47 crc kubenswrapper[4813]: I0219 20:12:47.357016 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:47 crc kubenswrapper[4813]: I0219 20:12:47.910215 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl"] Feb 19 20:12:48 crc kubenswrapper[4813]: I0219 20:12:48.313084 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" event={"ID":"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29","Type":"ContainerStarted","Data":"59562db3bbc88be979292a34193eab36a0e5583c943ee76280c64b2228863c33"} Feb 19 20:12:49 crc kubenswrapper[4813]: I0219 20:12:49.707205 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s25gg"] Feb 19 20:12:50 crc kubenswrapper[4813]: I0219 20:12:50.334692 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s25gg" podUID="1303c253-fd20-41e7-9aab-deda5cd4093b" containerName="registry-server" containerID="cri-o://f4215628d975183a94507b356ca9b95dd2396844e1c55cb9ac4517c1336aa5e4" gracePeriod=2 Feb 19 20:12:50 crc kubenswrapper[4813]: I0219 20:12:50.872460 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:50 crc kubenswrapper[4813]: I0219 20:12:50.954445 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1303c253-fd20-41e7-9aab-deda5cd4093b-utilities\") pod \"1303c253-fd20-41e7-9aab-deda5cd4093b\" (UID: \"1303c253-fd20-41e7-9aab-deda5cd4093b\") " Feb 19 20:12:50 crc kubenswrapper[4813]: I0219 20:12:50.955211 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkkv8\" (UniqueName: \"kubernetes.io/projected/1303c253-fd20-41e7-9aab-deda5cd4093b-kube-api-access-tkkv8\") pod \"1303c253-fd20-41e7-9aab-deda5cd4093b\" (UID: \"1303c253-fd20-41e7-9aab-deda5cd4093b\") " Feb 19 20:12:50 crc kubenswrapper[4813]: I0219 20:12:50.955291 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1303c253-fd20-41e7-9aab-deda5cd4093b-catalog-content\") pod \"1303c253-fd20-41e7-9aab-deda5cd4093b\" (UID: \"1303c253-fd20-41e7-9aab-deda5cd4093b\") " Feb 19 20:12:50 crc kubenswrapper[4813]: I0219 20:12:50.955366 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1303c253-fd20-41e7-9aab-deda5cd4093b-utilities" (OuterVolumeSpecName: "utilities") pod "1303c253-fd20-41e7-9aab-deda5cd4093b" (UID: "1303c253-fd20-41e7-9aab-deda5cd4093b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:12:50 crc kubenswrapper[4813]: I0219 20:12:50.956425 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1303c253-fd20-41e7-9aab-deda5cd4093b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:50 crc kubenswrapper[4813]: I0219 20:12:50.965221 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1303c253-fd20-41e7-9aab-deda5cd4093b-kube-api-access-tkkv8" (OuterVolumeSpecName: "kube-api-access-tkkv8") pod "1303c253-fd20-41e7-9aab-deda5cd4093b" (UID: "1303c253-fd20-41e7-9aab-deda5cd4093b"). InnerVolumeSpecName "kube-api-access-tkkv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:12:51 crc kubenswrapper[4813]: I0219 20:12:51.013039 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1303c253-fd20-41e7-9aab-deda5cd4093b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1303c253-fd20-41e7-9aab-deda5cd4093b" (UID: "1303c253-fd20-41e7-9aab-deda5cd4093b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:12:51 crc kubenswrapper[4813]: I0219 20:12:51.058590 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkkv8\" (UniqueName: \"kubernetes.io/projected/1303c253-fd20-41e7-9aab-deda5cd4093b-kube-api-access-tkkv8\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:51 crc kubenswrapper[4813]: I0219 20:12:51.058630 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1303c253-fd20-41e7-9aab-deda5cd4093b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:51 crc kubenswrapper[4813]: I0219 20:12:51.350606 4813 generic.go:334] "Generic (PLEG): container finished" podID="1303c253-fd20-41e7-9aab-deda5cd4093b" containerID="f4215628d975183a94507b356ca9b95dd2396844e1c55cb9ac4517c1336aa5e4" exitCode=0 Feb 19 20:12:51 crc kubenswrapper[4813]: I0219 20:12:51.350658 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s25gg" event={"ID":"1303c253-fd20-41e7-9aab-deda5cd4093b","Type":"ContainerDied","Data":"f4215628d975183a94507b356ca9b95dd2396844e1c55cb9ac4517c1336aa5e4"} Feb 19 20:12:51 crc kubenswrapper[4813]: I0219 20:12:51.350668 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s25gg" Feb 19 20:12:51 crc kubenswrapper[4813]: I0219 20:12:51.350689 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s25gg" event={"ID":"1303c253-fd20-41e7-9aab-deda5cd4093b","Type":"ContainerDied","Data":"92eae21d1fca81cf7c6496061289ac1a29e1c3f7a24db13d6da5bd077961523c"} Feb 19 20:12:51 crc kubenswrapper[4813]: I0219 20:12:51.350709 4813 scope.go:117] "RemoveContainer" containerID="f4215628d975183a94507b356ca9b95dd2396844e1c55cb9ac4517c1336aa5e4" Feb 19 20:12:51 crc kubenswrapper[4813]: I0219 20:12:51.391461 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s25gg"] Feb 19 20:12:51 crc kubenswrapper[4813]: I0219 20:12:51.402104 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s25gg"] Feb 19 20:12:51 crc kubenswrapper[4813]: I0219 20:12:51.487813 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1303c253-fd20-41e7-9aab-deda5cd4093b" path="/var/lib/kubelet/pods/1303c253-fd20-41e7-9aab-deda5cd4093b/volumes" Feb 19 20:12:56 crc kubenswrapper[4813]: I0219 20:12:56.161866 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:56 crc kubenswrapper[4813]: I0219 20:12:56.242940 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:56 crc kubenswrapper[4813]: I0219 20:12:56.425847 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m6fhq"] Feb 19 20:12:57 crc kubenswrapper[4813]: I0219 20:12:57.222560 4813 scope.go:117] "RemoveContainer" containerID="46679de4e33723e85839f3c2e9da6c7b1839e8a9eecab565c7715857c9d6c15d" Feb 19 20:12:57 crc kubenswrapper[4813]: I0219 20:12:57.297749 4813 scope.go:117] "RemoveContainer" 
containerID="c864a458eedca858ea9167feb7d41d50f69a1ca544ff9900836c679d5f5de996" Feb 19 20:12:57 crc kubenswrapper[4813]: I0219 20:12:57.377230 4813 scope.go:117] "RemoveContainer" containerID="f4215628d975183a94507b356ca9b95dd2396844e1c55cb9ac4517c1336aa5e4" Feb 19 20:12:57 crc kubenswrapper[4813]: E0219 20:12:57.377831 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4215628d975183a94507b356ca9b95dd2396844e1c55cb9ac4517c1336aa5e4\": container with ID starting with f4215628d975183a94507b356ca9b95dd2396844e1c55cb9ac4517c1336aa5e4 not found: ID does not exist" containerID="f4215628d975183a94507b356ca9b95dd2396844e1c55cb9ac4517c1336aa5e4" Feb 19 20:12:57 crc kubenswrapper[4813]: I0219 20:12:57.377887 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4215628d975183a94507b356ca9b95dd2396844e1c55cb9ac4517c1336aa5e4"} err="failed to get container status \"f4215628d975183a94507b356ca9b95dd2396844e1c55cb9ac4517c1336aa5e4\": rpc error: code = NotFound desc = could not find container \"f4215628d975183a94507b356ca9b95dd2396844e1c55cb9ac4517c1336aa5e4\": container with ID starting with f4215628d975183a94507b356ca9b95dd2396844e1c55cb9ac4517c1336aa5e4 not found: ID does not exist" Feb 19 20:12:57 crc kubenswrapper[4813]: I0219 20:12:57.377918 4813 scope.go:117] "RemoveContainer" containerID="46679de4e33723e85839f3c2e9da6c7b1839e8a9eecab565c7715857c9d6c15d" Feb 19 20:12:57 crc kubenswrapper[4813]: E0219 20:12:57.378276 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46679de4e33723e85839f3c2e9da6c7b1839e8a9eecab565c7715857c9d6c15d\": container with ID starting with 46679de4e33723e85839f3c2e9da6c7b1839e8a9eecab565c7715857c9d6c15d not found: ID does not exist" containerID="46679de4e33723e85839f3c2e9da6c7b1839e8a9eecab565c7715857c9d6c15d" Feb 19 20:12:57 crc 
kubenswrapper[4813]: I0219 20:12:57.378333 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46679de4e33723e85839f3c2e9da6c7b1839e8a9eecab565c7715857c9d6c15d"} err="failed to get container status \"46679de4e33723e85839f3c2e9da6c7b1839e8a9eecab565c7715857c9d6c15d\": rpc error: code = NotFound desc = could not find container \"46679de4e33723e85839f3c2e9da6c7b1839e8a9eecab565c7715857c9d6c15d\": container with ID starting with 46679de4e33723e85839f3c2e9da6c7b1839e8a9eecab565c7715857c9d6c15d not found: ID does not exist" Feb 19 20:12:57 crc kubenswrapper[4813]: I0219 20:12:57.378370 4813 scope.go:117] "RemoveContainer" containerID="c864a458eedca858ea9167feb7d41d50f69a1ca544ff9900836c679d5f5de996" Feb 19 20:12:57 crc kubenswrapper[4813]: E0219 20:12:57.378834 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c864a458eedca858ea9167feb7d41d50f69a1ca544ff9900836c679d5f5de996\": container with ID starting with c864a458eedca858ea9167feb7d41d50f69a1ca544ff9900836c679d5f5de996 not found: ID does not exist" containerID="c864a458eedca858ea9167feb7d41d50f69a1ca544ff9900836c679d5f5de996" Feb 19 20:12:57 crc kubenswrapper[4813]: I0219 20:12:57.378883 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c864a458eedca858ea9167feb7d41d50f69a1ca544ff9900836c679d5f5de996"} err="failed to get container status \"c864a458eedca858ea9167feb7d41d50f69a1ca544ff9900836c679d5f5de996\": rpc error: code = NotFound desc = could not find container \"c864a458eedca858ea9167feb7d41d50f69a1ca544ff9900836c679d5f5de996\": container with ID starting with c864a458eedca858ea9167feb7d41d50f69a1ca544ff9900836c679d5f5de996 not found: ID does not exist" Feb 19 20:12:57 crc kubenswrapper[4813]: I0219 20:12:57.409812 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m6fhq" 
podUID="7809c79f-0a8d-46ab-8679-8c90aef6e5ab" containerName="registry-server" containerID="cri-o://a09a6705bb1b35e65e314b6e1988181a9c4e557310ff76d3598ffe899a7715ee" gracePeriod=2 Feb 19 20:12:57 crc kubenswrapper[4813]: I0219 20:12:57.831091 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:57 crc kubenswrapper[4813]: I0219 20:12:57.920606 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltsm5\" (UniqueName: \"kubernetes.io/projected/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-kube-api-access-ltsm5\") pod \"7809c79f-0a8d-46ab-8679-8c90aef6e5ab\" (UID: \"7809c79f-0a8d-46ab-8679-8c90aef6e5ab\") " Feb 19 20:12:57 crc kubenswrapper[4813]: I0219 20:12:57.920703 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-utilities\") pod \"7809c79f-0a8d-46ab-8679-8c90aef6e5ab\" (UID: \"7809c79f-0a8d-46ab-8679-8c90aef6e5ab\") " Feb 19 20:12:57 crc kubenswrapper[4813]: I0219 20:12:57.920826 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-catalog-content\") pod \"7809c79f-0a8d-46ab-8679-8c90aef6e5ab\" (UID: \"7809c79f-0a8d-46ab-8679-8c90aef6e5ab\") " Feb 19 20:12:57 crc kubenswrapper[4813]: I0219 20:12:57.921924 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-utilities" (OuterVolumeSpecName: "utilities") pod "7809c79f-0a8d-46ab-8679-8c90aef6e5ab" (UID: "7809c79f-0a8d-46ab-8679-8c90aef6e5ab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:12:57 crc kubenswrapper[4813]: I0219 20:12:57.925097 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-kube-api-access-ltsm5" (OuterVolumeSpecName: "kube-api-access-ltsm5") pod "7809c79f-0a8d-46ab-8679-8c90aef6e5ab" (UID: "7809c79f-0a8d-46ab-8679-8c90aef6e5ab"). InnerVolumeSpecName "kube-api-access-ltsm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.023543 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltsm5\" (UniqueName: \"kubernetes.io/projected/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-kube-api-access-ltsm5\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.023590 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.046314 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7809c79f-0a8d-46ab-8679-8c90aef6e5ab" (UID: "7809c79f-0a8d-46ab-8679-8c90aef6e5ab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.125381 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7809c79f-0a8d-46ab-8679-8c90aef6e5ab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.425238 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" event={"ID":"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29","Type":"ContainerStarted","Data":"0c8d0185d9e5dc5c50187920b62f4941b429fdf5e87787f59875b1878bf562b8"} Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.432796 4813 generic.go:334] "Generic (PLEG): container finished" podID="7809c79f-0a8d-46ab-8679-8c90aef6e5ab" containerID="a09a6705bb1b35e65e314b6e1988181a9c4e557310ff76d3598ffe899a7715ee" exitCode=0 Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.432842 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6fhq" event={"ID":"7809c79f-0a8d-46ab-8679-8c90aef6e5ab","Type":"ContainerDied","Data":"a09a6705bb1b35e65e314b6e1988181a9c4e557310ff76d3598ffe899a7715ee"} Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.432867 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m6fhq" event={"ID":"7809c79f-0a8d-46ab-8679-8c90aef6e5ab","Type":"ContainerDied","Data":"94cffd516e3baeba27df890af4dd59b43e985548dc0020fc10ea3d9282b060b8"} Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.432872 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m6fhq" Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.432882 4813 scope.go:117] "RemoveContainer" containerID="a09a6705bb1b35e65e314b6e1988181a9c4e557310ff76d3598ffe899a7715ee" Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.458887 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" podStartSLOduration=2.995578508 podStartE2EDuration="12.458860515s" podCreationTimestamp="2026-02-19 20:12:46 +0000 UTC" firstStartedPulling="2026-02-19 20:12:47.914604373 +0000 UTC m=+6187.140044914" lastFinishedPulling="2026-02-19 20:12:57.37788638 +0000 UTC m=+6196.603326921" observedRunningTime="2026-02-19 20:12:58.451821358 +0000 UTC m=+6197.677261949" watchObservedRunningTime="2026-02-19 20:12:58.458860515 +0000 UTC m=+6197.684301096" Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.500547 4813 scope.go:117] "RemoveContainer" containerID="bc6b4468289d771286a7efcc13c960c54ecad0d48898bdb1d1c82f5212ac07b7" Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.508768 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m6fhq"] Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.522519 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m6fhq"] Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.533795 4813 scope.go:117] "RemoveContainer" containerID="7eb9c50ee9214a4367ce6f5b34f97956da0b04bb2a4167720e210d497ba797c8" Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.596385 4813 scope.go:117] "RemoveContainer" containerID="a09a6705bb1b35e65e314b6e1988181a9c4e557310ff76d3598ffe899a7715ee" Feb 19 20:12:58 crc kubenswrapper[4813]: E0219 20:12:58.597010 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a09a6705bb1b35e65e314b6e1988181a9c4e557310ff76d3598ffe899a7715ee\": container with ID starting with a09a6705bb1b35e65e314b6e1988181a9c4e557310ff76d3598ffe899a7715ee not found: ID does not exist" containerID="a09a6705bb1b35e65e314b6e1988181a9c4e557310ff76d3598ffe899a7715ee" Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.597058 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a09a6705bb1b35e65e314b6e1988181a9c4e557310ff76d3598ffe899a7715ee"} err="failed to get container status \"a09a6705bb1b35e65e314b6e1988181a9c4e557310ff76d3598ffe899a7715ee\": rpc error: code = NotFound desc = could not find container \"a09a6705bb1b35e65e314b6e1988181a9c4e557310ff76d3598ffe899a7715ee\": container with ID starting with a09a6705bb1b35e65e314b6e1988181a9c4e557310ff76d3598ffe899a7715ee not found: ID does not exist" Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.597085 4813 scope.go:117] "RemoveContainer" containerID="bc6b4468289d771286a7efcc13c960c54ecad0d48898bdb1d1c82f5212ac07b7" Feb 19 20:12:58 crc kubenswrapper[4813]: E0219 20:12:58.597321 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc6b4468289d771286a7efcc13c960c54ecad0d48898bdb1d1c82f5212ac07b7\": container with ID starting with bc6b4468289d771286a7efcc13c960c54ecad0d48898bdb1d1c82f5212ac07b7 not found: ID does not exist" containerID="bc6b4468289d771286a7efcc13c960c54ecad0d48898bdb1d1c82f5212ac07b7" Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.597351 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc6b4468289d771286a7efcc13c960c54ecad0d48898bdb1d1c82f5212ac07b7"} err="failed to get container status \"bc6b4468289d771286a7efcc13c960c54ecad0d48898bdb1d1c82f5212ac07b7\": rpc error: code = NotFound desc = could not find container \"bc6b4468289d771286a7efcc13c960c54ecad0d48898bdb1d1c82f5212ac07b7\": container with ID 
starting with bc6b4468289d771286a7efcc13c960c54ecad0d48898bdb1d1c82f5212ac07b7 not found: ID does not exist" Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.597368 4813 scope.go:117] "RemoveContainer" containerID="7eb9c50ee9214a4367ce6f5b34f97956da0b04bb2a4167720e210d497ba797c8" Feb 19 20:12:58 crc kubenswrapper[4813]: E0219 20:12:58.597548 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eb9c50ee9214a4367ce6f5b34f97956da0b04bb2a4167720e210d497ba797c8\": container with ID starting with 7eb9c50ee9214a4367ce6f5b34f97956da0b04bb2a4167720e210d497ba797c8 not found: ID does not exist" containerID="7eb9c50ee9214a4367ce6f5b34f97956da0b04bb2a4167720e210d497ba797c8" Feb 19 20:12:58 crc kubenswrapper[4813]: I0219 20:12:58.597569 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb9c50ee9214a4367ce6f5b34f97956da0b04bb2a4167720e210d497ba797c8"} err="failed to get container status \"7eb9c50ee9214a4367ce6f5b34f97956da0b04bb2a4167720e210d497ba797c8\": rpc error: code = NotFound desc = could not find container \"7eb9c50ee9214a4367ce6f5b34f97956da0b04bb2a4167720e210d497ba797c8\": container with ID starting with 7eb9c50ee9214a4367ce6f5b34f97956da0b04bb2a4167720e210d497ba797c8 not found: ID does not exist" Feb 19 20:12:59 crc kubenswrapper[4813]: I0219 20:12:59.492418 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7809c79f-0a8d-46ab-8679-8c90aef6e5ab" path="/var/lib/kubelet/pods/7809c79f-0a8d-46ab-8679-8c90aef6e5ab/volumes" Feb 19 20:13:00 crc kubenswrapper[4813]: I0219 20:13:00.330069 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:13:00 crc kubenswrapper[4813]: I0219 
20:13:00.330157 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:13:11 crc kubenswrapper[4813]: I0219 20:13:11.597100 4813 generic.go:334] "Generic (PLEG): container finished" podID="28d1a1d6-f34c-4c76-ace9-1f5d49cbda29" containerID="0c8d0185d9e5dc5c50187920b62f4941b429fdf5e87787f59875b1878bf562b8" exitCode=0 Feb 19 20:13:11 crc kubenswrapper[4813]: I0219 20:13:11.597217 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" event={"ID":"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29","Type":"ContainerDied","Data":"0c8d0185d9e5dc5c50187920b62f4941b429fdf5e87787f59875b1878bf562b8"} Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.164629 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.188700 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-ceph\") pod \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.188772 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-inventory\") pod \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.188895 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc7s5\" (UniqueName: \"kubernetes.io/projected/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-kube-api-access-dc7s5\") pod \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.189025 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-pre-adoption-validation-combined-ca-bundle\") pod \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.189114 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-ssh-key-openstack-cell1\") pod \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\" (UID: \"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29\") " Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.195714 4813 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-kube-api-access-dc7s5" (OuterVolumeSpecName: "kube-api-access-dc7s5") pod "28d1a1d6-f34c-4c76-ace9-1f5d49cbda29" (UID: "28d1a1d6-f34c-4c76-ace9-1f5d49cbda29"). InnerVolumeSpecName "kube-api-access-dc7s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.195987 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-ceph" (OuterVolumeSpecName: "ceph") pod "28d1a1d6-f34c-4c76-ace9-1f5d49cbda29" (UID: "28d1a1d6-f34c-4c76-ace9-1f5d49cbda29"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.205533 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "28d1a1d6-f34c-4c76-ace9-1f5d49cbda29" (UID: "28d1a1d6-f34c-4c76-ace9-1f5d49cbda29"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.221378 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-inventory" (OuterVolumeSpecName: "inventory") pod "28d1a1d6-f34c-4c76-ace9-1f5d49cbda29" (UID: "28d1a1d6-f34c-4c76-ace9-1f5d49cbda29"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.228835 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "28d1a1d6-f34c-4c76-ace9-1f5d49cbda29" (UID: "28d1a1d6-f34c-4c76-ace9-1f5d49cbda29"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.291406 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc7s5\" (UniqueName: \"kubernetes.io/projected/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-kube-api-access-dc7s5\") on node \"crc\" DevicePath \"\"" Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.291453 4813 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.291479 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.291495 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.291509 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d1a1d6-f34c-4c76-ace9-1f5d49cbda29-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.628249 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" event={"ID":"28d1a1d6-f34c-4c76-ace9-1f5d49cbda29","Type":"ContainerDied","Data":"59562db3bbc88be979292a34193eab36a0e5583c943ee76280c64b2228863c33"} Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.628297 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59562db3bbc88be979292a34193eab36a0e5583c943ee76280c64b2228863c33" Feb 19 20:13:13 crc kubenswrapper[4813]: I0219 20:13:13.628868 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.380845 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq"] Feb 19 20:13:19 crc kubenswrapper[4813]: E0219 20:13:19.381983 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d1a1d6-f34c-4c76-ace9-1f5d49cbda29" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.382000 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d1a1d6-f34c-4c76-ace9-1f5d49cbda29" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Feb 19 20:13:19 crc kubenswrapper[4813]: E0219 20:13:19.382033 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7809c79f-0a8d-46ab-8679-8c90aef6e5ab" containerName="registry-server" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.382056 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7809c79f-0a8d-46ab-8679-8c90aef6e5ab" containerName="registry-server" Feb 19 20:13:19 crc kubenswrapper[4813]: E0219 20:13:19.382072 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7809c79f-0a8d-46ab-8679-8c90aef6e5ab" containerName="extract-utilities" Feb 19 20:13:19 crc 
kubenswrapper[4813]: I0219 20:13:19.382079 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7809c79f-0a8d-46ab-8679-8c90aef6e5ab" containerName="extract-utilities" Feb 19 20:13:19 crc kubenswrapper[4813]: E0219 20:13:19.382093 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1303c253-fd20-41e7-9aab-deda5cd4093b" containerName="extract-content" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.382100 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1303c253-fd20-41e7-9aab-deda5cd4093b" containerName="extract-content" Feb 19 20:13:19 crc kubenswrapper[4813]: E0219 20:13:19.382117 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7809c79f-0a8d-46ab-8679-8c90aef6e5ab" containerName="extract-content" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.382125 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7809c79f-0a8d-46ab-8679-8c90aef6e5ab" containerName="extract-content" Feb 19 20:13:19 crc kubenswrapper[4813]: E0219 20:13:19.382144 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1303c253-fd20-41e7-9aab-deda5cd4093b" containerName="registry-server" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.382151 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1303c253-fd20-41e7-9aab-deda5cd4093b" containerName="registry-server" Feb 19 20:13:19 crc kubenswrapper[4813]: E0219 20:13:19.382175 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1303c253-fd20-41e7-9aab-deda5cd4093b" containerName="extract-utilities" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.382183 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1303c253-fd20-41e7-9aab-deda5cd4093b" containerName="extract-utilities" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.382416 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7809c79f-0a8d-46ab-8679-8c90aef6e5ab" containerName="registry-server" Feb 19 20:13:19 crc 
kubenswrapper[4813]: I0219 20:13:19.382438 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1303c253-fd20-41e7-9aab-deda5cd4093b" containerName="registry-server" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.382447 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d1a1d6-f34c-4c76-ace9-1f5d49cbda29" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.383419 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.386524 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.386594 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.387102 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.387834 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.397173 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq"] Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.422978 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc 
kubenswrapper[4813]: I0219 20:13:19.423029 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf6g8\" (UniqueName: \"kubernetes.io/projected/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-kube-api-access-zf6g8\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.423055 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.423179 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.423265 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.525008 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.525055 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf6g8\" (UniqueName: \"kubernetes.io/projected/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-kube-api-access-zf6g8\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.525172 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.525508 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.525713 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq\" (UID: 
\"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.531860 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.532066 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.533073 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.533266 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.545043 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zf6g8\" (UniqueName: \"kubernetes.io/projected/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-kube-api-access-zf6g8\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:19 crc kubenswrapper[4813]: I0219 20:13:19.713915 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:13:20 crc kubenswrapper[4813]: I0219 20:13:20.347197 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq"] Feb 19 20:13:20 crc kubenswrapper[4813]: W0219 20:13:20.354519 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a8dd50d_5b12_495b_961f_4ebd2ebe3033.slice/crio-82654a3fa8a8ff2291cd454d68beaa2c783bd3524f1257b026a7f136dac936ed WatchSource:0}: Error finding container 82654a3fa8a8ff2291cd454d68beaa2c783bd3524f1257b026a7f136dac936ed: Status 404 returned error can't find the container with id 82654a3fa8a8ff2291cd454d68beaa2c783bd3524f1257b026a7f136dac936ed Feb 19 20:13:20 crc kubenswrapper[4813]: I0219 20:13:20.357698 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:13:20 crc kubenswrapper[4813]: I0219 20:13:20.712526 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" event={"ID":"6a8dd50d-5b12-495b-961f-4ebd2ebe3033","Type":"ContainerStarted","Data":"82654a3fa8a8ff2291cd454d68beaa2c783bd3524f1257b026a7f136dac936ed"} Feb 19 20:13:21 crc kubenswrapper[4813]: I0219 20:13:21.727710 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" 
event={"ID":"6a8dd50d-5b12-495b-961f-4ebd2ebe3033","Type":"ContainerStarted","Data":"019d84991e4663a285e9ea54353cf27ed0fc6500878202957239fd225acedfe2"} Feb 19 20:13:21 crc kubenswrapper[4813]: I0219 20:13:21.755542 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" podStartSLOduration=2.139205155 podStartE2EDuration="2.755516139s" podCreationTimestamp="2026-02-19 20:13:19 +0000 UTC" firstStartedPulling="2026-02-19 20:13:20.357476105 +0000 UTC m=+6219.582916646" lastFinishedPulling="2026-02-19 20:13:20.973787089 +0000 UTC m=+6220.199227630" observedRunningTime="2026-02-19 20:13:21.743079204 +0000 UTC m=+6220.968519765" watchObservedRunningTime="2026-02-19 20:13:21.755516139 +0000 UTC m=+6220.980956700" Feb 19 20:13:30 crc kubenswrapper[4813]: I0219 20:13:30.329784 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:13:30 crc kubenswrapper[4813]: I0219 20:13:30.330579 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:13:30 crc kubenswrapper[4813]: I0219 20:13:30.330642 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 20:13:30 crc kubenswrapper[4813]: I0219 20:13:30.331586 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b0d24274d72b671d1bb0829fa5b282a7b445e76453ddbdce75e0526585757520"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:13:30 crc kubenswrapper[4813]: I0219 20:13:30.331694 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://b0d24274d72b671d1bb0829fa5b282a7b445e76453ddbdce75e0526585757520" gracePeriod=600 Feb 19 20:13:30 crc kubenswrapper[4813]: I0219 20:13:30.846433 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="b0d24274d72b671d1bb0829fa5b282a7b445e76453ddbdce75e0526585757520" exitCode=0 Feb 19 20:13:30 crc kubenswrapper[4813]: I0219 20:13:30.846523 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"b0d24274d72b671d1bb0829fa5b282a7b445e76453ddbdce75e0526585757520"} Feb 19 20:13:30 crc kubenswrapper[4813]: I0219 20:13:30.846848 4813 scope.go:117] "RemoveContainer" containerID="77599afcb1bbfe89531d92db83d4ec060079eb9318249967ecfd9d52fd324241" Feb 19 20:13:31 crc kubenswrapper[4813]: I0219 20:13:31.878077 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5"} Feb 19 20:14:19 crc kubenswrapper[4813]: I0219 20:14:19.042428 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-qjvd2"] Feb 19 20:14:19 crc kubenswrapper[4813]: I0219 20:14:19.052592 4813 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-qjvd2"] Feb 19 20:14:19 crc kubenswrapper[4813]: I0219 20:14:19.485187 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55d401e7-1ccb-4163-9366-cab53d918c33" path="/var/lib/kubelet/pods/55d401e7-1ccb-4163-9366-cab53d918c33/volumes" Feb 19 20:14:22 crc kubenswrapper[4813]: I0219 20:14:22.039667 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-b182-account-create-update-mz22g"] Feb 19 20:14:22 crc kubenswrapper[4813]: I0219 20:14:22.053806 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-b182-account-create-update-mz22g"] Feb 19 20:14:23 crc kubenswrapper[4813]: I0219 20:14:23.491469 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a44125bc-fce5-47ed-a37f-0b7f73470d95" path="/var/lib/kubelet/pods/a44125bc-fce5-47ed-a37f-0b7f73470d95/volumes" Feb 19 20:14:27 crc kubenswrapper[4813]: I0219 20:14:27.053352 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-bhnwp"] Feb 19 20:14:27 crc kubenswrapper[4813]: I0219 20:14:27.066278 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-5107-account-create-update-qv6n2"] Feb 19 20:14:27 crc kubenswrapper[4813]: I0219 20:14:27.076217 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-bhnwp"] Feb 19 20:14:27 crc kubenswrapper[4813]: I0219 20:14:27.085228 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-5107-account-create-update-qv6n2"] Feb 19 20:14:27 crc kubenswrapper[4813]: I0219 20:14:27.484645 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14b1ae18-f817-4a70-b54e-9571ed349e07" path="/var/lib/kubelet/pods/14b1ae18-f817-4a70-b54e-9571ed349e07/volumes" Feb 19 20:14:27 crc kubenswrapper[4813]: I0219 20:14:27.485490 4813 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="d83b25e4-4a7e-4e6f-94da-25793b440419" path="/var/lib/kubelet/pods/d83b25e4-4a7e-4e6f-94da-25793b440419/volumes" Feb 19 20:14:57 crc kubenswrapper[4813]: I0219 20:14:57.431185 4813 scope.go:117] "RemoveContainer" containerID="f782edb18308bdd54775a5fb3aba2e804a17413ed6d02074a42c09c72d1a9c47" Feb 19 20:14:57 crc kubenswrapper[4813]: I0219 20:14:57.469497 4813 scope.go:117] "RemoveContainer" containerID="9ee6ea12376792de511cb17ed67c16a30248607780b9ba0d8947b0921bba6870" Feb 19 20:14:57 crc kubenswrapper[4813]: I0219 20:14:57.518758 4813 scope.go:117] "RemoveContainer" containerID="4876e5288eac2af5e778f5aa255d8ffad1d9147536ac1ca3357edc4910d1159b" Feb 19 20:14:57 crc kubenswrapper[4813]: I0219 20:14:57.573894 4813 scope.go:117] "RemoveContainer" containerID="a87f0ba8eeb39bf7450f5036a23b5f8cece1e0c3a0e8cec3387c864de42b758b" Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.161788 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h"] Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.163886 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.166421 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.168183 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.175720 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h"] Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.243653 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4775m\" (UniqueName: \"kubernetes.io/projected/02122ced-659c-46e8-84ad-a54a7db7347a-kube-api-access-4775m\") pod \"collect-profiles-29525535-5l55h\" (UID: \"02122ced-659c-46e8-84ad-a54a7db7347a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.243759 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02122ced-659c-46e8-84ad-a54a7db7347a-config-volume\") pod \"collect-profiles-29525535-5l55h\" (UID: \"02122ced-659c-46e8-84ad-a54a7db7347a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.243846 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02122ced-659c-46e8-84ad-a54a7db7347a-secret-volume\") pod \"collect-profiles-29525535-5l55h\" (UID: \"02122ced-659c-46e8-84ad-a54a7db7347a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.345172 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02122ced-659c-46e8-84ad-a54a7db7347a-config-volume\") pod \"collect-profiles-29525535-5l55h\" (UID: \"02122ced-659c-46e8-84ad-a54a7db7347a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.345519 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02122ced-659c-46e8-84ad-a54a7db7347a-secret-volume\") pod \"collect-profiles-29525535-5l55h\" (UID: \"02122ced-659c-46e8-84ad-a54a7db7347a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.345763 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4775m\" (UniqueName: \"kubernetes.io/projected/02122ced-659c-46e8-84ad-a54a7db7347a-kube-api-access-4775m\") pod \"collect-profiles-29525535-5l55h\" (UID: \"02122ced-659c-46e8-84ad-a54a7db7347a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.346182 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02122ced-659c-46e8-84ad-a54a7db7347a-config-volume\") pod \"collect-profiles-29525535-5l55h\" (UID: \"02122ced-659c-46e8-84ad-a54a7db7347a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.351700 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/02122ced-659c-46e8-84ad-a54a7db7347a-secret-volume\") pod \"collect-profiles-29525535-5l55h\" (UID: \"02122ced-659c-46e8-84ad-a54a7db7347a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.360567 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4775m\" (UniqueName: \"kubernetes.io/projected/02122ced-659c-46e8-84ad-a54a7db7347a-kube-api-access-4775m\") pod \"collect-profiles-29525535-5l55h\" (UID: \"02122ced-659c-46e8-84ad-a54a7db7347a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.495485 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" Feb 19 20:15:00 crc kubenswrapper[4813]: I0219 20:15:00.995028 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h"] Feb 19 20:15:01 crc kubenswrapper[4813]: I0219 20:15:01.906233 4813 generic.go:334] "Generic (PLEG): container finished" podID="02122ced-659c-46e8-84ad-a54a7db7347a" containerID="f3105cf6ae63ca9a510583ce6c46a05427dbc48f4614720f3cdb3d85a1ae1034" exitCode=0 Feb 19 20:15:01 crc kubenswrapper[4813]: I0219 20:15:01.906307 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" event={"ID":"02122ced-659c-46e8-84ad-a54a7db7347a","Type":"ContainerDied","Data":"f3105cf6ae63ca9a510583ce6c46a05427dbc48f4614720f3cdb3d85a1ae1034"} Feb 19 20:15:01 crc kubenswrapper[4813]: I0219 20:15:01.906479 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" 
event={"ID":"02122ced-659c-46e8-84ad-a54a7db7347a","Type":"ContainerStarted","Data":"082c50e1fd5b631d395fa02cbeef7a659f17685f29584fbf1c9f5185ada51a44"} Feb 19 20:15:03 crc kubenswrapper[4813]: I0219 20:15:03.362910 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" Feb 19 20:15:03 crc kubenswrapper[4813]: I0219 20:15:03.416969 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02122ced-659c-46e8-84ad-a54a7db7347a-config-volume\") pod \"02122ced-659c-46e8-84ad-a54a7db7347a\" (UID: \"02122ced-659c-46e8-84ad-a54a7db7347a\") " Feb 19 20:15:03 crc kubenswrapper[4813]: I0219 20:15:03.417039 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02122ced-659c-46e8-84ad-a54a7db7347a-secret-volume\") pod \"02122ced-659c-46e8-84ad-a54a7db7347a\" (UID: \"02122ced-659c-46e8-84ad-a54a7db7347a\") " Feb 19 20:15:03 crc kubenswrapper[4813]: I0219 20:15:03.417179 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4775m\" (UniqueName: \"kubernetes.io/projected/02122ced-659c-46e8-84ad-a54a7db7347a-kube-api-access-4775m\") pod \"02122ced-659c-46e8-84ad-a54a7db7347a\" (UID: \"02122ced-659c-46e8-84ad-a54a7db7347a\") " Feb 19 20:15:03 crc kubenswrapper[4813]: I0219 20:15:03.421162 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02122ced-659c-46e8-84ad-a54a7db7347a-config-volume" (OuterVolumeSpecName: "config-volume") pod "02122ced-659c-46e8-84ad-a54a7db7347a" (UID: "02122ced-659c-46e8-84ad-a54a7db7347a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:15:03 crc kubenswrapper[4813]: I0219 20:15:03.425333 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02122ced-659c-46e8-84ad-a54a7db7347a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "02122ced-659c-46e8-84ad-a54a7db7347a" (UID: "02122ced-659c-46e8-84ad-a54a7db7347a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:15:03 crc kubenswrapper[4813]: I0219 20:15:03.425473 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02122ced-659c-46e8-84ad-a54a7db7347a-kube-api-access-4775m" (OuterVolumeSpecName: "kube-api-access-4775m") pod "02122ced-659c-46e8-84ad-a54a7db7347a" (UID: "02122ced-659c-46e8-84ad-a54a7db7347a"). InnerVolumeSpecName "kube-api-access-4775m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:15:03 crc kubenswrapper[4813]: I0219 20:15:03.524068 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4775m\" (UniqueName: \"kubernetes.io/projected/02122ced-659c-46e8-84ad-a54a7db7347a-kube-api-access-4775m\") on node \"crc\" DevicePath \"\"" Feb 19 20:15:03 crc kubenswrapper[4813]: I0219 20:15:03.524115 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02122ced-659c-46e8-84ad-a54a7db7347a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:15:03 crc kubenswrapper[4813]: I0219 20:15:03.524134 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02122ced-659c-46e8-84ad-a54a7db7347a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:15:03 crc kubenswrapper[4813]: I0219 20:15:03.929329 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" 
event={"ID":"02122ced-659c-46e8-84ad-a54a7db7347a","Type":"ContainerDied","Data":"082c50e1fd5b631d395fa02cbeef7a659f17685f29584fbf1c9f5185ada51a44"} Feb 19 20:15:03 crc kubenswrapper[4813]: I0219 20:15:03.929644 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="082c50e1fd5b631d395fa02cbeef7a659f17685f29584fbf1c9f5185ada51a44" Feb 19 20:15:03 crc kubenswrapper[4813]: I0219 20:15:03.929562 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h" Feb 19 20:15:04 crc kubenswrapper[4813]: I0219 20:15:04.449433 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs"] Feb 19 20:15:04 crc kubenswrapper[4813]: I0219 20:15:04.461437 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525490-4n9xs"] Feb 19 20:15:05 crc kubenswrapper[4813]: I0219 20:15:05.487069 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf0f104-f84a-4537-8205-9791f33f7be0" path="/var/lib/kubelet/pods/dbf0f104-f84a-4537-8205-9791f33f7be0/volumes" Feb 19 20:15:11 crc kubenswrapper[4813]: I0219 20:15:11.056623 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-pshkj"] Feb 19 20:15:11 crc kubenswrapper[4813]: I0219 20:15:11.066544 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-pshkj"] Feb 19 20:15:11 crc kubenswrapper[4813]: I0219 20:15:11.484209 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28dca374-6303-4bb8-b64b-a08a87a702cb" path="/var/lib/kubelet/pods/28dca374-6303-4bb8-b64b-a08a87a702cb/volumes" Feb 19 20:15:30 crc kubenswrapper[4813]: I0219 20:15:30.329470 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:15:30 crc kubenswrapper[4813]: I0219 20:15:30.329999 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:15:57 crc kubenswrapper[4813]: I0219 20:15:57.718036 4813 scope.go:117] "RemoveContainer" containerID="dc36ee8828ab0898b13efae2215462cd657982bedf308c7263eb1dd5322167ff" Feb 19 20:15:57 crc kubenswrapper[4813]: I0219 20:15:57.756679 4813 scope.go:117] "RemoveContainer" containerID="c0e01636cd151dc9aec63da82175597cbfbb90bb1ff9da7551709ecdcbb57fc1" Feb 19 20:15:57 crc kubenswrapper[4813]: I0219 20:15:57.813673 4813 scope.go:117] "RemoveContainer" containerID="0a2411818244f676d33ce81f243f7ebd9f52c06f70f0dbb05aebe98a47ee7e6d" Feb 19 20:16:00 crc kubenswrapper[4813]: I0219 20:16:00.329432 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:16:00 crc kubenswrapper[4813]: I0219 20:16:00.329761 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:16:30 crc kubenswrapper[4813]: I0219 20:16:30.329486 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:16:30 crc kubenswrapper[4813]: I0219 20:16:30.329942 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:16:30 crc kubenswrapper[4813]: I0219 20:16:30.330003 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 20:16:30 crc kubenswrapper[4813]: I0219 20:16:30.330798 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:16:30 crc kubenswrapper[4813]: I0219 20:16:30.330852 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" gracePeriod=600 Feb 19 20:16:30 crc kubenswrapper[4813]: E0219 20:16:30.468265 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:16:30 crc kubenswrapper[4813]: I0219 20:16:30.868350 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" exitCode=0 Feb 19 20:16:30 crc kubenswrapper[4813]: I0219 20:16:30.868388 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5"} Feb 19 20:16:30 crc kubenswrapper[4813]: I0219 20:16:30.868422 4813 scope.go:117] "RemoveContainer" containerID="b0d24274d72b671d1bb0829fa5b282a7b445e76453ddbdce75e0526585757520" Feb 19 20:16:30 crc kubenswrapper[4813]: I0219 20:16:30.869402 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:16:30 crc kubenswrapper[4813]: E0219 20:16:30.869825 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:16:43 crc kubenswrapper[4813]: I0219 20:16:43.472161 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:16:43 crc kubenswrapper[4813]: E0219 20:16:43.473549 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:16:57 crc kubenswrapper[4813]: I0219 20:16:57.472304 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:16:57 crc kubenswrapper[4813]: E0219 20:16:57.473549 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:17:12 crc kubenswrapper[4813]: I0219 20:17:12.471673 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:17:12 crc kubenswrapper[4813]: E0219 20:17:12.472855 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:17:23 crc kubenswrapper[4813]: I0219 20:17:23.472779 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:17:23 crc kubenswrapper[4813]: E0219 20:17:23.473835 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:17:34 crc kubenswrapper[4813]: I0219 20:17:34.472304 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:17:34 crc kubenswrapper[4813]: E0219 20:17:34.473068 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:17:49 crc kubenswrapper[4813]: I0219 20:17:49.473073 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:17:49 crc kubenswrapper[4813]: E0219 20:17:49.474460 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:17:52 crc kubenswrapper[4813]: I0219 20:17:52.072736 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-tpm5r"] Feb 19 20:17:52 crc kubenswrapper[4813]: I0219 20:17:52.088581 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-tpm5r"] Feb 19 20:17:52 crc kubenswrapper[4813]: I0219 20:17:52.100518 4813 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-d489-account-create-update-kc5hq"] Feb 19 20:17:52 crc kubenswrapper[4813]: I0219 20:17:52.113538 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-d489-account-create-update-kc5hq"] Feb 19 20:17:53 crc kubenswrapper[4813]: I0219 20:17:53.494137 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bcf0999-1a5d-40c1-905e-5e8bbc6c8123" path="/var/lib/kubelet/pods/4bcf0999-1a5d-40c1-905e-5e8bbc6c8123/volumes" Feb 19 20:17:53 crc kubenswrapper[4813]: I0219 20:17:53.496389 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c36176e6-298e-4037-b91f-1eb7eeb7092f" path="/var/lib/kubelet/pods/c36176e6-298e-4037-b91f-1eb7eeb7092f/volumes" Feb 19 20:17:57 crc kubenswrapper[4813]: I0219 20:17:57.953723 4813 scope.go:117] "RemoveContainer" containerID="e6de1d70482cf3b858196e59b3f7b0241edc9d96b1a12981463e418be224225e" Feb 19 20:17:57 crc kubenswrapper[4813]: I0219 20:17:57.977499 4813 scope.go:117] "RemoveContainer" containerID="43926e7bc624b01a0800b4ea47da8f86658ac864e4df90efdaa78ea4764f2b57" Feb 19 20:18:00 crc kubenswrapper[4813]: I0219 20:18:00.472108 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:18:00 crc kubenswrapper[4813]: E0219 20:18:00.473652 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:18:06 crc kubenswrapper[4813]: I0219 20:18:06.030473 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-bf2dh"] Feb 19 20:18:06 crc 
kubenswrapper[4813]: I0219 20:18:06.042084 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-bf2dh"] Feb 19 20:18:07 crc kubenswrapper[4813]: I0219 20:18:07.485765 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c40ee40-5f00-4771-96d4-90e70b4c2580" path="/var/lib/kubelet/pods/0c40ee40-5f00-4771-96d4-90e70b4c2580/volumes" Feb 19 20:18:14 crc kubenswrapper[4813]: I0219 20:18:14.471573 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:18:14 crc kubenswrapper[4813]: E0219 20:18:14.472569 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:18:27 crc kubenswrapper[4813]: I0219 20:18:27.472725 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:18:27 crc kubenswrapper[4813]: E0219 20:18:27.473766 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:18:42 crc kubenswrapper[4813]: I0219 20:18:42.471362 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:18:42 crc kubenswrapper[4813]: E0219 20:18:42.472143 4813 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:18:53 crc kubenswrapper[4813]: I0219 20:18:53.472132 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:18:53 crc kubenswrapper[4813]: E0219 20:18:53.472967 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:18:58 crc kubenswrapper[4813]: I0219 20:18:58.132025 4813 scope.go:117] "RemoveContainer" containerID="29e3cc9eb7cb8cc95d3e9e5a1de8380392abda408af89c06d742e5694b6e020e" Feb 19 20:19:07 crc kubenswrapper[4813]: I0219 20:19:07.472058 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:19:07 crc kubenswrapper[4813]: E0219 20:19:07.473285 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:19:22 crc kubenswrapper[4813]: I0219 20:19:22.472166 4813 
scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:19:22 crc kubenswrapper[4813]: E0219 20:19:22.473074 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:19:34 crc kubenswrapper[4813]: I0219 20:19:34.472645 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:19:34 crc kubenswrapper[4813]: E0219 20:19:34.473510 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:19:45 crc kubenswrapper[4813]: I0219 20:19:45.478827 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:19:45 crc kubenswrapper[4813]: E0219 20:19:45.479504 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:19:56 crc kubenswrapper[4813]: I0219 
20:19:56.473169 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:19:56 crc kubenswrapper[4813]: E0219 20:19:56.473812 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:20:11 crc kubenswrapper[4813]: I0219 20:20:11.481238 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:20:11 crc kubenswrapper[4813]: E0219 20:20:11.482763 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:20:25 crc kubenswrapper[4813]: I0219 20:20:25.473130 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:20:25 crc kubenswrapper[4813]: E0219 20:20:25.474728 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:20:29 crc 
kubenswrapper[4813]: I0219 20:20:29.075118 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-jpjbw"] Feb 19 20:20:29 crc kubenswrapper[4813]: I0219 20:20:29.090687 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-abf8-account-create-update-7d97w"] Feb 19 20:20:29 crc kubenswrapper[4813]: I0219 20:20:29.102966 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-jpjbw"] Feb 19 20:20:29 crc kubenswrapper[4813]: I0219 20:20:29.116261 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-abf8-account-create-update-7d97w"] Feb 19 20:20:29 crc kubenswrapper[4813]: I0219 20:20:29.496410 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78d21291-624b-49a4-a8ed-5b170b065fee" path="/var/lib/kubelet/pods/78d21291-624b-49a4-a8ed-5b170b065fee/volumes" Feb 19 20:20:29 crc kubenswrapper[4813]: I0219 20:20:29.497708 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac343c30-c8b9-4248-8cd4-3e23e2ef42cc" path="/var/lib/kubelet/pods/ac343c30-c8b9-4248-8cd4-3e23e2ef42cc/volumes" Feb 19 20:20:37 crc kubenswrapper[4813]: I0219 20:20:37.472453 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:20:37 crc kubenswrapper[4813]: E0219 20:20:37.475158 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:20:41 crc kubenswrapper[4813]: I0219 20:20:41.061388 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-k2mns"] Feb 19 20:20:41 crc 
kubenswrapper[4813]: I0219 20:20:41.078380 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-k2mns"] Feb 19 20:20:41 crc kubenswrapper[4813]: I0219 20:20:41.509228 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="576b8eec-382c-47e9-b746-10c527fb9e85" path="/var/lib/kubelet/pods/576b8eec-382c-47e9-b746-10c527fb9e85/volumes" Feb 19 20:20:51 crc kubenswrapper[4813]: I0219 20:20:51.487772 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:20:51 crc kubenswrapper[4813]: E0219 20:20:51.489352 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:20:58 crc kubenswrapper[4813]: I0219 20:20:58.262601 4813 scope.go:117] "RemoveContainer" containerID="f30234f15a207e94e0ea1c9c0bff11e6aad363a88944d29e6eda4a36546d353d" Feb 19 20:20:58 crc kubenswrapper[4813]: I0219 20:20:58.300759 4813 scope.go:117] "RemoveContainer" containerID="d3ec7383d15ad5c45513d4da634586cb70b8dc2e5cf5f3f81784371d1a6bdf49" Feb 19 20:20:58 crc kubenswrapper[4813]: I0219 20:20:58.376190 4813 scope.go:117] "RemoveContainer" containerID="6de5575d7b79d989e8f4a0ea5a0f4b01d549a50b2dfc6fe16d9b697f0d483e80" Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.051040 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-c60d-account-create-update-p7csk"] Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.066165 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-c60d-account-create-update-p7csk"] Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 
20:21:02.657872 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gmj44"] Feb 19 20:21:02 crc kubenswrapper[4813]: E0219 20:21:02.658544 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02122ced-659c-46e8-84ad-a54a7db7347a" containerName="collect-profiles" Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.658916 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="02122ced-659c-46e8-84ad-a54a7db7347a" containerName="collect-profiles" Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.659283 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="02122ced-659c-46e8-84ad-a54a7db7347a" containerName="collect-profiles" Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.661670 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.676924 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmj44"] Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.821615 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-utilities\") pod \"community-operators-gmj44\" (UID: \"19f9f0b6-13fe-46f8-8668-f348fbae2ba4\") " pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.821916 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp46p\" (UniqueName: \"kubernetes.io/projected/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-kube-api-access-tp46p\") pod \"community-operators-gmj44\" (UID: \"19f9f0b6-13fe-46f8-8668-f348fbae2ba4\") " pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.822097 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-catalog-content\") pod \"community-operators-gmj44\" (UID: \"19f9f0b6-13fe-46f8-8668-f348fbae2ba4\") " pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.924608 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp46p\" (UniqueName: \"kubernetes.io/projected/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-kube-api-access-tp46p\") pod \"community-operators-gmj44\" (UID: \"19f9f0b6-13fe-46f8-8668-f348fbae2ba4\") " pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.924727 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-catalog-content\") pod \"community-operators-gmj44\" (UID: \"19f9f0b6-13fe-46f8-8668-f348fbae2ba4\") " pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.924884 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-utilities\") pod \"community-operators-gmj44\" (UID: \"19f9f0b6-13fe-46f8-8668-f348fbae2ba4\") " pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.925468 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-catalog-content\") pod \"community-operators-gmj44\" (UID: \"19f9f0b6-13fe-46f8-8668-f348fbae2ba4\") " pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.925506 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-utilities\") pod \"community-operators-gmj44\" (UID: \"19f9f0b6-13fe-46f8-8668-f348fbae2ba4\") " pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.943829 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp46p\" (UniqueName: \"kubernetes.io/projected/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-kube-api-access-tp46p\") pod \"community-operators-gmj44\" (UID: \"19f9f0b6-13fe-46f8-8668-f348fbae2ba4\") " pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:02 crc kubenswrapper[4813]: I0219 20:21:02.988073 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:03 crc kubenswrapper[4813]: I0219 20:21:03.052910 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-ts7jb"] Feb 19 20:21:03 crc kubenswrapper[4813]: I0219 20:21:03.070520 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-ts7jb"] Feb 19 20:21:03 crc kubenswrapper[4813]: I0219 20:21:03.486317 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b8e3a6-c6e6-4b33-9dba-1b43041daf22" path="/var/lib/kubelet/pods/e3b8e3a6-c6e6-4b33-9dba-1b43041daf22/volumes" Feb 19 20:21:03 crc kubenswrapper[4813]: I0219 20:21:03.487208 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed08568f-fdab-4315-a5b7-413e88d4edac" path="/var/lib/kubelet/pods/ed08568f-fdab-4315-a5b7-413e88d4edac/volumes" Feb 19 20:21:03 crc kubenswrapper[4813]: I0219 20:21:03.530198 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmj44"] Feb 19 20:21:04 crc kubenswrapper[4813]: I0219 20:21:04.060418 4813 generic.go:334] "Generic (PLEG): 
container finished" podID="19f9f0b6-13fe-46f8-8668-f348fbae2ba4" containerID="e00b1806cbf098d2a3227b57d93532763097d5f09debf6cf09801969b3072098" exitCode=0 Feb 19 20:21:04 crc kubenswrapper[4813]: I0219 20:21:04.060484 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj44" event={"ID":"19f9f0b6-13fe-46f8-8668-f348fbae2ba4","Type":"ContainerDied","Data":"e00b1806cbf098d2a3227b57d93532763097d5f09debf6cf09801969b3072098"} Feb 19 20:21:04 crc kubenswrapper[4813]: I0219 20:21:04.061052 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj44" event={"ID":"19f9f0b6-13fe-46f8-8668-f348fbae2ba4","Type":"ContainerStarted","Data":"23e9d4fc9dab69edf50de8e6f409fd44a4d32972cb9cfb682af90df396821d78"} Feb 19 20:21:04 crc kubenswrapper[4813]: I0219 20:21:04.063832 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:21:04 crc kubenswrapper[4813]: I0219 20:21:04.472269 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:21:04 crc kubenswrapper[4813]: E0219 20:21:04.472856 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:21:06 crc kubenswrapper[4813]: I0219 20:21:06.086126 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj44" event={"ID":"19f9f0b6-13fe-46f8-8668-f348fbae2ba4","Type":"ContainerStarted","Data":"e9e5072b59682ef84c7e0cc8a2e2e35a9c76ba8402832665a9bbf467b634a63f"} Feb 19 20:21:07 crc 
kubenswrapper[4813]: I0219 20:21:07.103088 4813 generic.go:334] "Generic (PLEG): container finished" podID="19f9f0b6-13fe-46f8-8668-f348fbae2ba4" containerID="e9e5072b59682ef84c7e0cc8a2e2e35a9c76ba8402832665a9bbf467b634a63f" exitCode=0 Feb 19 20:21:07 crc kubenswrapper[4813]: I0219 20:21:07.103191 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj44" event={"ID":"19f9f0b6-13fe-46f8-8668-f348fbae2ba4","Type":"ContainerDied","Data":"e9e5072b59682ef84c7e0cc8a2e2e35a9c76ba8402832665a9bbf467b634a63f"} Feb 19 20:21:08 crc kubenswrapper[4813]: I0219 20:21:08.123048 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj44" event={"ID":"19f9f0b6-13fe-46f8-8668-f348fbae2ba4","Type":"ContainerStarted","Data":"33d29d5c0c7c40bda4d1c45c7035b5249dcea2537fa1a6b88adeb3a1116e91e5"} Feb 19 20:21:08 crc kubenswrapper[4813]: I0219 20:21:08.159715 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gmj44" podStartSLOduration=2.7343738269999998 podStartE2EDuration="6.159699018s" podCreationTimestamp="2026-02-19 20:21:02 +0000 UTC" firstStartedPulling="2026-02-19 20:21:04.063592619 +0000 UTC m=+6683.289033160" lastFinishedPulling="2026-02-19 20:21:07.48891781 +0000 UTC m=+6686.714358351" observedRunningTime="2026-02-19 20:21:08.153806866 +0000 UTC m=+6687.379247437" watchObservedRunningTime="2026-02-19 20:21:08.159699018 +0000 UTC m=+6687.385139549" Feb 19 20:21:12 crc kubenswrapper[4813]: I0219 20:21:12.988543 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:12 crc kubenswrapper[4813]: I0219 20:21:12.989280 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:13 crc kubenswrapper[4813]: I0219 20:21:13.055470 4813 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/manila-db-sync-kb2ww"] Feb 19 20:21:13 crc kubenswrapper[4813]: I0219 20:21:13.059607 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:13 crc kubenswrapper[4813]: I0219 20:21:13.075127 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-kb2ww"] Feb 19 20:21:13 crc kubenswrapper[4813]: I0219 20:21:13.237317 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:13 crc kubenswrapper[4813]: I0219 20:21:13.297359 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmj44"] Feb 19 20:21:13 crc kubenswrapper[4813]: I0219 20:21:13.493088 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="229a6f2b-587e-4008-a4d7-9f0d26e3e446" path="/var/lib/kubelet/pods/229a6f2b-587e-4008-a4d7-9f0d26e3e446/volumes" Feb 19 20:21:15 crc kubenswrapper[4813]: I0219 20:21:15.213926 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gmj44" podUID="19f9f0b6-13fe-46f8-8668-f348fbae2ba4" containerName="registry-server" containerID="cri-o://33d29d5c0c7c40bda4d1c45c7035b5249dcea2537fa1a6b88adeb3a1116e91e5" gracePeriod=2 Feb 19 20:21:15 crc kubenswrapper[4813]: I0219 20:21:15.711704 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:15 crc kubenswrapper[4813]: I0219 20:21:15.726314 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp46p\" (UniqueName: \"kubernetes.io/projected/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-kube-api-access-tp46p\") pod \"19f9f0b6-13fe-46f8-8668-f348fbae2ba4\" (UID: \"19f9f0b6-13fe-46f8-8668-f348fbae2ba4\") " Feb 19 20:21:15 crc kubenswrapper[4813]: I0219 20:21:15.726475 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-utilities\") pod \"19f9f0b6-13fe-46f8-8668-f348fbae2ba4\" (UID: \"19f9f0b6-13fe-46f8-8668-f348fbae2ba4\") " Feb 19 20:21:15 crc kubenswrapper[4813]: I0219 20:21:15.726540 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-catalog-content\") pod \"19f9f0b6-13fe-46f8-8668-f348fbae2ba4\" (UID: \"19f9f0b6-13fe-46f8-8668-f348fbae2ba4\") " Feb 19 20:21:15 crc kubenswrapper[4813]: I0219 20:21:15.727334 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-utilities" (OuterVolumeSpecName: "utilities") pod "19f9f0b6-13fe-46f8-8668-f348fbae2ba4" (UID: "19f9f0b6-13fe-46f8-8668-f348fbae2ba4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:21:15 crc kubenswrapper[4813]: I0219 20:21:15.734000 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-kube-api-access-tp46p" (OuterVolumeSpecName: "kube-api-access-tp46p") pod "19f9f0b6-13fe-46f8-8668-f348fbae2ba4" (UID: "19f9f0b6-13fe-46f8-8668-f348fbae2ba4"). InnerVolumeSpecName "kube-api-access-tp46p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:21:15 crc kubenswrapper[4813]: I0219 20:21:15.807507 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19f9f0b6-13fe-46f8-8668-f348fbae2ba4" (UID: "19f9f0b6-13fe-46f8-8668-f348fbae2ba4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:21:15 crc kubenswrapper[4813]: I0219 20:21:15.828200 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:21:15 crc kubenswrapper[4813]: I0219 20:21:15.828230 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:21:15 crc kubenswrapper[4813]: I0219 20:21:15.828240 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp46p\" (UniqueName: \"kubernetes.io/projected/19f9f0b6-13fe-46f8-8668-f348fbae2ba4-kube-api-access-tp46p\") on node \"crc\" DevicePath \"\"" Feb 19 20:21:16 crc kubenswrapper[4813]: I0219 20:21:16.228136 4813 generic.go:334] "Generic (PLEG): container finished" podID="19f9f0b6-13fe-46f8-8668-f348fbae2ba4" containerID="33d29d5c0c7c40bda4d1c45c7035b5249dcea2537fa1a6b88adeb3a1116e91e5" exitCode=0 Feb 19 20:21:16 crc kubenswrapper[4813]: I0219 20:21:16.228226 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmj44" Feb 19 20:21:16 crc kubenswrapper[4813]: I0219 20:21:16.228238 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj44" event={"ID":"19f9f0b6-13fe-46f8-8668-f348fbae2ba4","Type":"ContainerDied","Data":"33d29d5c0c7c40bda4d1c45c7035b5249dcea2537fa1a6b88adeb3a1116e91e5"} Feb 19 20:21:16 crc kubenswrapper[4813]: I0219 20:21:16.228563 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj44" event={"ID":"19f9f0b6-13fe-46f8-8668-f348fbae2ba4","Type":"ContainerDied","Data":"23e9d4fc9dab69edf50de8e6f409fd44a4d32972cb9cfb682af90df396821d78"} Feb 19 20:21:16 crc kubenswrapper[4813]: I0219 20:21:16.228586 4813 scope.go:117] "RemoveContainer" containerID="33d29d5c0c7c40bda4d1c45c7035b5249dcea2537fa1a6b88adeb3a1116e91e5" Feb 19 20:21:16 crc kubenswrapper[4813]: I0219 20:21:16.265018 4813 scope.go:117] "RemoveContainer" containerID="e9e5072b59682ef84c7e0cc8a2e2e35a9c76ba8402832665a9bbf467b634a63f" Feb 19 20:21:16 crc kubenswrapper[4813]: I0219 20:21:16.270001 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmj44"] Feb 19 20:21:16 crc kubenswrapper[4813]: I0219 20:21:16.287920 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gmj44"] Feb 19 20:21:16 crc kubenswrapper[4813]: I0219 20:21:16.294448 4813 scope.go:117] "RemoveContainer" containerID="e00b1806cbf098d2a3227b57d93532763097d5f09debf6cf09801969b3072098" Feb 19 20:21:16 crc kubenswrapper[4813]: I0219 20:21:16.332581 4813 scope.go:117] "RemoveContainer" containerID="33d29d5c0c7c40bda4d1c45c7035b5249dcea2537fa1a6b88adeb3a1116e91e5" Feb 19 20:21:16 crc kubenswrapper[4813]: E0219 20:21:16.333209 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"33d29d5c0c7c40bda4d1c45c7035b5249dcea2537fa1a6b88adeb3a1116e91e5\": container with ID starting with 33d29d5c0c7c40bda4d1c45c7035b5249dcea2537fa1a6b88adeb3a1116e91e5 not found: ID does not exist" containerID="33d29d5c0c7c40bda4d1c45c7035b5249dcea2537fa1a6b88adeb3a1116e91e5" Feb 19 20:21:16 crc kubenswrapper[4813]: I0219 20:21:16.333257 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d29d5c0c7c40bda4d1c45c7035b5249dcea2537fa1a6b88adeb3a1116e91e5"} err="failed to get container status \"33d29d5c0c7c40bda4d1c45c7035b5249dcea2537fa1a6b88adeb3a1116e91e5\": rpc error: code = NotFound desc = could not find container \"33d29d5c0c7c40bda4d1c45c7035b5249dcea2537fa1a6b88adeb3a1116e91e5\": container with ID starting with 33d29d5c0c7c40bda4d1c45c7035b5249dcea2537fa1a6b88adeb3a1116e91e5 not found: ID does not exist" Feb 19 20:21:16 crc kubenswrapper[4813]: I0219 20:21:16.333290 4813 scope.go:117] "RemoveContainer" containerID="e9e5072b59682ef84c7e0cc8a2e2e35a9c76ba8402832665a9bbf467b634a63f" Feb 19 20:21:16 crc kubenswrapper[4813]: E0219 20:21:16.333674 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9e5072b59682ef84c7e0cc8a2e2e35a9c76ba8402832665a9bbf467b634a63f\": container with ID starting with e9e5072b59682ef84c7e0cc8a2e2e35a9c76ba8402832665a9bbf467b634a63f not found: ID does not exist" containerID="e9e5072b59682ef84c7e0cc8a2e2e35a9c76ba8402832665a9bbf467b634a63f" Feb 19 20:21:16 crc kubenswrapper[4813]: I0219 20:21:16.333716 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9e5072b59682ef84c7e0cc8a2e2e35a9c76ba8402832665a9bbf467b634a63f"} err="failed to get container status \"e9e5072b59682ef84c7e0cc8a2e2e35a9c76ba8402832665a9bbf467b634a63f\": rpc error: code = NotFound desc = could not find container \"e9e5072b59682ef84c7e0cc8a2e2e35a9c76ba8402832665a9bbf467b634a63f\": container with ID 
starting with e9e5072b59682ef84c7e0cc8a2e2e35a9c76ba8402832665a9bbf467b634a63f not found: ID does not exist" Feb 19 20:21:16 crc kubenswrapper[4813]: I0219 20:21:16.333768 4813 scope.go:117] "RemoveContainer" containerID="e00b1806cbf098d2a3227b57d93532763097d5f09debf6cf09801969b3072098" Feb 19 20:21:16 crc kubenswrapper[4813]: E0219 20:21:16.334143 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00b1806cbf098d2a3227b57d93532763097d5f09debf6cf09801969b3072098\": container with ID starting with e00b1806cbf098d2a3227b57d93532763097d5f09debf6cf09801969b3072098 not found: ID does not exist" containerID="e00b1806cbf098d2a3227b57d93532763097d5f09debf6cf09801969b3072098" Feb 19 20:21:16 crc kubenswrapper[4813]: I0219 20:21:16.334221 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00b1806cbf098d2a3227b57d93532763097d5f09debf6cf09801969b3072098"} err="failed to get container status \"e00b1806cbf098d2a3227b57d93532763097d5f09debf6cf09801969b3072098\": rpc error: code = NotFound desc = could not find container \"e00b1806cbf098d2a3227b57d93532763097d5f09debf6cf09801969b3072098\": container with ID starting with e00b1806cbf098d2a3227b57d93532763097d5f09debf6cf09801969b3072098 not found: ID does not exist" Feb 19 20:21:17 crc kubenswrapper[4813]: I0219 20:21:17.487685 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19f9f0b6-13fe-46f8-8668-f348fbae2ba4" path="/var/lib/kubelet/pods/19f9f0b6-13fe-46f8-8668-f348fbae2ba4/volumes" Feb 19 20:21:19 crc kubenswrapper[4813]: I0219 20:21:19.472992 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:21:19 crc kubenswrapper[4813]: E0219 20:21:19.473554 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:21:34 crc kubenswrapper[4813]: I0219 20:21:34.471574 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:21:35 crc kubenswrapper[4813]: I0219 20:21:35.436880 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"1a4be654309edb8bce6852630e3554100052394a7d7dafc5d56a5281e8d865c5"} Feb 19 20:21:58 crc kubenswrapper[4813]: I0219 20:21:58.491041 4813 scope.go:117] "RemoveContainer" containerID="bf3389dd71739f5143f3a982a66a234f41741e9e9864c206e57c1b014c9eb0cf" Feb 19 20:21:58 crc kubenswrapper[4813]: I0219 20:21:58.531523 4813 scope.go:117] "RemoveContainer" containerID="012e97a9028f8779020c9cb805dc8fdd1493c440e2c302305b73f4393722bde4" Feb 19 20:21:58 crc kubenswrapper[4813]: I0219 20:21:58.594469 4813 scope.go:117] "RemoveContainer" containerID="7f4d8edd4b142cfb5985b5a0e7535978f47995e545005d5de7500b0ba6ec4c6b" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.177699 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nj5jg"] Feb 19 20:23:52 crc kubenswrapper[4813]: E0219 20:23:52.178580 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19f9f0b6-13fe-46f8-8668-f348fbae2ba4" containerName="registry-server" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.178593 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f9f0b6-13fe-46f8-8668-f348fbae2ba4" containerName="registry-server" Feb 19 20:23:52 crc kubenswrapper[4813]: E0219 20:23:52.178605 4813 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="19f9f0b6-13fe-46f8-8668-f348fbae2ba4" containerName="extract-content" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.178611 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f9f0b6-13fe-46f8-8668-f348fbae2ba4" containerName="extract-content" Feb 19 20:23:52 crc kubenswrapper[4813]: E0219 20:23:52.178646 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19f9f0b6-13fe-46f8-8668-f348fbae2ba4" containerName="extract-utilities" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.178653 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f9f0b6-13fe-46f8-8668-f348fbae2ba4" containerName="extract-utilities" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.178846 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="19f9f0b6-13fe-46f8-8668-f348fbae2ba4" containerName="registry-server" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.180301 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.190671 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nj5jg"] Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.251394 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5901ba13-5df6-484d-8973-ecefbacfdf96-catalog-content\") pod \"redhat-operators-nj5jg\" (UID: \"5901ba13-5df6-484d-8973-ecefbacfdf96\") " pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.251453 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5901ba13-5df6-484d-8973-ecefbacfdf96-utilities\") pod \"redhat-operators-nj5jg\" (UID: \"5901ba13-5df6-484d-8973-ecefbacfdf96\") " pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.251501 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bwgv\" (UniqueName: \"kubernetes.io/projected/5901ba13-5df6-484d-8973-ecefbacfdf96-kube-api-access-9bwgv\") pod \"redhat-operators-nj5jg\" (UID: \"5901ba13-5df6-484d-8973-ecefbacfdf96\") " pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.353467 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5901ba13-5df6-484d-8973-ecefbacfdf96-catalog-content\") pod \"redhat-operators-nj5jg\" (UID: \"5901ba13-5df6-484d-8973-ecefbacfdf96\") " pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.353537 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5901ba13-5df6-484d-8973-ecefbacfdf96-utilities\") pod \"redhat-operators-nj5jg\" (UID: \"5901ba13-5df6-484d-8973-ecefbacfdf96\") " pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.353589 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bwgv\" (UniqueName: \"kubernetes.io/projected/5901ba13-5df6-484d-8973-ecefbacfdf96-kube-api-access-9bwgv\") pod \"redhat-operators-nj5jg\" (UID: \"5901ba13-5df6-484d-8973-ecefbacfdf96\") " pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.354467 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5901ba13-5df6-484d-8973-ecefbacfdf96-catalog-content\") pod \"redhat-operators-nj5jg\" (UID: \"5901ba13-5df6-484d-8973-ecefbacfdf96\") " pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.354699 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5901ba13-5df6-484d-8973-ecefbacfdf96-utilities\") pod \"redhat-operators-nj5jg\" (UID: \"5901ba13-5df6-484d-8973-ecefbacfdf96\") " pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.373599 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bwgv\" (UniqueName: \"kubernetes.io/projected/5901ba13-5df6-484d-8973-ecefbacfdf96-kube-api-access-9bwgv\") pod \"redhat-operators-nj5jg\" (UID: \"5901ba13-5df6-484d-8973-ecefbacfdf96\") " pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.501680 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:23:52 crc kubenswrapper[4813]: I0219 20:23:52.957564 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nj5jg"] Feb 19 20:23:53 crc kubenswrapper[4813]: I0219 20:23:53.964327 4813 generic.go:334] "Generic (PLEG): container finished" podID="5901ba13-5df6-484d-8973-ecefbacfdf96" containerID="773a8fd9360d468060ffce4628d0a520bf48d7d1c54375530ff351393d3c570a" exitCode=0 Feb 19 20:23:53 crc kubenswrapper[4813]: I0219 20:23:53.964371 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nj5jg" event={"ID":"5901ba13-5df6-484d-8973-ecefbacfdf96","Type":"ContainerDied","Data":"773a8fd9360d468060ffce4628d0a520bf48d7d1c54375530ff351393d3c570a"} Feb 19 20:23:53 crc kubenswrapper[4813]: I0219 20:23:53.965547 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nj5jg" event={"ID":"5901ba13-5df6-484d-8973-ecefbacfdf96","Type":"ContainerStarted","Data":"6eb1025abbdc14c8f14df51baf7f434ef467faf4ef331f0e14e6aae337d20fd6"} Feb 19 20:23:54 crc kubenswrapper[4813]: I0219 20:23:54.974557 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nj5jg" event={"ID":"5901ba13-5df6-484d-8973-ecefbacfdf96","Type":"ContainerStarted","Data":"1f41481004f796298a2c1993a2f9aca1130f33b565e044e031ce8b1f2555700a"} Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.022048 4813 generic.go:334] "Generic (PLEG): container finished" podID="5901ba13-5df6-484d-8973-ecefbacfdf96" containerID="1f41481004f796298a2c1993a2f9aca1130f33b565e044e031ce8b1f2555700a" exitCode=0 Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.022190 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nj5jg" 
event={"ID":"5901ba13-5df6-484d-8973-ecefbacfdf96","Type":"ContainerDied","Data":"1f41481004f796298a2c1993a2f9aca1130f33b565e044e031ce8b1f2555700a"} Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.266637 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x592b"] Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.268985 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.279554 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x592b"] Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.329353 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.329409 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.424776 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5945b7-5be3-46f3-ad89-63f247099a5d-utilities\") pod \"redhat-marketplace-x592b\" (UID: \"cc5945b7-5be3-46f3-ad89-63f247099a5d\") " pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.424892 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5945b7-5be3-46f3-ad89-63f247099a5d-catalog-content\") pod \"redhat-marketplace-x592b\" (UID: \"cc5945b7-5be3-46f3-ad89-63f247099a5d\") " pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.425239 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s25vw\" (UniqueName: \"kubernetes.io/projected/cc5945b7-5be3-46f3-ad89-63f247099a5d-kube-api-access-s25vw\") pod \"redhat-marketplace-x592b\" (UID: \"cc5945b7-5be3-46f3-ad89-63f247099a5d\") " pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.527785 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5945b7-5be3-46f3-ad89-63f247099a5d-utilities\") pod \"redhat-marketplace-x592b\" (UID: \"cc5945b7-5be3-46f3-ad89-63f247099a5d\") " pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.528271 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5945b7-5be3-46f3-ad89-63f247099a5d-utilities\") pod \"redhat-marketplace-x592b\" (UID: \"cc5945b7-5be3-46f3-ad89-63f247099a5d\") " pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.528318 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5945b7-5be3-46f3-ad89-63f247099a5d-catalog-content\") pod \"redhat-marketplace-x592b\" (UID: \"cc5945b7-5be3-46f3-ad89-63f247099a5d\") " pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.528505 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s25vw\" 
(UniqueName: \"kubernetes.io/projected/cc5945b7-5be3-46f3-ad89-63f247099a5d-kube-api-access-s25vw\") pod \"redhat-marketplace-x592b\" (UID: \"cc5945b7-5be3-46f3-ad89-63f247099a5d\") " pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.528586 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5945b7-5be3-46f3-ad89-63f247099a5d-catalog-content\") pod \"redhat-marketplace-x592b\" (UID: \"cc5945b7-5be3-46f3-ad89-63f247099a5d\") " pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.554045 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s25vw\" (UniqueName: \"kubernetes.io/projected/cc5945b7-5be3-46f3-ad89-63f247099a5d-kube-api-access-s25vw\") pod \"redhat-marketplace-x592b\" (UID: \"cc5945b7-5be3-46f3-ad89-63f247099a5d\") " pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:00 crc kubenswrapper[4813]: I0219 20:24:00.598718 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:01 crc kubenswrapper[4813]: I0219 20:24:01.033496 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nj5jg" event={"ID":"5901ba13-5df6-484d-8973-ecefbacfdf96","Type":"ContainerStarted","Data":"f80141018211d252399d4e17b37bd6ab4238ed1ef6c6b7892adb2d00306c6555"} Feb 19 20:24:01 crc kubenswrapper[4813]: I0219 20:24:01.055258 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nj5jg" podStartSLOduration=2.627778152 podStartE2EDuration="9.055241504s" podCreationTimestamp="2026-02-19 20:23:52 +0000 UTC" firstStartedPulling="2026-02-19 20:23:53.966315382 +0000 UTC m=+6853.191755913" lastFinishedPulling="2026-02-19 20:24:00.393778704 +0000 UTC m=+6859.619219265" observedRunningTime="2026-02-19 20:24:01.050676993 +0000 UTC m=+6860.276117534" watchObservedRunningTime="2026-02-19 20:24:01.055241504 +0000 UTC m=+6860.280682045" Feb 19 20:24:01 crc kubenswrapper[4813]: I0219 20:24:01.086675 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x592b"] Feb 19 20:24:02 crc kubenswrapper[4813]: I0219 20:24:02.045460 4813 generic.go:334] "Generic (PLEG): container finished" podID="cc5945b7-5be3-46f3-ad89-63f247099a5d" containerID="e9f173f28bff879698cc79652fb723951af6e2669db582b394c3a9b4a7371de7" exitCode=0 Feb 19 20:24:02 crc kubenswrapper[4813]: I0219 20:24:02.045552 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x592b" event={"ID":"cc5945b7-5be3-46f3-ad89-63f247099a5d","Type":"ContainerDied","Data":"e9f173f28bff879698cc79652fb723951af6e2669db582b394c3a9b4a7371de7"} Feb 19 20:24:02 crc kubenswrapper[4813]: I0219 20:24:02.046242 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x592b" 
event={"ID":"cc5945b7-5be3-46f3-ad89-63f247099a5d","Type":"ContainerStarted","Data":"43ab1a0dcbeb4c1891a5d25ed71e5eeec82fc3fc6a4228f4ab168c25c582410e"} Feb 19 20:24:02 crc kubenswrapper[4813]: I0219 20:24:02.502306 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:24:02 crc kubenswrapper[4813]: I0219 20:24:02.502354 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:24:03 crc kubenswrapper[4813]: I0219 20:24:03.056809 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x592b" event={"ID":"cc5945b7-5be3-46f3-ad89-63f247099a5d","Type":"ContainerStarted","Data":"27f688c7e6c6e67a55177c4757e47b7afbf643d571acdc981dec0ee257cb0b26"} Feb 19 20:24:03 crc kubenswrapper[4813]: I0219 20:24:03.551593 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nj5jg" podUID="5901ba13-5df6-484d-8973-ecefbacfdf96" containerName="registry-server" probeResult="failure" output=< Feb 19 20:24:03 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Feb 19 20:24:03 crc kubenswrapper[4813]: > Feb 19 20:24:04 crc kubenswrapper[4813]: I0219 20:24:04.066079 4813 generic.go:334] "Generic (PLEG): container finished" podID="cc5945b7-5be3-46f3-ad89-63f247099a5d" containerID="27f688c7e6c6e67a55177c4757e47b7afbf643d571acdc981dec0ee257cb0b26" exitCode=0 Feb 19 20:24:04 crc kubenswrapper[4813]: I0219 20:24:04.066144 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x592b" event={"ID":"cc5945b7-5be3-46f3-ad89-63f247099a5d","Type":"ContainerDied","Data":"27f688c7e6c6e67a55177c4757e47b7afbf643d571acdc981dec0ee257cb0b26"} Feb 19 20:24:05 crc kubenswrapper[4813]: I0219 20:24:05.079456 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-x592b" event={"ID":"cc5945b7-5be3-46f3-ad89-63f247099a5d","Type":"ContainerStarted","Data":"7be1655d134173607ee8b3e7001ead8b50ad275217a89d262b33d01e5132aa3e"} Feb 19 20:24:05 crc kubenswrapper[4813]: I0219 20:24:05.118062 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x592b" podStartSLOduration=2.662573021 podStartE2EDuration="5.118034973s" podCreationTimestamp="2026-02-19 20:24:00 +0000 UTC" firstStartedPulling="2026-02-19 20:24:02.047470215 +0000 UTC m=+6861.272910756" lastFinishedPulling="2026-02-19 20:24:04.502932157 +0000 UTC m=+6863.728372708" observedRunningTime="2026-02-19 20:24:05.102145112 +0000 UTC m=+6864.327585693" watchObservedRunningTime="2026-02-19 20:24:05.118034973 +0000 UTC m=+6864.343475544" Feb 19 20:24:09 crc kubenswrapper[4813]: I0219 20:24:09.132549 4813 generic.go:334] "Generic (PLEG): container finished" podID="6a8dd50d-5b12-495b-961f-4ebd2ebe3033" containerID="019d84991e4663a285e9ea54353cf27ed0fc6500878202957239fd225acedfe2" exitCode=0 Feb 19 20:24:09 crc kubenswrapper[4813]: I0219 20:24:09.132637 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" event={"ID":"6a8dd50d-5b12-495b-961f-4ebd2ebe3033","Type":"ContainerDied","Data":"019d84991e4663a285e9ea54353cf27ed0fc6500878202957239fd225acedfe2"} Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.600733 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.601085 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.630505 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.658760 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.675044 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-ssh-key-openstack-cell1\") pod \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.675120 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-tripleo-cleanup-combined-ca-bundle\") pod \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.675164 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf6g8\" (UniqueName: \"kubernetes.io/projected/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-kube-api-access-zf6g8\") pod \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.675448 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-ceph\") pod \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.675595 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-inventory\") pod 
\"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\" (UID: \"6a8dd50d-5b12-495b-961f-4ebd2ebe3033\") " Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.684852 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-ceph" (OuterVolumeSpecName: "ceph") pod "6a8dd50d-5b12-495b-961f-4ebd2ebe3033" (UID: "6a8dd50d-5b12-495b-961f-4ebd2ebe3033"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.686472 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "6a8dd50d-5b12-495b-961f-4ebd2ebe3033" (UID: "6a8dd50d-5b12-495b-961f-4ebd2ebe3033"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.701477 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-kube-api-access-zf6g8" (OuterVolumeSpecName: "kube-api-access-zf6g8") pod "6a8dd50d-5b12-495b-961f-4ebd2ebe3033" (UID: "6a8dd50d-5b12-495b-961f-4ebd2ebe3033"). InnerVolumeSpecName "kube-api-access-zf6g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.716595 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "6a8dd50d-5b12-495b-961f-4ebd2ebe3033" (UID: "6a8dd50d-5b12-495b-961f-4ebd2ebe3033"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.731322 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-inventory" (OuterVolumeSpecName: "inventory") pod "6a8dd50d-5b12-495b-961f-4ebd2ebe3033" (UID: "6a8dd50d-5b12-495b-961f-4ebd2ebe3033"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.778423 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.778473 4813 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.778490 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf6g8\" (UniqueName: \"kubernetes.io/projected/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-kube-api-access-zf6g8\") on node \"crc\" DevicePath \"\"" Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.778504 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:24:10 crc kubenswrapper[4813]: I0219 20:24:10.778516 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a8dd50d-5b12-495b-961f-4ebd2ebe3033-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:24:11 crc kubenswrapper[4813]: I0219 20:24:11.152778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" event={"ID":"6a8dd50d-5b12-495b-961f-4ebd2ebe3033","Type":"ContainerDied","Data":"82654a3fa8a8ff2291cd454d68beaa2c783bd3524f1257b026a7f136dac936ed"} Feb 19 20:24:11 crc kubenswrapper[4813]: I0219 20:24:11.152817 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82654a3fa8a8ff2291cd454d68beaa2c783bd3524f1257b026a7f136dac936ed" Feb 19 20:24:11 crc kubenswrapper[4813]: I0219 20:24:11.152819 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq" Feb 19 20:24:11 crc kubenswrapper[4813]: I0219 20:24:11.219241 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:11 crc kubenswrapper[4813]: I0219 20:24:11.268927 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x592b"] Feb 19 20:24:13 crc kubenswrapper[4813]: I0219 20:24:13.173138 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x592b" podUID="cc5945b7-5be3-46f3-ad89-63f247099a5d" containerName="registry-server" containerID="cri-o://7be1655d134173607ee8b3e7001ead8b50ad275217a89d262b33d01e5132aa3e" gracePeriod=2 Feb 19 20:24:13 crc kubenswrapper[4813]: I0219 20:24:13.551701 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nj5jg" podUID="5901ba13-5df6-484d-8973-ecefbacfdf96" containerName="registry-server" probeResult="failure" output=< Feb 19 20:24:13 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Feb 19 20:24:13 crc kubenswrapper[4813]: > Feb 19 20:24:13 crc kubenswrapper[4813]: I0219 20:24:13.735061 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:13 crc kubenswrapper[4813]: I0219 20:24:13.843352 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5945b7-5be3-46f3-ad89-63f247099a5d-catalog-content\") pod \"cc5945b7-5be3-46f3-ad89-63f247099a5d\" (UID: \"cc5945b7-5be3-46f3-ad89-63f247099a5d\") " Feb 19 20:24:13 crc kubenswrapper[4813]: I0219 20:24:13.843530 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5945b7-5be3-46f3-ad89-63f247099a5d-utilities\") pod \"cc5945b7-5be3-46f3-ad89-63f247099a5d\" (UID: \"cc5945b7-5be3-46f3-ad89-63f247099a5d\") " Feb 19 20:24:13 crc kubenswrapper[4813]: I0219 20:24:13.843671 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s25vw\" (UniqueName: \"kubernetes.io/projected/cc5945b7-5be3-46f3-ad89-63f247099a5d-kube-api-access-s25vw\") pod \"cc5945b7-5be3-46f3-ad89-63f247099a5d\" (UID: \"cc5945b7-5be3-46f3-ad89-63f247099a5d\") " Feb 19 20:24:13 crc kubenswrapper[4813]: I0219 20:24:13.844631 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5945b7-5be3-46f3-ad89-63f247099a5d-utilities" (OuterVolumeSpecName: "utilities") pod "cc5945b7-5be3-46f3-ad89-63f247099a5d" (UID: "cc5945b7-5be3-46f3-ad89-63f247099a5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:24:13 crc kubenswrapper[4813]: I0219 20:24:13.850807 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc5945b7-5be3-46f3-ad89-63f247099a5d-kube-api-access-s25vw" (OuterVolumeSpecName: "kube-api-access-s25vw") pod "cc5945b7-5be3-46f3-ad89-63f247099a5d" (UID: "cc5945b7-5be3-46f3-ad89-63f247099a5d"). InnerVolumeSpecName "kube-api-access-s25vw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:24:13 crc kubenswrapper[4813]: I0219 20:24:13.873267 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5945b7-5be3-46f3-ad89-63f247099a5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc5945b7-5be3-46f3-ad89-63f247099a5d" (UID: "cc5945b7-5be3-46f3-ad89-63f247099a5d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:24:13 crc kubenswrapper[4813]: I0219 20:24:13.945780 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5945b7-5be3-46f3-ad89-63f247099a5d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:24:13 crc kubenswrapper[4813]: I0219 20:24:13.945817 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s25vw\" (UniqueName: \"kubernetes.io/projected/cc5945b7-5be3-46f3-ad89-63f247099a5d-kube-api-access-s25vw\") on node \"crc\" DevicePath \"\"" Feb 19 20:24:13 crc kubenswrapper[4813]: I0219 20:24:13.945834 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5945b7-5be3-46f3-ad89-63f247099a5d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:24:14 crc kubenswrapper[4813]: I0219 20:24:14.191351 4813 generic.go:334] "Generic (PLEG): container finished" podID="cc5945b7-5be3-46f3-ad89-63f247099a5d" containerID="7be1655d134173607ee8b3e7001ead8b50ad275217a89d262b33d01e5132aa3e" exitCode=0 Feb 19 20:24:14 crc kubenswrapper[4813]: I0219 20:24:14.191391 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x592b" event={"ID":"cc5945b7-5be3-46f3-ad89-63f247099a5d","Type":"ContainerDied","Data":"7be1655d134173607ee8b3e7001ead8b50ad275217a89d262b33d01e5132aa3e"} Feb 19 20:24:14 crc kubenswrapper[4813]: I0219 20:24:14.191423 4813 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-x592b" event={"ID":"cc5945b7-5be3-46f3-ad89-63f247099a5d","Type":"ContainerDied","Data":"43ab1a0dcbeb4c1891a5d25ed71e5eeec82fc3fc6a4228f4ab168c25c582410e"} Feb 19 20:24:14 crc kubenswrapper[4813]: I0219 20:24:14.191433 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x592b" Feb 19 20:24:14 crc kubenswrapper[4813]: I0219 20:24:14.191443 4813 scope.go:117] "RemoveContainer" containerID="7be1655d134173607ee8b3e7001ead8b50ad275217a89d262b33d01e5132aa3e" Feb 19 20:24:14 crc kubenswrapper[4813]: I0219 20:24:14.220723 4813 scope.go:117] "RemoveContainer" containerID="27f688c7e6c6e67a55177c4757e47b7afbf643d571acdc981dec0ee257cb0b26" Feb 19 20:24:14 crc kubenswrapper[4813]: I0219 20:24:14.238838 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x592b"] Feb 19 20:24:14 crc kubenswrapper[4813]: I0219 20:24:14.249737 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x592b"] Feb 19 20:24:14 crc kubenswrapper[4813]: I0219 20:24:14.258422 4813 scope.go:117] "RemoveContainer" containerID="e9f173f28bff879698cc79652fb723951af6e2669db582b394c3a9b4a7371de7" Feb 19 20:24:14 crc kubenswrapper[4813]: I0219 20:24:14.310869 4813 scope.go:117] "RemoveContainer" containerID="7be1655d134173607ee8b3e7001ead8b50ad275217a89d262b33d01e5132aa3e" Feb 19 20:24:14 crc kubenswrapper[4813]: E0219 20:24:14.311694 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7be1655d134173607ee8b3e7001ead8b50ad275217a89d262b33d01e5132aa3e\": container with ID starting with 7be1655d134173607ee8b3e7001ead8b50ad275217a89d262b33d01e5132aa3e not found: ID does not exist" containerID="7be1655d134173607ee8b3e7001ead8b50ad275217a89d262b33d01e5132aa3e" Feb 19 20:24:14 crc kubenswrapper[4813]: I0219 20:24:14.311740 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7be1655d134173607ee8b3e7001ead8b50ad275217a89d262b33d01e5132aa3e"} err="failed to get container status \"7be1655d134173607ee8b3e7001ead8b50ad275217a89d262b33d01e5132aa3e\": rpc error: code = NotFound desc = could not find container \"7be1655d134173607ee8b3e7001ead8b50ad275217a89d262b33d01e5132aa3e\": container with ID starting with 7be1655d134173607ee8b3e7001ead8b50ad275217a89d262b33d01e5132aa3e not found: ID does not exist" Feb 19 20:24:14 crc kubenswrapper[4813]: I0219 20:24:14.311773 4813 scope.go:117] "RemoveContainer" containerID="27f688c7e6c6e67a55177c4757e47b7afbf643d571acdc981dec0ee257cb0b26" Feb 19 20:24:14 crc kubenswrapper[4813]: E0219 20:24:14.312291 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27f688c7e6c6e67a55177c4757e47b7afbf643d571acdc981dec0ee257cb0b26\": container with ID starting with 27f688c7e6c6e67a55177c4757e47b7afbf643d571acdc981dec0ee257cb0b26 not found: ID does not exist" containerID="27f688c7e6c6e67a55177c4757e47b7afbf643d571acdc981dec0ee257cb0b26" Feb 19 20:24:14 crc kubenswrapper[4813]: I0219 20:24:14.312332 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27f688c7e6c6e67a55177c4757e47b7afbf643d571acdc981dec0ee257cb0b26"} err="failed to get container status \"27f688c7e6c6e67a55177c4757e47b7afbf643d571acdc981dec0ee257cb0b26\": rpc error: code = NotFound desc = could not find container \"27f688c7e6c6e67a55177c4757e47b7afbf643d571acdc981dec0ee257cb0b26\": container with ID starting with 27f688c7e6c6e67a55177c4757e47b7afbf643d571acdc981dec0ee257cb0b26 not found: ID does not exist" Feb 19 20:24:14 crc kubenswrapper[4813]: I0219 20:24:14.312359 4813 scope.go:117] "RemoveContainer" containerID="e9f173f28bff879698cc79652fb723951af6e2669db582b394c3a9b4a7371de7" Feb 19 20:24:14 crc kubenswrapper[4813]: E0219 
20:24:14.312893 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9f173f28bff879698cc79652fb723951af6e2669db582b394c3a9b4a7371de7\": container with ID starting with e9f173f28bff879698cc79652fb723951af6e2669db582b394c3a9b4a7371de7 not found: ID does not exist" containerID="e9f173f28bff879698cc79652fb723951af6e2669db582b394c3a9b4a7371de7" Feb 19 20:24:14 crc kubenswrapper[4813]: I0219 20:24:14.312931 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9f173f28bff879698cc79652fb723951af6e2669db582b394c3a9b4a7371de7"} err="failed to get container status \"e9f173f28bff879698cc79652fb723951af6e2669db582b394c3a9b4a7371de7\": rpc error: code = NotFound desc = could not find container \"e9f173f28bff879698cc79652fb723951af6e2669db582b394c3a9b4a7371de7\": container with ID starting with e9f173f28bff879698cc79652fb723951af6e2669db582b394c3a9b4a7371de7 not found: ID does not exist" Feb 19 20:24:15 crc kubenswrapper[4813]: I0219 20:24:15.488478 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc5945b7-5be3-46f3-ad89-63f247099a5d" path="/var/lib/kubelet/pods/cc5945b7-5be3-46f3-ad89-63f247099a5d/volumes" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.590265 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-q4q7w"] Feb 19 20:24:17 crc kubenswrapper[4813]: E0219 20:24:17.590944 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5945b7-5be3-46f3-ad89-63f247099a5d" containerName="registry-server" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.590976 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5945b7-5be3-46f3-ad89-63f247099a5d" containerName="registry-server" Feb 19 20:24:17 crc kubenswrapper[4813]: E0219 20:24:17.591029 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cc5945b7-5be3-46f3-ad89-63f247099a5d" containerName="extract-content" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.591037 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5945b7-5be3-46f3-ad89-63f247099a5d" containerName="extract-content" Feb 19 20:24:17 crc kubenswrapper[4813]: E0219 20:24:17.591048 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5945b7-5be3-46f3-ad89-63f247099a5d" containerName="extract-utilities" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.591056 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5945b7-5be3-46f3-ad89-63f247099a5d" containerName="extract-utilities" Feb 19 20:24:17 crc kubenswrapper[4813]: E0219 20:24:17.591078 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a8dd50d-5b12-495b-961f-4ebd2ebe3033" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.591087 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a8dd50d-5b12-495b-961f-4ebd2ebe3033" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.591319 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a8dd50d-5b12-495b-961f-4ebd2ebe3033" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.591351 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5945b7-5be3-46f3-ad89-63f247099a5d" containerName="registry-server" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.595018 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.598687 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.598845 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.599266 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.599422 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.604835 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-q4q7w"] Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.734362 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-inventory\") pod \"bootstrap-openstack-openstack-cell1-q4q7w\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.734803 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-q4q7w\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.734908 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bhmjc\" (UniqueName: \"kubernetes.io/projected/c727ad19-3a38-4508-8eb9-dd36db85c774-kube-api-access-bhmjc\") pod \"bootstrap-openstack-openstack-cell1-q4q7w\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.734979 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-q4q7w\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.735117 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-ceph\") pod \"bootstrap-openstack-openstack-cell1-q4q7w\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.836948 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-ceph\") pod \"bootstrap-openstack-openstack-cell1-q4q7w\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.837117 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-inventory\") pod \"bootstrap-openstack-openstack-cell1-q4q7w\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 
20:24:17.838147 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-q4q7w\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.838204 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhmjc\" (UniqueName: \"kubernetes.io/projected/c727ad19-3a38-4508-8eb9-dd36db85c774-kube-api-access-bhmjc\") pod \"bootstrap-openstack-openstack-cell1-q4q7w\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.838227 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-q4q7w\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.843421 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-inventory\") pod \"bootstrap-openstack-openstack-cell1-q4q7w\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.844476 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-q4q7w\" (UID: 
\"c727ad19-3a38-4508-8eb9-dd36db85c774\") " pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.846541 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-q4q7w\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.849687 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-ceph\") pod \"bootstrap-openstack-openstack-cell1-q4q7w\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.860474 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhmjc\" (UniqueName: \"kubernetes.io/projected/c727ad19-3a38-4508-8eb9-dd36db85c774-kube-api-access-bhmjc\") pod \"bootstrap-openstack-openstack-cell1-q4q7w\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:17 crc kubenswrapper[4813]: I0219 20:24:17.924166 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:24:18 crc kubenswrapper[4813]: W0219 20:24:18.546687 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc727ad19_3a38_4508_8eb9_dd36db85c774.slice/crio-fe0d84fd13e5fe5f478053cf4325274a8c2e9fb5c87a4a36bdee85e5445e16cd WatchSource:0}: Error finding container fe0d84fd13e5fe5f478053cf4325274a8c2e9fb5c87a4a36bdee85e5445e16cd: Status 404 returned error can't find the container with id fe0d84fd13e5fe5f478053cf4325274a8c2e9fb5c87a4a36bdee85e5445e16cd Feb 19 20:24:18 crc kubenswrapper[4813]: I0219 20:24:18.549179 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-q4q7w"] Feb 19 20:24:19 crc kubenswrapper[4813]: I0219 20:24:19.247797 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" event={"ID":"c727ad19-3a38-4508-8eb9-dd36db85c774","Type":"ContainerStarted","Data":"fe0d84fd13e5fe5f478053cf4325274a8c2e9fb5c87a4a36bdee85e5445e16cd"} Feb 19 20:24:19 crc kubenswrapper[4813]: I0219 20:24:19.276400 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" podStartSLOduration=1.829834513 podStartE2EDuration="2.276345614s" podCreationTimestamp="2026-02-19 20:24:17 +0000 UTC" firstStartedPulling="2026-02-19 20:24:18.549742229 +0000 UTC m=+6877.775182790" lastFinishedPulling="2026-02-19 20:24:18.99625335 +0000 UTC m=+6878.221693891" observedRunningTime="2026-02-19 20:24:19.267265682 +0000 UTC m=+6878.492706233" watchObservedRunningTime="2026-02-19 20:24:19.276345614 +0000 UTC m=+6878.501786155" Feb 19 20:24:20 crc kubenswrapper[4813]: I0219 20:24:20.261031 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" 
event={"ID":"c727ad19-3a38-4508-8eb9-dd36db85c774","Type":"ContainerStarted","Data":"e828a55e42eaefb0c4b13c6f6518b78f0c70b4d51386b2f68dfb8eb61171110b"} Feb 19 20:24:22 crc kubenswrapper[4813]: I0219 20:24:22.559847 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:24:22 crc kubenswrapper[4813]: I0219 20:24:22.624061 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:24:23 crc kubenswrapper[4813]: I0219 20:24:23.384323 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nj5jg"] Feb 19 20:24:24 crc kubenswrapper[4813]: I0219 20:24:24.298900 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nj5jg" podUID="5901ba13-5df6-484d-8973-ecefbacfdf96" containerName="registry-server" containerID="cri-o://f80141018211d252399d4e17b37bd6ab4238ed1ef6c6b7892adb2d00306c6555" gracePeriod=2 Feb 19 20:24:24 crc kubenswrapper[4813]: I0219 20:24:24.811046 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:24:24 crc kubenswrapper[4813]: I0219 20:24:24.895311 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bwgv\" (UniqueName: \"kubernetes.io/projected/5901ba13-5df6-484d-8973-ecefbacfdf96-kube-api-access-9bwgv\") pod \"5901ba13-5df6-484d-8973-ecefbacfdf96\" (UID: \"5901ba13-5df6-484d-8973-ecefbacfdf96\") " Feb 19 20:24:24 crc kubenswrapper[4813]: I0219 20:24:24.895516 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5901ba13-5df6-484d-8973-ecefbacfdf96-catalog-content\") pod \"5901ba13-5df6-484d-8973-ecefbacfdf96\" (UID: \"5901ba13-5df6-484d-8973-ecefbacfdf96\") " Feb 19 20:24:24 crc kubenswrapper[4813]: I0219 20:24:24.895752 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5901ba13-5df6-484d-8973-ecefbacfdf96-utilities\") pod \"5901ba13-5df6-484d-8973-ecefbacfdf96\" (UID: \"5901ba13-5df6-484d-8973-ecefbacfdf96\") " Feb 19 20:24:24 crc kubenswrapper[4813]: I0219 20:24:24.897682 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5901ba13-5df6-484d-8973-ecefbacfdf96-utilities" (OuterVolumeSpecName: "utilities") pod "5901ba13-5df6-484d-8973-ecefbacfdf96" (UID: "5901ba13-5df6-484d-8973-ecefbacfdf96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:24:24 crc kubenswrapper[4813]: I0219 20:24:24.902924 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5901ba13-5df6-484d-8973-ecefbacfdf96-kube-api-access-9bwgv" (OuterVolumeSpecName: "kube-api-access-9bwgv") pod "5901ba13-5df6-484d-8973-ecefbacfdf96" (UID: "5901ba13-5df6-484d-8973-ecefbacfdf96"). InnerVolumeSpecName "kube-api-access-9bwgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:24:24 crc kubenswrapper[4813]: I0219 20:24:24.998044 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bwgv\" (UniqueName: \"kubernetes.io/projected/5901ba13-5df6-484d-8973-ecefbacfdf96-kube-api-access-9bwgv\") on node \"crc\" DevicePath \"\"" Feb 19 20:24:24 crc kubenswrapper[4813]: I0219 20:24:24.998075 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5901ba13-5df6-484d-8973-ecefbacfdf96-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.019228 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5901ba13-5df6-484d-8973-ecefbacfdf96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5901ba13-5df6-484d-8973-ecefbacfdf96" (UID: "5901ba13-5df6-484d-8973-ecefbacfdf96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.100005 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5901ba13-5df6-484d-8973-ecefbacfdf96-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.311438 4813 generic.go:334] "Generic (PLEG): container finished" podID="5901ba13-5df6-484d-8973-ecefbacfdf96" containerID="f80141018211d252399d4e17b37bd6ab4238ed1ef6c6b7892adb2d00306c6555" exitCode=0 Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.311522 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nj5jg" event={"ID":"5901ba13-5df6-484d-8973-ecefbacfdf96","Type":"ContainerDied","Data":"f80141018211d252399d4e17b37bd6ab4238ed1ef6c6b7892adb2d00306c6555"} Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.311796 4813 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-nj5jg" event={"ID":"5901ba13-5df6-484d-8973-ecefbacfdf96","Type":"ContainerDied","Data":"6eb1025abbdc14c8f14df51baf7f434ef467faf4ef331f0e14e6aae337d20fd6"} Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.311840 4813 scope.go:117] "RemoveContainer" containerID="f80141018211d252399d4e17b37bd6ab4238ed1ef6c6b7892adb2d00306c6555" Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.311593 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nj5jg" Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.336514 4813 scope.go:117] "RemoveContainer" containerID="1f41481004f796298a2c1993a2f9aca1130f33b565e044e031ce8b1f2555700a" Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.369525 4813 scope.go:117] "RemoveContainer" containerID="773a8fd9360d468060ffce4628d0a520bf48d7d1c54375530ff351393d3c570a" Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.402278 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nj5jg"] Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.406917 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nj5jg"] Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.419562 4813 scope.go:117] "RemoveContainer" containerID="f80141018211d252399d4e17b37bd6ab4238ed1ef6c6b7892adb2d00306c6555" Feb 19 20:24:25 crc kubenswrapper[4813]: E0219 20:24:25.420059 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f80141018211d252399d4e17b37bd6ab4238ed1ef6c6b7892adb2d00306c6555\": container with ID starting with f80141018211d252399d4e17b37bd6ab4238ed1ef6c6b7892adb2d00306c6555 not found: ID does not exist" containerID="f80141018211d252399d4e17b37bd6ab4238ed1ef6c6b7892adb2d00306c6555" Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.420100 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f80141018211d252399d4e17b37bd6ab4238ed1ef6c6b7892adb2d00306c6555"} err="failed to get container status \"f80141018211d252399d4e17b37bd6ab4238ed1ef6c6b7892adb2d00306c6555\": rpc error: code = NotFound desc = could not find container \"f80141018211d252399d4e17b37bd6ab4238ed1ef6c6b7892adb2d00306c6555\": container with ID starting with f80141018211d252399d4e17b37bd6ab4238ed1ef6c6b7892adb2d00306c6555 not found: ID does not exist" Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.420131 4813 scope.go:117] "RemoveContainer" containerID="1f41481004f796298a2c1993a2f9aca1130f33b565e044e031ce8b1f2555700a" Feb 19 20:24:25 crc kubenswrapper[4813]: E0219 20:24:25.420566 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f41481004f796298a2c1993a2f9aca1130f33b565e044e031ce8b1f2555700a\": container with ID starting with 1f41481004f796298a2c1993a2f9aca1130f33b565e044e031ce8b1f2555700a not found: ID does not exist" containerID="1f41481004f796298a2c1993a2f9aca1130f33b565e044e031ce8b1f2555700a" Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.420617 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f41481004f796298a2c1993a2f9aca1130f33b565e044e031ce8b1f2555700a"} err="failed to get container status \"1f41481004f796298a2c1993a2f9aca1130f33b565e044e031ce8b1f2555700a\": rpc error: code = NotFound desc = could not find container \"1f41481004f796298a2c1993a2f9aca1130f33b565e044e031ce8b1f2555700a\": container with ID starting with 1f41481004f796298a2c1993a2f9aca1130f33b565e044e031ce8b1f2555700a not found: ID does not exist" Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.420648 4813 scope.go:117] "RemoveContainer" containerID="773a8fd9360d468060ffce4628d0a520bf48d7d1c54375530ff351393d3c570a" Feb 19 20:24:25 crc kubenswrapper[4813]: E0219 
20:24:25.421109 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"773a8fd9360d468060ffce4628d0a520bf48d7d1c54375530ff351393d3c570a\": container with ID starting with 773a8fd9360d468060ffce4628d0a520bf48d7d1c54375530ff351393d3c570a not found: ID does not exist" containerID="773a8fd9360d468060ffce4628d0a520bf48d7d1c54375530ff351393d3c570a" Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.421139 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"773a8fd9360d468060ffce4628d0a520bf48d7d1c54375530ff351393d3c570a"} err="failed to get container status \"773a8fd9360d468060ffce4628d0a520bf48d7d1c54375530ff351393d3c570a\": rpc error: code = NotFound desc = could not find container \"773a8fd9360d468060ffce4628d0a520bf48d7d1c54375530ff351393d3c570a\": container with ID starting with 773a8fd9360d468060ffce4628d0a520bf48d7d1c54375530ff351393d3c570a not found: ID does not exist" Feb 19 20:24:25 crc kubenswrapper[4813]: I0219 20:24:25.487019 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5901ba13-5df6-484d-8973-ecefbacfdf96" path="/var/lib/kubelet/pods/5901ba13-5df6-484d-8973-ecefbacfdf96/volumes" Feb 19 20:24:30 crc kubenswrapper[4813]: I0219 20:24:30.755407 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:24:30 crc kubenswrapper[4813]: I0219 20:24:30.755832 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 20:25:00 crc kubenswrapper[4813]: I0219 20:25:00.330487 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:25:00 crc kubenswrapper[4813]: I0219 20:25:00.331139 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:25:00 crc kubenswrapper[4813]: I0219 20:25:00.331192 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 20:25:00 crc kubenswrapper[4813]: I0219 20:25:00.332107 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a4be654309edb8bce6852630e3554100052394a7d7dafc5d56a5281e8d865c5"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:25:00 crc kubenswrapper[4813]: I0219 20:25:00.332163 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://1a4be654309edb8bce6852630e3554100052394a7d7dafc5d56a5281e8d865c5" gracePeriod=600 Feb 19 20:25:00 crc kubenswrapper[4813]: I0219 20:25:00.713355 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" 
containerID="1a4be654309edb8bce6852630e3554100052394a7d7dafc5d56a5281e8d865c5" exitCode=0 Feb 19 20:25:00 crc kubenswrapper[4813]: I0219 20:25:00.713444 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"1a4be654309edb8bce6852630e3554100052394a7d7dafc5d56a5281e8d865c5"} Feb 19 20:25:00 crc kubenswrapper[4813]: I0219 20:25:00.714111 4813 scope.go:117] "RemoveContainer" containerID="9fcf1cb4974507f67a050342b20928ea455d156c81a44f38f75e3a932e650ce5" Feb 19 20:25:01 crc kubenswrapper[4813]: I0219 20:25:01.731065 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee"} Feb 19 20:27:00 crc kubenswrapper[4813]: I0219 20:27:00.329697 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:27:00 crc kubenswrapper[4813]: I0219 20:27:00.330422 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:27:27 crc kubenswrapper[4813]: I0219 20:27:27.390079 4813 generic.go:334] "Generic (PLEG): container finished" podID="c727ad19-3a38-4508-8eb9-dd36db85c774" containerID="e828a55e42eaefb0c4b13c6f6518b78f0c70b4d51386b2f68dfb8eb61171110b" exitCode=0 Feb 19 20:27:27 crc kubenswrapper[4813]: I0219 
20:27:27.390165 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" event={"ID":"c727ad19-3a38-4508-8eb9-dd36db85c774","Type":"ContainerDied","Data":"e828a55e42eaefb0c4b13c6f6518b78f0c70b4d51386b2f68dfb8eb61171110b"} Feb 19 20:27:28 crc kubenswrapper[4813]: I0219 20:27:28.871418 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:27:28 crc kubenswrapper[4813]: I0219 20:27:28.941302 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-ssh-key-openstack-cell1\") pod \"c727ad19-3a38-4508-8eb9-dd36db85c774\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " Feb 19 20:27:28 crc kubenswrapper[4813]: I0219 20:27:28.941341 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhmjc\" (UniqueName: \"kubernetes.io/projected/c727ad19-3a38-4508-8eb9-dd36db85c774-kube-api-access-bhmjc\") pod \"c727ad19-3a38-4508-8eb9-dd36db85c774\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " Feb 19 20:27:28 crc kubenswrapper[4813]: I0219 20:27:28.941372 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-inventory\") pod \"c727ad19-3a38-4508-8eb9-dd36db85c774\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " Feb 19 20:27:28 crc kubenswrapper[4813]: I0219 20:27:28.941521 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-bootstrap-combined-ca-bundle\") pod \"c727ad19-3a38-4508-8eb9-dd36db85c774\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " Feb 19 20:27:28 crc kubenswrapper[4813]: I0219 
20:27:28.941612 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-ceph\") pod \"c727ad19-3a38-4508-8eb9-dd36db85c774\" (UID: \"c727ad19-3a38-4508-8eb9-dd36db85c774\") " Feb 19 20:27:28 crc kubenswrapper[4813]: I0219 20:27:28.947191 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-ceph" (OuterVolumeSpecName: "ceph") pod "c727ad19-3a38-4508-8eb9-dd36db85c774" (UID: "c727ad19-3a38-4508-8eb9-dd36db85c774"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:27:28 crc kubenswrapper[4813]: I0219 20:27:28.950543 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c727ad19-3a38-4508-8eb9-dd36db85c774" (UID: "c727ad19-3a38-4508-8eb9-dd36db85c774"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:27:28 crc kubenswrapper[4813]: I0219 20:27:28.955988 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c727ad19-3a38-4508-8eb9-dd36db85c774-kube-api-access-bhmjc" (OuterVolumeSpecName: "kube-api-access-bhmjc") pod "c727ad19-3a38-4508-8eb9-dd36db85c774" (UID: "c727ad19-3a38-4508-8eb9-dd36db85c774"). InnerVolumeSpecName "kube-api-access-bhmjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:27:28 crc kubenswrapper[4813]: I0219 20:27:28.971840 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-inventory" (OuterVolumeSpecName: "inventory") pod "c727ad19-3a38-4508-8eb9-dd36db85c774" (UID: "c727ad19-3a38-4508-8eb9-dd36db85c774"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:27:28 crc kubenswrapper[4813]: I0219 20:27:28.980174 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "c727ad19-3a38-4508-8eb9-dd36db85c774" (UID: "c727ad19-3a38-4508-8eb9-dd36db85c774"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.044792 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.044845 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhmjc\" (UniqueName: \"kubernetes.io/projected/c727ad19-3a38-4508-8eb9-dd36db85c774-kube-api-access-bhmjc\") on node \"crc\" DevicePath \"\"" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.044865 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.044886 4813 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.044906 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c727ad19-3a38-4508-8eb9-dd36db85c774-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.413160 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" event={"ID":"c727ad19-3a38-4508-8eb9-dd36db85c774","Type":"ContainerDied","Data":"fe0d84fd13e5fe5f478053cf4325274a8c2e9fb5c87a4a36bdee85e5445e16cd"} Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.413209 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe0d84fd13e5fe5f478053cf4325274a8c2e9fb5c87a4a36bdee85e5445e16cd" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.413256 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-q4q7w" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.508905 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-7htd2"] Feb 19 20:27:29 crc kubenswrapper[4813]: E0219 20:27:29.509622 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c727ad19-3a38-4508-8eb9-dd36db85c774" containerName="bootstrap-openstack-openstack-cell1" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.509639 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c727ad19-3a38-4508-8eb9-dd36db85c774" containerName="bootstrap-openstack-openstack-cell1" Feb 19 20:27:29 crc kubenswrapper[4813]: E0219 20:27:29.509663 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5901ba13-5df6-484d-8973-ecefbacfdf96" containerName="extract-content" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.509670 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5901ba13-5df6-484d-8973-ecefbacfdf96" containerName="extract-content" Feb 19 20:27:29 crc kubenswrapper[4813]: E0219 20:27:29.509700 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5901ba13-5df6-484d-8973-ecefbacfdf96" containerName="extract-utilities" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.509706 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5901ba13-5df6-484d-8973-ecefbacfdf96" containerName="extract-utilities" Feb 19 20:27:29 crc kubenswrapper[4813]: E0219 20:27:29.509717 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5901ba13-5df6-484d-8973-ecefbacfdf96" containerName="registry-server" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.509723 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5901ba13-5df6-484d-8973-ecefbacfdf96" containerName="registry-server" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.509914 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c727ad19-3a38-4508-8eb9-dd36db85c774" containerName="bootstrap-openstack-openstack-cell1" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.509935 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5901ba13-5df6-484d-8973-ecefbacfdf96" containerName="registry-server" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.510764 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.512491 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.512639 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.512829 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.512890 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.522685 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-7htd2"] Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.554574 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-inventory\") pod \"download-cache-openstack-openstack-cell1-7htd2\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.554617 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch6ts\" (UniqueName: \"kubernetes.io/projected/57d33e77-d322-4e23-b98b-ed97c5f3b6af-kube-api-access-ch6ts\") pod \"download-cache-openstack-openstack-cell1-7htd2\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.554671 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-ceph\") pod \"download-cache-openstack-openstack-cell1-7htd2\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.556099 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-7htd2\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.658704 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-7htd2\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.658884 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-inventory\") pod \"download-cache-openstack-openstack-cell1-7htd2\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.658908 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch6ts\" (UniqueName: \"kubernetes.io/projected/57d33e77-d322-4e23-b98b-ed97c5f3b6af-kube-api-access-ch6ts\") pod \"download-cache-openstack-openstack-cell1-7htd2\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " 
pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.658987 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-ceph\") pod \"download-cache-openstack-openstack-cell1-7htd2\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.665006 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-7htd2\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.665209 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-ceph\") pod \"download-cache-openstack-openstack-cell1-7htd2\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.666604 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-inventory\") pod \"download-cache-openstack-openstack-cell1-7htd2\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.678452 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch6ts\" (UniqueName: \"kubernetes.io/projected/57d33e77-d322-4e23-b98b-ed97c5f3b6af-kube-api-access-ch6ts\") pod 
\"download-cache-openstack-openstack-cell1-7htd2\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:27:29 crc kubenswrapper[4813]: I0219 20:27:29.826576 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:27:30 crc kubenswrapper[4813]: I0219 20:27:30.330235 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:27:30 crc kubenswrapper[4813]: I0219 20:27:30.330546 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:27:30 crc kubenswrapper[4813]: I0219 20:27:30.361127 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-7htd2"] Feb 19 20:27:30 crc kubenswrapper[4813]: I0219 20:27:30.366004 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:27:30 crc kubenswrapper[4813]: I0219 20:27:30.421357 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-7htd2" event={"ID":"57d33e77-d322-4e23-b98b-ed97c5f3b6af","Type":"ContainerStarted","Data":"71d795ea8fcf3508dfa15c06ddb4d2bc1a73c7900880de4b16c377b764e39cf8"} Feb 19 20:27:31 crc kubenswrapper[4813]: I0219 20:27:31.433791 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-7htd2" 
event={"ID":"57d33e77-d322-4e23-b98b-ed97c5f3b6af","Type":"ContainerStarted","Data":"c34e9b9b788acbaef0c228a78aca5a90dadbf1ee5612789d7afbe383c7a35f3d"} Feb 19 20:27:31 crc kubenswrapper[4813]: I0219 20:27:31.463720 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-7htd2" podStartSLOduration=2.035386337 podStartE2EDuration="2.463699252s" podCreationTimestamp="2026-02-19 20:27:29 +0000 UTC" firstStartedPulling="2026-02-19 20:27:30.365700309 +0000 UTC m=+7069.591140850" lastFinishedPulling="2026-02-19 20:27:30.794013224 +0000 UTC m=+7070.019453765" observedRunningTime="2026-02-19 20:27:31.454395244 +0000 UTC m=+7070.679835805" watchObservedRunningTime="2026-02-19 20:27:31.463699252 +0000 UTC m=+7070.689139803" Feb 19 20:28:00 crc kubenswrapper[4813]: I0219 20:28:00.329462 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:28:00 crc kubenswrapper[4813]: I0219 20:28:00.330139 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:28:00 crc kubenswrapper[4813]: I0219 20:28:00.330201 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 20:28:00 crc kubenswrapper[4813]: I0219 20:28:00.331072 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:28:00 crc kubenswrapper[4813]: I0219 20:28:00.331176 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" gracePeriod=600 Feb 19 20:28:00 crc kubenswrapper[4813]: E0219 20:28:00.522474 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:28:00 crc kubenswrapper[4813]: I0219 20:28:00.775693 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" exitCode=0 Feb 19 20:28:00 crc kubenswrapper[4813]: I0219 20:28:00.775857 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee"} Feb 19 20:28:00 crc kubenswrapper[4813]: I0219 20:28:00.776426 4813 scope.go:117] "RemoveContainer" containerID="1a4be654309edb8bce6852630e3554100052394a7d7dafc5d56a5281e8d865c5" Feb 19 20:28:00 crc kubenswrapper[4813]: I0219 20:28:00.777825 4813 
scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:28:00 crc kubenswrapper[4813]: E0219 20:28:00.779091 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:28:15 crc kubenswrapper[4813]: I0219 20:28:15.487917 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:28:15 crc kubenswrapper[4813]: E0219 20:28:15.488865 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:28:27 crc kubenswrapper[4813]: I0219 20:28:27.472448 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:28:27 crc kubenswrapper[4813]: E0219 20:28:27.473441 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:28:35 crc kubenswrapper[4813]: I0219 
20:28:35.783781 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nrc4g"] Feb 19 20:28:35 crc kubenswrapper[4813]: I0219 20:28:35.788344 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:35 crc kubenswrapper[4813]: I0219 20:28:35.814038 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nrc4g"] Feb 19 20:28:35 crc kubenswrapper[4813]: I0219 20:28:35.952231 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfphn\" (UniqueName: \"kubernetes.io/projected/9e94199b-725a-48ca-bd41-5c53b817b7de-kube-api-access-zfphn\") pod \"certified-operators-nrc4g\" (UID: \"9e94199b-725a-48ca-bd41-5c53b817b7de\") " pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:35 crc kubenswrapper[4813]: I0219 20:28:35.952350 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e94199b-725a-48ca-bd41-5c53b817b7de-catalog-content\") pod \"certified-operators-nrc4g\" (UID: \"9e94199b-725a-48ca-bd41-5c53b817b7de\") " pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:35 crc kubenswrapper[4813]: I0219 20:28:35.952532 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e94199b-725a-48ca-bd41-5c53b817b7de-utilities\") pod \"certified-operators-nrc4g\" (UID: \"9e94199b-725a-48ca-bd41-5c53b817b7de\") " pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:36 crc kubenswrapper[4813]: I0219 20:28:36.054763 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e94199b-725a-48ca-bd41-5c53b817b7de-utilities\") pod 
\"certified-operators-nrc4g\" (UID: \"9e94199b-725a-48ca-bd41-5c53b817b7de\") " pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:36 crc kubenswrapper[4813]: I0219 20:28:36.054990 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfphn\" (UniqueName: \"kubernetes.io/projected/9e94199b-725a-48ca-bd41-5c53b817b7de-kube-api-access-zfphn\") pod \"certified-operators-nrc4g\" (UID: \"9e94199b-725a-48ca-bd41-5c53b817b7de\") " pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:36 crc kubenswrapper[4813]: I0219 20:28:36.055111 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e94199b-725a-48ca-bd41-5c53b817b7de-catalog-content\") pod \"certified-operators-nrc4g\" (UID: \"9e94199b-725a-48ca-bd41-5c53b817b7de\") " pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:36 crc kubenswrapper[4813]: I0219 20:28:36.055579 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e94199b-725a-48ca-bd41-5c53b817b7de-utilities\") pod \"certified-operators-nrc4g\" (UID: \"9e94199b-725a-48ca-bd41-5c53b817b7de\") " pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:36 crc kubenswrapper[4813]: I0219 20:28:36.055650 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e94199b-725a-48ca-bd41-5c53b817b7de-catalog-content\") pod \"certified-operators-nrc4g\" (UID: \"9e94199b-725a-48ca-bd41-5c53b817b7de\") " pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:36 crc kubenswrapper[4813]: I0219 20:28:36.079850 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfphn\" (UniqueName: \"kubernetes.io/projected/9e94199b-725a-48ca-bd41-5c53b817b7de-kube-api-access-zfphn\") pod 
\"certified-operators-nrc4g\" (UID: \"9e94199b-725a-48ca-bd41-5c53b817b7de\") " pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:36 crc kubenswrapper[4813]: I0219 20:28:36.116635 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:36 crc kubenswrapper[4813]: I0219 20:28:36.685990 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nrc4g"] Feb 19 20:28:37 crc kubenswrapper[4813]: I0219 20:28:37.267583 4813 generic.go:334] "Generic (PLEG): container finished" podID="9e94199b-725a-48ca-bd41-5c53b817b7de" containerID="09ca1d0f7e043a56f25b86ff4f9e203699fbed48f3294c08b4c39be4cca32cb5" exitCode=0 Feb 19 20:28:37 crc kubenswrapper[4813]: I0219 20:28:37.267651 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrc4g" event={"ID":"9e94199b-725a-48ca-bd41-5c53b817b7de","Type":"ContainerDied","Data":"09ca1d0f7e043a56f25b86ff4f9e203699fbed48f3294c08b4c39be4cca32cb5"} Feb 19 20:28:37 crc kubenswrapper[4813]: I0219 20:28:37.267922 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrc4g" event={"ID":"9e94199b-725a-48ca-bd41-5c53b817b7de","Type":"ContainerStarted","Data":"eb8899d005ab0e742273a1f7f2516ca0cb4d56429b1d2f15084619403e58e147"} Feb 19 20:28:38 crc kubenswrapper[4813]: I0219 20:28:38.281277 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrc4g" event={"ID":"9e94199b-725a-48ca-bd41-5c53b817b7de","Type":"ContainerStarted","Data":"36ff0a2b6eb3e15db820fa449f501c5d3db1131d846a5d2f4730c896d7f23271"} Feb 19 20:28:39 crc kubenswrapper[4813]: I0219 20:28:39.472587 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:28:39 crc kubenswrapper[4813]: E0219 20:28:39.473574 4813 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:28:40 crc kubenswrapper[4813]: I0219 20:28:40.306418 4813 generic.go:334] "Generic (PLEG): container finished" podID="9e94199b-725a-48ca-bd41-5c53b817b7de" containerID="36ff0a2b6eb3e15db820fa449f501c5d3db1131d846a5d2f4730c896d7f23271" exitCode=0 Feb 19 20:28:40 crc kubenswrapper[4813]: I0219 20:28:40.306480 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrc4g" event={"ID":"9e94199b-725a-48ca-bd41-5c53b817b7de","Type":"ContainerDied","Data":"36ff0a2b6eb3e15db820fa449f501c5d3db1131d846a5d2f4730c896d7f23271"} Feb 19 20:28:41 crc kubenswrapper[4813]: I0219 20:28:41.320648 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrc4g" event={"ID":"9e94199b-725a-48ca-bd41-5c53b817b7de","Type":"ContainerStarted","Data":"b4168608e5a7062571411235eeb312cc2a22152a7688d3137f79f4fbf8bf8242"} Feb 19 20:28:41 crc kubenswrapper[4813]: I0219 20:28:41.363344 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nrc4g" podStartSLOduration=2.692880711 podStartE2EDuration="6.363313097s" podCreationTimestamp="2026-02-19 20:28:35 +0000 UTC" firstStartedPulling="2026-02-19 20:28:37.27113984 +0000 UTC m=+7136.496580421" lastFinishedPulling="2026-02-19 20:28:40.941572256 +0000 UTC m=+7140.167012807" observedRunningTime="2026-02-19 20:28:41.34465064 +0000 UTC m=+7140.570091261" watchObservedRunningTime="2026-02-19 20:28:41.363313097 +0000 UTC m=+7140.588753678" Feb 19 20:28:46 crc kubenswrapper[4813]: I0219 20:28:46.117410 4813 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:46 crc kubenswrapper[4813]: I0219 20:28:46.118171 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:46 crc kubenswrapper[4813]: I0219 20:28:46.214879 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:46 crc kubenswrapper[4813]: I0219 20:28:46.440227 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:48 crc kubenswrapper[4813]: I0219 20:28:48.382510 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nrc4g"] Feb 19 20:28:48 crc kubenswrapper[4813]: I0219 20:28:48.390848 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nrc4g" podUID="9e94199b-725a-48ca-bd41-5c53b817b7de" containerName="registry-server" containerID="cri-o://b4168608e5a7062571411235eeb312cc2a22152a7688d3137f79f4fbf8bf8242" gracePeriod=2 Feb 19 20:28:48 crc kubenswrapper[4813]: I0219 20:28:48.959845 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.085067 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e94199b-725a-48ca-bd41-5c53b817b7de-utilities\") pod \"9e94199b-725a-48ca-bd41-5c53b817b7de\" (UID: \"9e94199b-725a-48ca-bd41-5c53b817b7de\") " Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.085256 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e94199b-725a-48ca-bd41-5c53b817b7de-catalog-content\") pod \"9e94199b-725a-48ca-bd41-5c53b817b7de\" (UID: \"9e94199b-725a-48ca-bd41-5c53b817b7de\") " Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.085327 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfphn\" (UniqueName: \"kubernetes.io/projected/9e94199b-725a-48ca-bd41-5c53b817b7de-kube-api-access-zfphn\") pod \"9e94199b-725a-48ca-bd41-5c53b817b7de\" (UID: \"9e94199b-725a-48ca-bd41-5c53b817b7de\") " Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.086621 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e94199b-725a-48ca-bd41-5c53b817b7de-utilities" (OuterVolumeSpecName: "utilities") pod "9e94199b-725a-48ca-bd41-5c53b817b7de" (UID: "9e94199b-725a-48ca-bd41-5c53b817b7de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.096543 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e94199b-725a-48ca-bd41-5c53b817b7de-kube-api-access-zfphn" (OuterVolumeSpecName: "kube-api-access-zfphn") pod "9e94199b-725a-48ca-bd41-5c53b817b7de" (UID: "9e94199b-725a-48ca-bd41-5c53b817b7de"). InnerVolumeSpecName "kube-api-access-zfphn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.162886 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e94199b-725a-48ca-bd41-5c53b817b7de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e94199b-725a-48ca-bd41-5c53b817b7de" (UID: "9e94199b-725a-48ca-bd41-5c53b817b7de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.188725 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e94199b-725a-48ca-bd41-5c53b817b7de-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.188798 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e94199b-725a-48ca-bd41-5c53b817b7de-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.188823 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfphn\" (UniqueName: \"kubernetes.io/projected/9e94199b-725a-48ca-bd41-5c53b817b7de-kube-api-access-zfphn\") on node \"crc\" DevicePath \"\"" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.404909 4813 generic.go:334] "Generic (PLEG): container finished" podID="9e94199b-725a-48ca-bd41-5c53b817b7de" containerID="b4168608e5a7062571411235eeb312cc2a22152a7688d3137f79f4fbf8bf8242" exitCode=0 Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.404974 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nrc4g" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.405005 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrc4g" event={"ID":"9e94199b-725a-48ca-bd41-5c53b817b7de","Type":"ContainerDied","Data":"b4168608e5a7062571411235eeb312cc2a22152a7688d3137f79f4fbf8bf8242"} Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.405507 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrc4g" event={"ID":"9e94199b-725a-48ca-bd41-5c53b817b7de","Type":"ContainerDied","Data":"eb8899d005ab0e742273a1f7f2516ca0cb4d56429b1d2f15084619403e58e147"} Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.405527 4813 scope.go:117] "RemoveContainer" containerID="b4168608e5a7062571411235eeb312cc2a22152a7688d3137f79f4fbf8bf8242" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.427194 4813 scope.go:117] "RemoveContainer" containerID="36ff0a2b6eb3e15db820fa449f501c5d3db1131d846a5d2f4730c896d7f23271" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.443742 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nrc4g"] Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.455346 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nrc4g"] Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.462868 4813 scope.go:117] "RemoveContainer" containerID="09ca1d0f7e043a56f25b86ff4f9e203699fbed48f3294c08b4c39be4cca32cb5" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.487812 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e94199b-725a-48ca-bd41-5c53b817b7de" path="/var/lib/kubelet/pods/9e94199b-725a-48ca-bd41-5c53b817b7de/volumes" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.531685 4813 scope.go:117] "RemoveContainer" 
containerID="b4168608e5a7062571411235eeb312cc2a22152a7688d3137f79f4fbf8bf8242" Feb 19 20:28:49 crc kubenswrapper[4813]: E0219 20:28:49.532353 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4168608e5a7062571411235eeb312cc2a22152a7688d3137f79f4fbf8bf8242\": container with ID starting with b4168608e5a7062571411235eeb312cc2a22152a7688d3137f79f4fbf8bf8242 not found: ID does not exist" containerID="b4168608e5a7062571411235eeb312cc2a22152a7688d3137f79f4fbf8bf8242" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.532387 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4168608e5a7062571411235eeb312cc2a22152a7688d3137f79f4fbf8bf8242"} err="failed to get container status \"b4168608e5a7062571411235eeb312cc2a22152a7688d3137f79f4fbf8bf8242\": rpc error: code = NotFound desc = could not find container \"b4168608e5a7062571411235eeb312cc2a22152a7688d3137f79f4fbf8bf8242\": container with ID starting with b4168608e5a7062571411235eeb312cc2a22152a7688d3137f79f4fbf8bf8242 not found: ID does not exist" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.532410 4813 scope.go:117] "RemoveContainer" containerID="36ff0a2b6eb3e15db820fa449f501c5d3db1131d846a5d2f4730c896d7f23271" Feb 19 20:28:49 crc kubenswrapper[4813]: E0219 20:28:49.532748 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36ff0a2b6eb3e15db820fa449f501c5d3db1131d846a5d2f4730c896d7f23271\": container with ID starting with 36ff0a2b6eb3e15db820fa449f501c5d3db1131d846a5d2f4730c896d7f23271 not found: ID does not exist" containerID="36ff0a2b6eb3e15db820fa449f501c5d3db1131d846a5d2f4730c896d7f23271" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.532767 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"36ff0a2b6eb3e15db820fa449f501c5d3db1131d846a5d2f4730c896d7f23271"} err="failed to get container status \"36ff0a2b6eb3e15db820fa449f501c5d3db1131d846a5d2f4730c896d7f23271\": rpc error: code = NotFound desc = could not find container \"36ff0a2b6eb3e15db820fa449f501c5d3db1131d846a5d2f4730c896d7f23271\": container with ID starting with 36ff0a2b6eb3e15db820fa449f501c5d3db1131d846a5d2f4730c896d7f23271 not found: ID does not exist" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.532781 4813 scope.go:117] "RemoveContainer" containerID="09ca1d0f7e043a56f25b86ff4f9e203699fbed48f3294c08b4c39be4cca32cb5" Feb 19 20:28:49 crc kubenswrapper[4813]: E0219 20:28:49.533830 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09ca1d0f7e043a56f25b86ff4f9e203699fbed48f3294c08b4c39be4cca32cb5\": container with ID starting with 09ca1d0f7e043a56f25b86ff4f9e203699fbed48f3294c08b4c39be4cca32cb5 not found: ID does not exist" containerID="09ca1d0f7e043a56f25b86ff4f9e203699fbed48f3294c08b4c39be4cca32cb5" Feb 19 20:28:49 crc kubenswrapper[4813]: I0219 20:28:49.533884 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09ca1d0f7e043a56f25b86ff4f9e203699fbed48f3294c08b4c39be4cca32cb5"} err="failed to get container status \"09ca1d0f7e043a56f25b86ff4f9e203699fbed48f3294c08b4c39be4cca32cb5\": rpc error: code = NotFound desc = could not find container \"09ca1d0f7e043a56f25b86ff4f9e203699fbed48f3294c08b4c39be4cca32cb5\": container with ID starting with 09ca1d0f7e043a56f25b86ff4f9e203699fbed48f3294c08b4c39be4cca32cb5 not found: ID does not exist" Feb 19 20:28:51 crc kubenswrapper[4813]: I0219 20:28:51.477993 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:28:51 crc kubenswrapper[4813]: E0219 20:28:51.478763 4813 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:29:05 crc kubenswrapper[4813]: I0219 20:29:05.472408 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:29:05 crc kubenswrapper[4813]: E0219 20:29:05.473547 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:29:19 crc kubenswrapper[4813]: I0219 20:29:19.471683 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:29:19 crc kubenswrapper[4813]: E0219 20:29:19.472481 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:29:33 crc kubenswrapper[4813]: I0219 20:29:33.472688 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:29:33 crc kubenswrapper[4813]: E0219 20:29:33.473575 4813 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:29:33 crc kubenswrapper[4813]: I0219 20:29:33.925080 4813 generic.go:334] "Generic (PLEG): container finished" podID="57d33e77-d322-4e23-b98b-ed97c5f3b6af" containerID="c34e9b9b788acbaef0c228a78aca5a90dadbf1ee5612789d7afbe383c7a35f3d" exitCode=0 Feb 19 20:29:33 crc kubenswrapper[4813]: I0219 20:29:33.925122 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-7htd2" event={"ID":"57d33e77-d322-4e23-b98b-ed97c5f3b6af","Type":"ContainerDied","Data":"c34e9b9b788acbaef0c228a78aca5a90dadbf1ee5612789d7afbe383c7a35f3d"} Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.411024 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.534875 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-ceph\") pod \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.535068 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-ssh-key-openstack-cell1\") pod \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.535146 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch6ts\" (UniqueName: \"kubernetes.io/projected/57d33e77-d322-4e23-b98b-ed97c5f3b6af-kube-api-access-ch6ts\") pod \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.535213 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-inventory\") pod \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\" (UID: \"57d33e77-d322-4e23-b98b-ed97c5f3b6af\") " Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.540698 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57d33e77-d322-4e23-b98b-ed97c5f3b6af-kube-api-access-ch6ts" (OuterVolumeSpecName: "kube-api-access-ch6ts") pod "57d33e77-d322-4e23-b98b-ed97c5f3b6af" (UID: "57d33e77-d322-4e23-b98b-ed97c5f3b6af"). InnerVolumeSpecName "kube-api-access-ch6ts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.540810 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-ceph" (OuterVolumeSpecName: "ceph") pod "57d33e77-d322-4e23-b98b-ed97c5f3b6af" (UID: "57d33e77-d322-4e23-b98b-ed97c5f3b6af"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.563825 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-inventory" (OuterVolumeSpecName: "inventory") pod "57d33e77-d322-4e23-b98b-ed97c5f3b6af" (UID: "57d33e77-d322-4e23-b98b-ed97c5f3b6af"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.570009 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "57d33e77-d322-4e23-b98b-ed97c5f3b6af" (UID: "57d33e77-d322-4e23-b98b-ed97c5f3b6af"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.637713 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch6ts\" (UniqueName: \"kubernetes.io/projected/57d33e77-d322-4e23-b98b-ed97c5f3b6af-kube-api-access-ch6ts\") on node \"crc\" DevicePath \"\"" Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.637759 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.637775 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.637788 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/57d33e77-d322-4e23-b98b-ed97c5f3b6af-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.949368 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-7htd2" event={"ID":"57d33e77-d322-4e23-b98b-ed97c5f3b6af","Type":"ContainerDied","Data":"71d795ea8fcf3508dfa15c06ddb4d2bc1a73c7900880de4b16c377b764e39cf8"} Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.949645 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71d795ea8fcf3508dfa15c06ddb4d2bc1a73c7900880de4b16c377b764e39cf8" Feb 19 20:29:35 crc kubenswrapper[4813]: I0219 20:29:35.949604 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-7htd2" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.044637 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-xhqs2"] Feb 19 20:29:36 crc kubenswrapper[4813]: E0219 20:29:36.045248 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d33e77-d322-4e23-b98b-ed97c5f3b6af" containerName="download-cache-openstack-openstack-cell1" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.045271 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d33e77-d322-4e23-b98b-ed97c5f3b6af" containerName="download-cache-openstack-openstack-cell1" Feb 19 20:29:36 crc kubenswrapper[4813]: E0219 20:29:36.045288 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e94199b-725a-48ca-bd41-5c53b817b7de" containerName="extract-utilities" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.045299 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e94199b-725a-48ca-bd41-5c53b817b7de" containerName="extract-utilities" Feb 19 20:29:36 crc kubenswrapper[4813]: E0219 20:29:36.045332 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e94199b-725a-48ca-bd41-5c53b817b7de" containerName="extract-content" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.045341 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e94199b-725a-48ca-bd41-5c53b817b7de" containerName="extract-content" Feb 19 20:29:36 crc kubenswrapper[4813]: E0219 20:29:36.045362 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e94199b-725a-48ca-bd41-5c53b817b7de" containerName="registry-server" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.045371 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e94199b-725a-48ca-bd41-5c53b817b7de" containerName="registry-server" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.045667 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9e94199b-725a-48ca-bd41-5c53b817b7de" containerName="registry-server" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.045704 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="57d33e77-d322-4e23-b98b-ed97c5f3b6af" containerName="download-cache-openstack-openstack-cell1" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.046666 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.049775 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.049804 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.050438 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.051230 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.058820 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-xhqs2"] Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.147369 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-inventory\") pod \"configure-network-openstack-openstack-cell1-xhqs2\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.147526 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-xhqs2\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.147563 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5n46\" (UniqueName: \"kubernetes.io/projected/83b0f363-8b5f-4f90-809d-ddb74d9a159f-kube-api-access-s5n46\") pod \"configure-network-openstack-openstack-cell1-xhqs2\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.147833 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-ceph\") pod \"configure-network-openstack-openstack-cell1-xhqs2\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.249717 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-inventory\") pod \"configure-network-openstack-openstack-cell1-xhqs2\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.249813 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-ssh-key-openstack-cell1\") pod 
\"configure-network-openstack-openstack-cell1-xhqs2\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.249848 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5n46\" (UniqueName: \"kubernetes.io/projected/83b0f363-8b5f-4f90-809d-ddb74d9a159f-kube-api-access-s5n46\") pod \"configure-network-openstack-openstack-cell1-xhqs2\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.249909 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-ceph\") pod \"configure-network-openstack-openstack-cell1-xhqs2\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.253823 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-xhqs2\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.253857 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-ceph\") pod \"configure-network-openstack-openstack-cell1-xhqs2\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.254549 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-inventory\") pod \"configure-network-openstack-openstack-cell1-xhqs2\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.266047 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5n46\" (UniqueName: \"kubernetes.io/projected/83b0f363-8b5f-4f90-809d-ddb74d9a159f-kube-api-access-s5n46\") pod \"configure-network-openstack-openstack-cell1-xhqs2\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.372353 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.943512 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-xhqs2"] Feb 19 20:29:36 crc kubenswrapper[4813]: I0219 20:29:36.961079 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" event={"ID":"83b0f363-8b5f-4f90-809d-ddb74d9a159f","Type":"ContainerStarted","Data":"0ed5583c7cfe70eb8bba3939b3122313066dc04ef48b6b5ef67782b22fa55652"} Feb 19 20:29:37 crc kubenswrapper[4813]: I0219 20:29:37.971037 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" event={"ID":"83b0f363-8b5f-4f90-809d-ddb74d9a159f","Type":"ContainerStarted","Data":"91e6a9d7f2e93505064ddc11c8849b93b225e5770c41aff22cd72f47d1e3e138"} Feb 19 20:29:37 crc kubenswrapper[4813]: I0219 20:29:37.993594 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" 
podStartSLOduration=1.51041113 podStartE2EDuration="1.993576741s" podCreationTimestamp="2026-02-19 20:29:36 +0000 UTC" firstStartedPulling="2026-02-19 20:29:36.950875599 +0000 UTC m=+7196.176316180" lastFinishedPulling="2026-02-19 20:29:37.43404123 +0000 UTC m=+7196.659481791" observedRunningTime="2026-02-19 20:29:37.985921705 +0000 UTC m=+7197.211362246" watchObservedRunningTime="2026-02-19 20:29:37.993576741 +0000 UTC m=+7197.219017282" Feb 19 20:29:45 crc kubenswrapper[4813]: I0219 20:29:45.471057 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:29:45 crc kubenswrapper[4813]: E0219 20:29:45.471728 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:29:56 crc kubenswrapper[4813]: I0219 20:29:56.471429 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:29:56 crc kubenswrapper[4813]: E0219 20:29:56.472370 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:30:00 crc kubenswrapper[4813]: I0219 20:30:00.173259 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9"] Feb 19 20:30:00 crc 
kubenswrapper[4813]: I0219 20:30:00.175585 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" Feb 19 20:30:00 crc kubenswrapper[4813]: I0219 20:30:00.179370 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 20:30:00 crc kubenswrapper[4813]: I0219 20:30:00.182402 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 20:30:00 crc kubenswrapper[4813]: I0219 20:30:00.185154 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9"] Feb 19 20:30:00 crc kubenswrapper[4813]: I0219 20:30:00.206002 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-secret-volume\") pod \"collect-profiles-29525550-mxfh9\" (UID: \"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" Feb 19 20:30:00 crc kubenswrapper[4813]: I0219 20:30:00.206157 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gzsp\" (UniqueName: \"kubernetes.io/projected/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-kube-api-access-9gzsp\") pod \"collect-profiles-29525550-mxfh9\" (UID: \"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" Feb 19 20:30:00 crc kubenswrapper[4813]: I0219 20:30:00.206361 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-config-volume\") pod \"collect-profiles-29525550-mxfh9\" 
(UID: \"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" Feb 19 20:30:00 crc kubenswrapper[4813]: I0219 20:30:00.308189 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-config-volume\") pod \"collect-profiles-29525550-mxfh9\" (UID: \"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" Feb 19 20:30:00 crc kubenswrapper[4813]: I0219 20:30:00.308449 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-secret-volume\") pod \"collect-profiles-29525550-mxfh9\" (UID: \"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" Feb 19 20:30:00 crc kubenswrapper[4813]: I0219 20:30:00.308604 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gzsp\" (UniqueName: \"kubernetes.io/projected/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-kube-api-access-9gzsp\") pod \"collect-profiles-29525550-mxfh9\" (UID: \"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" Feb 19 20:30:00 crc kubenswrapper[4813]: I0219 20:30:00.311086 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-config-volume\") pod \"collect-profiles-29525550-mxfh9\" (UID: \"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" Feb 19 20:30:00 crc kubenswrapper[4813]: I0219 20:30:00.317617 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-secret-volume\") pod \"collect-profiles-29525550-mxfh9\" (UID: \"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" Feb 19 20:30:00 crc kubenswrapper[4813]: I0219 20:30:00.328265 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gzsp\" (UniqueName: \"kubernetes.io/projected/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-kube-api-access-9gzsp\") pod \"collect-profiles-29525550-mxfh9\" (UID: \"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" Feb 19 20:30:00 crc kubenswrapper[4813]: I0219 20:30:00.507459 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" Feb 19 20:30:00 crc kubenswrapper[4813]: I0219 20:30:00.986318 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9"] Feb 19 20:30:01 crc kubenswrapper[4813]: I0219 20:30:01.237597 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" event={"ID":"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e","Type":"ContainerStarted","Data":"2d8bf31f31a3933a7808ed8793c239fadbb6321b7c44b559eb3e924d8b92eb29"} Feb 19 20:30:01 crc kubenswrapper[4813]: I0219 20:30:01.237973 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" event={"ID":"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e","Type":"ContainerStarted","Data":"059de1471c77f9805f60e15fb91238880c6b558bf42d22c12ea5709c6bc05461"} Feb 19 20:30:01 crc kubenswrapper[4813]: I0219 20:30:01.256947 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" 
podStartSLOduration=1.256917936 podStartE2EDuration="1.256917936s" podCreationTimestamp="2026-02-19 20:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:30:01.255089409 +0000 UTC m=+7220.480529950" watchObservedRunningTime="2026-02-19 20:30:01.256917936 +0000 UTC m=+7220.482358487" Feb 19 20:30:02 crc kubenswrapper[4813]: I0219 20:30:02.257374 4813 generic.go:334] "Generic (PLEG): container finished" podID="ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e" containerID="2d8bf31f31a3933a7808ed8793c239fadbb6321b7c44b559eb3e924d8b92eb29" exitCode=0 Feb 19 20:30:02 crc kubenswrapper[4813]: I0219 20:30:02.257487 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" event={"ID":"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e","Type":"ContainerDied","Data":"2d8bf31f31a3933a7808ed8793c239fadbb6321b7c44b559eb3e924d8b92eb29"} Feb 19 20:30:03 crc kubenswrapper[4813]: I0219 20:30:03.664823 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" Feb 19 20:30:03 crc kubenswrapper[4813]: I0219 20:30:03.784424 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-secret-volume\") pod \"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e\" (UID: \"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e\") " Feb 19 20:30:03 crc kubenswrapper[4813]: I0219 20:30:03.784560 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gzsp\" (UniqueName: \"kubernetes.io/projected/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-kube-api-access-9gzsp\") pod \"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e\" (UID: \"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e\") " Feb 19 20:30:03 crc kubenswrapper[4813]: I0219 20:30:03.784663 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-config-volume\") pod \"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e\" (UID: \"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e\") " Feb 19 20:30:03 crc kubenswrapper[4813]: I0219 20:30:03.785491 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-config-volume" (OuterVolumeSpecName: "config-volume") pod "ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e" (UID: "ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:30:03 crc kubenswrapper[4813]: I0219 20:30:03.789408 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-kube-api-access-9gzsp" (OuterVolumeSpecName: "kube-api-access-9gzsp") pod "ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e" (UID: "ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e"). 
InnerVolumeSpecName "kube-api-access-9gzsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:30:03 crc kubenswrapper[4813]: I0219 20:30:03.789495 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e" (UID: "ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:30:03 crc kubenswrapper[4813]: I0219 20:30:03.886939 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:30:03 crc kubenswrapper[4813]: I0219 20:30:03.887019 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gzsp\" (UniqueName: \"kubernetes.io/projected/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-kube-api-access-9gzsp\") on node \"crc\" DevicePath \"\"" Feb 19 20:30:03 crc kubenswrapper[4813]: I0219 20:30:03.887088 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:30:04 crc kubenswrapper[4813]: I0219 20:30:04.279501 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" event={"ID":"ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e","Type":"ContainerDied","Data":"059de1471c77f9805f60e15fb91238880c6b558bf42d22c12ea5709c6bc05461"} Feb 19 20:30:04 crc kubenswrapper[4813]: I0219 20:30:04.279537 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="059de1471c77f9805f60e15fb91238880c6b558bf42d22c12ea5709c6bc05461" Feb 19 20:30:04 crc kubenswrapper[4813]: I0219 20:30:04.279592 4813 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9" Feb 19 20:30:04 crc kubenswrapper[4813]: I0219 20:30:04.344642 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84"] Feb 19 20:30:04 crc kubenswrapper[4813]: I0219 20:30:04.356265 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525505-tml84"] Feb 19 20:30:05 crc kubenswrapper[4813]: I0219 20:30:05.500922 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="870c7782-2146-4cec-92bc-590e85dfc2b8" path="/var/lib/kubelet/pods/870c7782-2146-4cec-92bc-590e85dfc2b8/volumes" Feb 19 20:30:08 crc kubenswrapper[4813]: I0219 20:30:08.471749 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:30:08 crc kubenswrapper[4813]: E0219 20:30:08.472807 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:30:23 crc kubenswrapper[4813]: I0219 20:30:23.472215 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:30:23 crc kubenswrapper[4813]: E0219 20:30:23.473367 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:30:38 crc kubenswrapper[4813]: I0219 20:30:38.473609 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:30:38 crc kubenswrapper[4813]: E0219 20:30:38.474822 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:30:51 crc kubenswrapper[4813]: I0219 20:30:51.487450 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:30:51 crc kubenswrapper[4813]: E0219 20:30:51.488866 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:30:56 crc kubenswrapper[4813]: I0219 20:30:56.869057 4813 generic.go:334] "Generic (PLEG): container finished" podID="83b0f363-8b5f-4f90-809d-ddb74d9a159f" containerID="91e6a9d7f2e93505064ddc11c8849b93b225e5770c41aff22cd72f47d1e3e138" exitCode=0 Feb 19 20:30:56 crc kubenswrapper[4813]: I0219 20:30:56.869039 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" 
event={"ID":"83b0f363-8b5f-4f90-809d-ddb74d9a159f","Type":"ContainerDied","Data":"91e6a9d7f2e93505064ddc11c8849b93b225e5770c41aff22cd72f47d1e3e138"} Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.312210 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.490172 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-ssh-key-openstack-cell1\") pod \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.490746 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-ceph\") pod \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.490810 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5n46\" (UniqueName: \"kubernetes.io/projected/83b0f363-8b5f-4f90-809d-ddb74d9a159f-kube-api-access-s5n46\") pod \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.490903 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-inventory\") pod \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\" (UID: \"83b0f363-8b5f-4f90-809d-ddb74d9a159f\") " Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.495523 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-ceph" (OuterVolumeSpecName: 
"ceph") pod "83b0f363-8b5f-4f90-809d-ddb74d9a159f" (UID: "83b0f363-8b5f-4f90-809d-ddb74d9a159f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.497171 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b0f363-8b5f-4f90-809d-ddb74d9a159f-kube-api-access-s5n46" (OuterVolumeSpecName: "kube-api-access-s5n46") pod "83b0f363-8b5f-4f90-809d-ddb74d9a159f" (UID: "83b0f363-8b5f-4f90-809d-ddb74d9a159f"). InnerVolumeSpecName "kube-api-access-s5n46". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.523551 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "83b0f363-8b5f-4f90-809d-ddb74d9a159f" (UID: "83b0f363-8b5f-4f90-809d-ddb74d9a159f"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.540539 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-inventory" (OuterVolumeSpecName: "inventory") pod "83b0f363-8b5f-4f90-809d-ddb74d9a159f" (UID: "83b0f363-8b5f-4f90-809d-ddb74d9a159f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.593877 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.593911 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.594072 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5n46\" (UniqueName: \"kubernetes.io/projected/83b0f363-8b5f-4f90-809d-ddb74d9a159f-kube-api-access-s5n46\") on node \"crc\" DevicePath \"\"" Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.594579 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83b0f363-8b5f-4f90-809d-ddb74d9a159f-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.897847 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" event={"ID":"83b0f363-8b5f-4f90-809d-ddb74d9a159f","Type":"ContainerDied","Data":"0ed5583c7cfe70eb8bba3939b3122313066dc04ef48b6b5ef67782b22fa55652"} Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.897907 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ed5583c7cfe70eb8bba3939b3122313066dc04ef48b6b5ef67782b22fa55652" Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.897931 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-xhqs2" Feb 19 20:30:58 crc kubenswrapper[4813]: I0219 20:30:58.990836 4813 scope.go:117] "RemoveContainer" containerID="e40bc99d97e432132dd3d0bbba2baa7522e379f4543de86e9f400a37ada16696" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.081001 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-5r98z"] Feb 19 20:30:59 crc kubenswrapper[4813]: E0219 20:30:59.081432 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e" containerName="collect-profiles" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.081449 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e" containerName="collect-profiles" Feb 19 20:30:59 crc kubenswrapper[4813]: E0219 20:30:59.081477 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b0f363-8b5f-4f90-809d-ddb74d9a159f" containerName="configure-network-openstack-openstack-cell1" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.081485 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b0f363-8b5f-4f90-809d-ddb74d9a159f" containerName="configure-network-openstack-openstack-cell1" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.081684 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b0f363-8b5f-4f90-809d-ddb74d9a159f" containerName="configure-network-openstack-openstack-cell1" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.081704 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e" containerName="collect-profiles" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.082433 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.085485 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.085770 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.089337 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.089914 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.101163 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-5r98z"] Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.208334 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-ceph\") pod \"validate-network-openstack-openstack-cell1-5r98z\" (UID: \"e3790acf-636a-4642-924c-6cfb71a2aa55\") " pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.208474 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdfst\" (UniqueName: \"kubernetes.io/projected/e3790acf-636a-4642-924c-6cfb71a2aa55-kube-api-access-qdfst\") pod \"validate-network-openstack-openstack-cell1-5r98z\" (UID: \"e3790acf-636a-4642-924c-6cfb71a2aa55\") " pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.208622 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-5r98z\" (UID: \"e3790acf-636a-4642-924c-6cfb71a2aa55\") " pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.208685 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-inventory\") pod \"validate-network-openstack-openstack-cell1-5r98z\" (UID: \"e3790acf-636a-4642-924c-6cfb71a2aa55\") " pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.311483 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdfst\" (UniqueName: \"kubernetes.io/projected/e3790acf-636a-4642-924c-6cfb71a2aa55-kube-api-access-qdfst\") pod \"validate-network-openstack-openstack-cell1-5r98z\" (UID: \"e3790acf-636a-4642-924c-6cfb71a2aa55\") " pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.311594 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-5r98z\" (UID: \"e3790acf-636a-4642-924c-6cfb71a2aa55\") " pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.311647 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-inventory\") pod \"validate-network-openstack-openstack-cell1-5r98z\" (UID: 
\"e3790acf-636a-4642-924c-6cfb71a2aa55\") " pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.311813 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-ceph\") pod \"validate-network-openstack-openstack-cell1-5r98z\" (UID: \"e3790acf-636a-4642-924c-6cfb71a2aa55\") " pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.315595 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-5r98z\" (UID: \"e3790acf-636a-4642-924c-6cfb71a2aa55\") " pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.316344 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-ceph\") pod \"validate-network-openstack-openstack-cell1-5r98z\" (UID: \"e3790acf-636a-4642-924c-6cfb71a2aa55\") " pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.316508 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-inventory\") pod \"validate-network-openstack-openstack-cell1-5r98z\" (UID: \"e3790acf-636a-4642-924c-6cfb71a2aa55\") " pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.328611 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdfst\" (UniqueName: 
\"kubernetes.io/projected/e3790acf-636a-4642-924c-6cfb71a2aa55-kube-api-access-qdfst\") pod \"validate-network-openstack-openstack-cell1-5r98z\" (UID: \"e3790acf-636a-4642-924c-6cfb71a2aa55\") " pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.425083 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:30:59 crc kubenswrapper[4813]: I0219 20:30:59.964741 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-5r98z"] Feb 19 20:31:00 crc kubenswrapper[4813]: I0219 20:31:00.921411 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5r98z" event={"ID":"e3790acf-636a-4642-924c-6cfb71a2aa55","Type":"ContainerStarted","Data":"bee1deaf93ce752cba47199b26953681e28e54d257ef95171045451fdab3de3e"} Feb 19 20:31:00 crc kubenswrapper[4813]: I0219 20:31:00.921998 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5r98z" event={"ID":"e3790acf-636a-4642-924c-6cfb71a2aa55","Type":"ContainerStarted","Data":"791b9b204451c4fd2990e63451969db891f336ecb02bb2f33dca1ffc63139338"} Feb 19 20:31:00 crc kubenswrapper[4813]: I0219 20:31:00.938054 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-5r98z" podStartSLOduration=1.357470969 podStartE2EDuration="1.938037651s" podCreationTimestamp="2026-02-19 20:30:59 +0000 UTC" firstStartedPulling="2026-02-19 20:30:59.969246485 +0000 UTC m=+7279.194687016" lastFinishedPulling="2026-02-19 20:31:00.549813157 +0000 UTC m=+7279.775253698" observedRunningTime="2026-02-19 20:31:00.934161642 +0000 UTC m=+7280.159602193" watchObservedRunningTime="2026-02-19 20:31:00.938037651 +0000 UTC m=+7280.163478192" Feb 19 20:31:02 crc 
kubenswrapper[4813]: I0219 20:31:02.471595 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:31:02 crc kubenswrapper[4813]: E0219 20:31:02.472255 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:31:05 crc kubenswrapper[4813]: I0219 20:31:05.979722 4813 generic.go:334] "Generic (PLEG): container finished" podID="e3790acf-636a-4642-924c-6cfb71a2aa55" containerID="bee1deaf93ce752cba47199b26953681e28e54d257ef95171045451fdab3de3e" exitCode=0 Feb 19 20:31:05 crc kubenswrapper[4813]: I0219 20:31:05.979907 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5r98z" event={"ID":"e3790acf-636a-4642-924c-6cfb71a2aa55","Type":"ContainerDied","Data":"bee1deaf93ce752cba47199b26953681e28e54d257ef95171045451fdab3de3e"} Feb 19 20:31:07 crc kubenswrapper[4813]: I0219 20:31:07.511465 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:31:07 crc kubenswrapper[4813]: I0219 20:31:07.598264 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-ssh-key-openstack-cell1\") pod \"e3790acf-636a-4642-924c-6cfb71a2aa55\" (UID: \"e3790acf-636a-4642-924c-6cfb71a2aa55\") " Feb 19 20:31:07 crc kubenswrapper[4813]: I0219 20:31:07.598670 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-inventory\") pod \"e3790acf-636a-4642-924c-6cfb71a2aa55\" (UID: \"e3790acf-636a-4642-924c-6cfb71a2aa55\") " Feb 19 20:31:07 crc kubenswrapper[4813]: I0219 20:31:07.598907 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-ceph\") pod \"e3790acf-636a-4642-924c-6cfb71a2aa55\" (UID: \"e3790acf-636a-4642-924c-6cfb71a2aa55\") " Feb 19 20:31:07 crc kubenswrapper[4813]: I0219 20:31:07.598992 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdfst\" (UniqueName: \"kubernetes.io/projected/e3790acf-636a-4642-924c-6cfb71a2aa55-kube-api-access-qdfst\") pod \"e3790acf-636a-4642-924c-6cfb71a2aa55\" (UID: \"e3790acf-636a-4642-924c-6cfb71a2aa55\") " Feb 19 20:31:07 crc kubenswrapper[4813]: I0219 20:31:07.604771 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3790acf-636a-4642-924c-6cfb71a2aa55-kube-api-access-qdfst" (OuterVolumeSpecName: "kube-api-access-qdfst") pod "e3790acf-636a-4642-924c-6cfb71a2aa55" (UID: "e3790acf-636a-4642-924c-6cfb71a2aa55"). InnerVolumeSpecName "kube-api-access-qdfst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:31:07 crc kubenswrapper[4813]: I0219 20:31:07.618170 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-ceph" (OuterVolumeSpecName: "ceph") pod "e3790acf-636a-4642-924c-6cfb71a2aa55" (UID: "e3790acf-636a-4642-924c-6cfb71a2aa55"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:31:07 crc kubenswrapper[4813]: I0219 20:31:07.626598 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "e3790acf-636a-4642-924c-6cfb71a2aa55" (UID: "e3790acf-636a-4642-924c-6cfb71a2aa55"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:31:07 crc kubenswrapper[4813]: I0219 20:31:07.641511 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-inventory" (OuterVolumeSpecName: "inventory") pod "e3790acf-636a-4642-924c-6cfb71a2aa55" (UID: "e3790acf-636a-4642-924c-6cfb71a2aa55"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:31:07 crc kubenswrapper[4813]: I0219 20:31:07.703344 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:31:07 crc kubenswrapper[4813]: I0219 20:31:07.703391 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:31:07 crc kubenswrapper[4813]: I0219 20:31:07.703404 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e3790acf-636a-4642-924c-6cfb71a2aa55-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:31:07 crc kubenswrapper[4813]: I0219 20:31:07.703417 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdfst\" (UniqueName: \"kubernetes.io/projected/e3790acf-636a-4642-924c-6cfb71a2aa55-kube-api-access-qdfst\") on node \"crc\" DevicePath \"\"" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.004718 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5r98z" event={"ID":"e3790acf-636a-4642-924c-6cfb71a2aa55","Type":"ContainerDied","Data":"791b9b204451c4fd2990e63451969db891f336ecb02bb2f33dca1ffc63139338"} Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.004768 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="791b9b204451c4fd2990e63451969db891f336ecb02bb2f33dca1ffc63139338" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.004776 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5r98z" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.081518 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-hpz86"] Feb 19 20:31:08 crc kubenswrapper[4813]: E0219 20:31:08.081946 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3790acf-636a-4642-924c-6cfb71a2aa55" containerName="validate-network-openstack-openstack-cell1" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.081973 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3790acf-636a-4642-924c-6cfb71a2aa55" containerName="validate-network-openstack-openstack-cell1" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.082167 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3790acf-636a-4642-924c-6cfb71a2aa55" containerName="validate-network-openstack-openstack-cell1" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.084313 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.085984 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.087549 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.087672 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.087774 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.097471 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-hpz86"] Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.220110 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-hpz86\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.220242 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-ceph\") pod \"install-os-openstack-openstack-cell1-hpz86\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.220327 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-inventory\") pod \"install-os-openstack-openstack-cell1-hpz86\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.220365 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq5m7\" (UniqueName: \"kubernetes.io/projected/e2e16257-f929-4408-b781-d7f6aff2a944-kube-api-access-pq5m7\") pod \"install-os-openstack-openstack-cell1-hpz86\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.323182 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-hpz86\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.323378 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-ceph\") pod \"install-os-openstack-openstack-cell1-hpz86\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.323493 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-inventory\") pod \"install-os-openstack-openstack-cell1-hpz86\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.323554 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq5m7\" (UniqueName: \"kubernetes.io/projected/e2e16257-f929-4408-b781-d7f6aff2a944-kube-api-access-pq5m7\") pod \"install-os-openstack-openstack-cell1-hpz86\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.328197 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-inventory\") pod \"install-os-openstack-openstack-cell1-hpz86\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.328636 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-hpz86\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.329987 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-ceph\") pod \"install-os-openstack-openstack-cell1-hpz86\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.344627 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq5m7\" (UniqueName: \"kubernetes.io/projected/e2e16257-f929-4408-b781-d7f6aff2a944-kube-api-access-pq5m7\") pod \"install-os-openstack-openstack-cell1-hpz86\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 
20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.406709 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 20:31:08 crc kubenswrapper[4813]: I0219 20:31:08.987413 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-hpz86"] Feb 19 20:31:09 crc kubenswrapper[4813]: I0219 20:31:09.017474 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-hpz86" event={"ID":"e2e16257-f929-4408-b781-d7f6aff2a944","Type":"ContainerStarted","Data":"086dd4e4490c1371f45899c9d8830868d93124eeaa36a6f80678c18e1a47e2aa"} Feb 19 20:31:10 crc kubenswrapper[4813]: I0219 20:31:10.028595 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-hpz86" event={"ID":"e2e16257-f929-4408-b781-d7f6aff2a944","Type":"ContainerStarted","Data":"5a4b639042c19f176a9ae94621899e875d0a4b3fa4a4ee269d3fb1eb3f70dae7"} Feb 19 20:31:10 crc kubenswrapper[4813]: I0219 20:31:10.052784 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-hpz86" podStartSLOduration=1.550936198 podStartE2EDuration="2.052763275s" podCreationTimestamp="2026-02-19 20:31:08 +0000 UTC" firstStartedPulling="2026-02-19 20:31:08.990289411 +0000 UTC m=+7288.215729992" lastFinishedPulling="2026-02-19 20:31:09.492116538 +0000 UTC m=+7288.717557069" observedRunningTime="2026-02-19 20:31:10.044261692 +0000 UTC m=+7289.269702293" watchObservedRunningTime="2026-02-19 20:31:10.052763275 +0000 UTC m=+7289.278203816" Feb 19 20:31:16 crc kubenswrapper[4813]: I0219 20:31:16.472826 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:31:16 crc kubenswrapper[4813]: E0219 20:31:16.473705 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:31:28 crc kubenswrapper[4813]: I0219 20:31:28.471829 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:31:28 crc kubenswrapper[4813]: E0219 20:31:28.472759 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:31:41 crc kubenswrapper[4813]: I0219 20:31:41.481517 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:31:41 crc kubenswrapper[4813]: E0219 20:31:41.482419 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:31:54 crc kubenswrapper[4813]: I0219 20:31:54.472374 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:31:54 crc kubenswrapper[4813]: E0219 20:31:54.473569 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:31:56 crc kubenswrapper[4813]: I0219 20:31:56.526797 4813 generic.go:334] "Generic (PLEG): container finished" podID="e2e16257-f929-4408-b781-d7f6aff2a944" containerID="5a4b639042c19f176a9ae94621899e875d0a4b3fa4a4ee269d3fb1eb3f70dae7" exitCode=0 Feb 19 20:31:56 crc kubenswrapper[4813]: I0219 20:31:56.527028 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-hpz86" event={"ID":"e2e16257-f929-4408-b781-d7f6aff2a944","Type":"ContainerDied","Data":"5a4b639042c19f176a9ae94621899e875d0a4b3fa4a4ee269d3fb1eb3f70dae7"} Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.141668 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.247576 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-ssh-key-openstack-cell1\") pod \"e2e16257-f929-4408-b781-d7f6aff2a944\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.247655 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-ceph\") pod \"e2e16257-f929-4408-b781-d7f6aff2a944\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.247793 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq5m7\" (UniqueName: \"kubernetes.io/projected/e2e16257-f929-4408-b781-d7f6aff2a944-kube-api-access-pq5m7\") pod \"e2e16257-f929-4408-b781-d7f6aff2a944\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.247826 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-inventory\") pod \"e2e16257-f929-4408-b781-d7f6aff2a944\" (UID: \"e2e16257-f929-4408-b781-d7f6aff2a944\") " Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.254352 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e16257-f929-4408-b781-d7f6aff2a944-kube-api-access-pq5m7" (OuterVolumeSpecName: "kube-api-access-pq5m7") pod "e2e16257-f929-4408-b781-d7f6aff2a944" (UID: "e2e16257-f929-4408-b781-d7f6aff2a944"). InnerVolumeSpecName "kube-api-access-pq5m7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.254539 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-ceph" (OuterVolumeSpecName: "ceph") pod "e2e16257-f929-4408-b781-d7f6aff2a944" (UID: "e2e16257-f929-4408-b781-d7f6aff2a944"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.275665 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-inventory" (OuterVolumeSpecName: "inventory") pod "e2e16257-f929-4408-b781-d7f6aff2a944" (UID: "e2e16257-f929-4408-b781-d7f6aff2a944"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.288184 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "e2e16257-f929-4408-b781-d7f6aff2a944" (UID: "e2e16257-f929-4408-b781-d7f6aff2a944"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.350429 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.350466 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.350478 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq5m7\" (UniqueName: \"kubernetes.io/projected/e2e16257-f929-4408-b781-d7f6aff2a944-kube-api-access-pq5m7\") on node \"crc\" DevicePath \"\"" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.350486 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2e16257-f929-4408-b781-d7f6aff2a944-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.550791 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-hpz86" event={"ID":"e2e16257-f929-4408-b781-d7f6aff2a944","Type":"ContainerDied","Data":"086dd4e4490c1371f45899c9d8830868d93124eeaa36a6f80678c18e1a47e2aa"} Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.550836 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="086dd4e4490c1371f45899c9d8830868d93124eeaa36a6f80678c18e1a47e2aa" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.550901 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-hpz86" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.684681 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-4gdmm"] Feb 19 20:31:58 crc kubenswrapper[4813]: E0219 20:31:58.685633 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e16257-f929-4408-b781-d7f6aff2a944" containerName="install-os-openstack-openstack-cell1" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.685678 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e16257-f929-4408-b781-d7f6aff2a944" containerName="install-os-openstack-openstack-cell1" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.686125 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e16257-f929-4408-b781-d7f6aff2a944" containerName="install-os-openstack-openstack-cell1" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.687559 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.690130 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.690423 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.690856 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.691786 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.696097 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-4gdmm"] Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.760024 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wds6\" (UniqueName: \"kubernetes.io/projected/9c759860-2fa4-493c-8b71-796165e62357-kube-api-access-9wds6\") pod \"configure-os-openstack-openstack-cell1-4gdmm\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.760189 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-ceph\") pod \"configure-os-openstack-openstack-cell1-4gdmm\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.760220 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-4gdmm\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.760620 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-inventory\") pod \"configure-os-openstack-openstack-cell1-4gdmm\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.863344 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-inventory\") pod \"configure-os-openstack-openstack-cell1-4gdmm\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.863454 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wds6\" (UniqueName: \"kubernetes.io/projected/9c759860-2fa4-493c-8b71-796165e62357-kube-api-access-9wds6\") pod \"configure-os-openstack-openstack-cell1-4gdmm\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.863636 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-ceph\") pod \"configure-os-openstack-openstack-cell1-4gdmm\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:31:58 crc 
kubenswrapper[4813]: I0219 20:31:58.863678 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-4gdmm\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.867272 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-inventory\") pod \"configure-os-openstack-openstack-cell1-4gdmm\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.867341 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-ceph\") pod \"configure-os-openstack-openstack-cell1-4gdmm\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.868450 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-4gdmm\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:31:58 crc kubenswrapper[4813]: I0219 20:31:58.890192 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wds6\" (UniqueName: \"kubernetes.io/projected/9c759860-2fa4-493c-8b71-796165e62357-kube-api-access-9wds6\") pod \"configure-os-openstack-openstack-cell1-4gdmm\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " 
pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:31:59 crc kubenswrapper[4813]: I0219 20:31:59.028984 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:31:59 crc kubenswrapper[4813]: I0219 20:31:59.597091 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-4gdmm"] Feb 19 20:31:59 crc kubenswrapper[4813]: W0219 20:31:59.606447 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c759860_2fa4_493c_8b71_796165e62357.slice/crio-edaff71aff9b761a9f8e474a18b5b7baa7f89dc31ad9e54044543283f12cf63c WatchSource:0}: Error finding container edaff71aff9b761a9f8e474a18b5b7baa7f89dc31ad9e54044543283f12cf63c: Status 404 returned error can't find the container with id edaff71aff9b761a9f8e474a18b5b7baa7f89dc31ad9e54044543283f12cf63c Feb 19 20:32:00 crc kubenswrapper[4813]: I0219 20:32:00.569763 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" event={"ID":"9c759860-2fa4-493c-8b71-796165e62357","Type":"ContainerStarted","Data":"f876f12e25b8193bc76ecb05f63a823f442ce1c397a6a99a30495bde527bdbc9"} Feb 19 20:32:00 crc kubenswrapper[4813]: I0219 20:32:00.570081 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" event={"ID":"9c759860-2fa4-493c-8b71-796165e62357","Type":"ContainerStarted","Data":"edaff71aff9b761a9f8e474a18b5b7baa7f89dc31ad9e54044543283f12cf63c"} Feb 19 20:32:00 crc kubenswrapper[4813]: I0219 20:32:00.601312 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" podStartSLOduration=2.191230633 podStartE2EDuration="2.601289822s" podCreationTimestamp="2026-02-19 20:31:58 +0000 UTC" firstStartedPulling="2026-02-19 
20:31:59.610038241 +0000 UTC m=+7338.835478782" lastFinishedPulling="2026-02-19 20:32:00.02009744 +0000 UTC m=+7339.245537971" observedRunningTime="2026-02-19 20:32:00.594202713 +0000 UTC m=+7339.819643264" watchObservedRunningTime="2026-02-19 20:32:00.601289822 +0000 UTC m=+7339.826730363" Feb 19 20:32:07 crc kubenswrapper[4813]: I0219 20:32:07.472144 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:32:07 crc kubenswrapper[4813]: E0219 20:32:07.473108 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:32:19 crc kubenswrapper[4813]: I0219 20:32:19.595127 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zsrmq"] Feb 19 20:32:19 crc kubenswrapper[4813]: I0219 20:32:19.598582 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:19 crc kubenswrapper[4813]: I0219 20:32:19.615730 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zsrmq"] Feb 19 20:32:19 crc kubenswrapper[4813]: I0219 20:32:19.744007 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45b6z\" (UniqueName: \"kubernetes.io/projected/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-kube-api-access-45b6z\") pod \"community-operators-zsrmq\" (UID: \"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f\") " pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:19 crc kubenswrapper[4813]: I0219 20:32:19.744183 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-utilities\") pod \"community-operators-zsrmq\" (UID: \"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f\") " pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:19 crc kubenswrapper[4813]: I0219 20:32:19.744205 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-catalog-content\") pod \"community-operators-zsrmq\" (UID: \"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f\") " pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:19 crc kubenswrapper[4813]: I0219 20:32:19.846158 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45b6z\" (UniqueName: \"kubernetes.io/projected/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-kube-api-access-45b6z\") pod \"community-operators-zsrmq\" (UID: \"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f\") " pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:19 crc kubenswrapper[4813]: I0219 20:32:19.846355 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-utilities\") pod \"community-operators-zsrmq\" (UID: \"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f\") " pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:19 crc kubenswrapper[4813]: I0219 20:32:19.846383 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-catalog-content\") pod \"community-operators-zsrmq\" (UID: \"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f\") " pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:19 crc kubenswrapper[4813]: I0219 20:32:19.846873 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-utilities\") pod \"community-operators-zsrmq\" (UID: \"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f\") " pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:19 crc kubenswrapper[4813]: I0219 20:32:19.846877 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-catalog-content\") pod \"community-operators-zsrmq\" (UID: \"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f\") " pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:19 crc kubenswrapper[4813]: I0219 20:32:19.869068 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45b6z\" (UniqueName: \"kubernetes.io/projected/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-kube-api-access-45b6z\") pod \"community-operators-zsrmq\" (UID: \"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f\") " pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:19 crc kubenswrapper[4813]: I0219 20:32:19.920500 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:20 crc kubenswrapper[4813]: I0219 20:32:20.515286 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zsrmq"] Feb 19 20:32:20 crc kubenswrapper[4813]: I0219 20:32:20.794925 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zsrmq" event={"ID":"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f","Type":"ContainerStarted","Data":"a90005c7f4c77786028dbc8c62801a2ef9b3c6616a1a21f2714db725acfa13c0"} Feb 19 20:32:20 crc kubenswrapper[4813]: I0219 20:32:20.795279 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zsrmq" event={"ID":"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f","Type":"ContainerStarted","Data":"96ee622427f7d69d32f283943a245f9138182ac10575f3bc3947e63cbff8f561"} Feb 19 20:32:21 crc kubenswrapper[4813]: I0219 20:32:21.806267 4813 generic.go:334] "Generic (PLEG): container finished" podID="6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f" containerID="a90005c7f4c77786028dbc8c62801a2ef9b3c6616a1a21f2714db725acfa13c0" exitCode=0 Feb 19 20:32:21 crc kubenswrapper[4813]: I0219 20:32:21.806325 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zsrmq" event={"ID":"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f","Type":"ContainerDied","Data":"a90005c7f4c77786028dbc8c62801a2ef9b3c6616a1a21f2714db725acfa13c0"} Feb 19 20:32:21 crc kubenswrapper[4813]: I0219 20:32:21.806612 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zsrmq" event={"ID":"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f","Type":"ContainerStarted","Data":"560e726c7a122e9be09cefd52f0943853fce371c3c47d3318cffcf01300e85cd"} Feb 19 20:32:22 crc kubenswrapper[4813]: I0219 20:32:22.471864 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:32:22 crc 
kubenswrapper[4813]: E0219 20:32:22.472141 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:32:23 crc kubenswrapper[4813]: I0219 20:32:23.829717 4813 generic.go:334] "Generic (PLEG): container finished" podID="6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f" containerID="560e726c7a122e9be09cefd52f0943853fce371c3c47d3318cffcf01300e85cd" exitCode=0 Feb 19 20:32:23 crc kubenswrapper[4813]: I0219 20:32:23.829753 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zsrmq" event={"ID":"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f","Type":"ContainerDied","Data":"560e726c7a122e9be09cefd52f0943853fce371c3c47d3318cffcf01300e85cd"} Feb 19 20:32:24 crc kubenswrapper[4813]: I0219 20:32:24.843751 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zsrmq" event={"ID":"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f","Type":"ContainerStarted","Data":"de7e70a1af0c131fa725a0f33c3bd9dfa6c7ae53ca9ac99fe7a60ed00ee863ba"} Feb 19 20:32:24 crc kubenswrapper[4813]: I0219 20:32:24.862106 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zsrmq" podStartSLOduration=2.42737415 podStartE2EDuration="5.862092129s" podCreationTimestamp="2026-02-19 20:32:19 +0000 UTC" firstStartedPulling="2026-02-19 20:32:20.798473034 +0000 UTC m=+7360.023913595" lastFinishedPulling="2026-02-19 20:32:24.233191033 +0000 UTC m=+7363.458631574" observedRunningTime="2026-02-19 20:32:24.861350726 +0000 UTC m=+7364.086791267" watchObservedRunningTime="2026-02-19 20:32:24.862092129 +0000 UTC 
m=+7364.087532670" Feb 19 20:32:29 crc kubenswrapper[4813]: I0219 20:32:29.921422 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:29 crc kubenswrapper[4813]: I0219 20:32:29.922113 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:29 crc kubenswrapper[4813]: I0219 20:32:29.968114 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:30 crc kubenswrapper[4813]: I0219 20:32:30.979928 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:31 crc kubenswrapper[4813]: I0219 20:32:31.036127 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zsrmq"] Feb 19 20:32:32 crc kubenswrapper[4813]: I0219 20:32:32.935357 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zsrmq" podUID="6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f" containerName="registry-server" containerID="cri-o://de7e70a1af0c131fa725a0f33c3bd9dfa6c7ae53ca9ac99fe7a60ed00ee863ba" gracePeriod=2 Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.487045 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.561856 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-utilities\") pod \"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f\" (UID: \"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f\") " Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.561915 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45b6z\" (UniqueName: \"kubernetes.io/projected/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-kube-api-access-45b6z\") pod \"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f\" (UID: \"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f\") " Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.562292 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-catalog-content\") pod \"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f\" (UID: \"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f\") " Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.563002 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-utilities" (OuterVolumeSpecName: "utilities") pod "6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f" (UID: "6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.564017 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.568851 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-kube-api-access-45b6z" (OuterVolumeSpecName: "kube-api-access-45b6z") pod "6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f" (UID: "6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f"). InnerVolumeSpecName "kube-api-access-45b6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.614030 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f" (UID: "6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.665646 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.665877 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45b6z\" (UniqueName: \"kubernetes.io/projected/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f-kube-api-access-45b6z\") on node \"crc\" DevicePath \"\"" Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.947930 4813 generic.go:334] "Generic (PLEG): container finished" podID="6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f" containerID="de7e70a1af0c131fa725a0f33c3bd9dfa6c7ae53ca9ac99fe7a60ed00ee863ba" exitCode=0 Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.948077 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zsrmq" Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.948060 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zsrmq" event={"ID":"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f","Type":"ContainerDied","Data":"de7e70a1af0c131fa725a0f33c3bd9dfa6c7ae53ca9ac99fe7a60ed00ee863ba"} Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.949140 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zsrmq" event={"ID":"6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f","Type":"ContainerDied","Data":"96ee622427f7d69d32f283943a245f9138182ac10575f3bc3947e63cbff8f561"} Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.949170 4813 scope.go:117] "RemoveContainer" containerID="de7e70a1af0c131fa725a0f33c3bd9dfa6c7ae53ca9ac99fe7a60ed00ee863ba" Feb 19 20:32:33 crc kubenswrapper[4813]: I0219 20:32:33.972065 4813 scope.go:117] "RemoveContainer" 
containerID="560e726c7a122e9be09cefd52f0943853fce371c3c47d3318cffcf01300e85cd" Feb 19 20:32:34 crc kubenswrapper[4813]: I0219 20:32:34.004801 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zsrmq"] Feb 19 20:32:34 crc kubenswrapper[4813]: I0219 20:32:34.011286 4813 scope.go:117] "RemoveContainer" containerID="a90005c7f4c77786028dbc8c62801a2ef9b3c6616a1a21f2714db725acfa13c0" Feb 19 20:32:34 crc kubenswrapper[4813]: I0219 20:32:34.021081 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zsrmq"] Feb 19 20:32:34 crc kubenswrapper[4813]: I0219 20:32:34.046886 4813 scope.go:117] "RemoveContainer" containerID="de7e70a1af0c131fa725a0f33c3bd9dfa6c7ae53ca9ac99fe7a60ed00ee863ba" Feb 19 20:32:34 crc kubenswrapper[4813]: E0219 20:32:34.047663 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de7e70a1af0c131fa725a0f33c3bd9dfa6c7ae53ca9ac99fe7a60ed00ee863ba\": container with ID starting with de7e70a1af0c131fa725a0f33c3bd9dfa6c7ae53ca9ac99fe7a60ed00ee863ba not found: ID does not exist" containerID="de7e70a1af0c131fa725a0f33c3bd9dfa6c7ae53ca9ac99fe7a60ed00ee863ba" Feb 19 20:32:34 crc kubenswrapper[4813]: I0219 20:32:34.047846 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de7e70a1af0c131fa725a0f33c3bd9dfa6c7ae53ca9ac99fe7a60ed00ee863ba"} err="failed to get container status \"de7e70a1af0c131fa725a0f33c3bd9dfa6c7ae53ca9ac99fe7a60ed00ee863ba\": rpc error: code = NotFound desc = could not find container \"de7e70a1af0c131fa725a0f33c3bd9dfa6c7ae53ca9ac99fe7a60ed00ee863ba\": container with ID starting with de7e70a1af0c131fa725a0f33c3bd9dfa6c7ae53ca9ac99fe7a60ed00ee863ba not found: ID does not exist" Feb 19 20:32:34 crc kubenswrapper[4813]: I0219 20:32:34.048151 4813 scope.go:117] "RemoveContainer" 
containerID="560e726c7a122e9be09cefd52f0943853fce371c3c47d3318cffcf01300e85cd" Feb 19 20:32:34 crc kubenswrapper[4813]: E0219 20:32:34.048616 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"560e726c7a122e9be09cefd52f0943853fce371c3c47d3318cffcf01300e85cd\": container with ID starting with 560e726c7a122e9be09cefd52f0943853fce371c3c47d3318cffcf01300e85cd not found: ID does not exist" containerID="560e726c7a122e9be09cefd52f0943853fce371c3c47d3318cffcf01300e85cd" Feb 19 20:32:34 crc kubenswrapper[4813]: I0219 20:32:34.048781 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"560e726c7a122e9be09cefd52f0943853fce371c3c47d3318cffcf01300e85cd"} err="failed to get container status \"560e726c7a122e9be09cefd52f0943853fce371c3c47d3318cffcf01300e85cd\": rpc error: code = NotFound desc = could not find container \"560e726c7a122e9be09cefd52f0943853fce371c3c47d3318cffcf01300e85cd\": container with ID starting with 560e726c7a122e9be09cefd52f0943853fce371c3c47d3318cffcf01300e85cd not found: ID does not exist" Feb 19 20:32:34 crc kubenswrapper[4813]: I0219 20:32:34.048938 4813 scope.go:117] "RemoveContainer" containerID="a90005c7f4c77786028dbc8c62801a2ef9b3c6616a1a21f2714db725acfa13c0" Feb 19 20:32:34 crc kubenswrapper[4813]: E0219 20:32:34.049583 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a90005c7f4c77786028dbc8c62801a2ef9b3c6616a1a21f2714db725acfa13c0\": container with ID starting with a90005c7f4c77786028dbc8c62801a2ef9b3c6616a1a21f2714db725acfa13c0 not found: ID does not exist" containerID="a90005c7f4c77786028dbc8c62801a2ef9b3c6616a1a21f2714db725acfa13c0" Feb 19 20:32:34 crc kubenswrapper[4813]: I0219 20:32:34.049640 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a90005c7f4c77786028dbc8c62801a2ef9b3c6616a1a21f2714db725acfa13c0"} err="failed to get container status \"a90005c7f4c77786028dbc8c62801a2ef9b3c6616a1a21f2714db725acfa13c0\": rpc error: code = NotFound desc = could not find container \"a90005c7f4c77786028dbc8c62801a2ef9b3c6616a1a21f2714db725acfa13c0\": container with ID starting with a90005c7f4c77786028dbc8c62801a2ef9b3c6616a1a21f2714db725acfa13c0 not found: ID does not exist" Feb 19 20:32:35 crc kubenswrapper[4813]: I0219 20:32:35.495196 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f" path="/var/lib/kubelet/pods/6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f/volumes" Feb 19 20:32:36 crc kubenswrapper[4813]: I0219 20:32:36.472409 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:32:36 crc kubenswrapper[4813]: E0219 20:32:36.473028 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:32:47 crc kubenswrapper[4813]: I0219 20:32:47.100690 4813 generic.go:334] "Generic (PLEG): container finished" podID="9c759860-2fa4-493c-8b71-796165e62357" containerID="f876f12e25b8193bc76ecb05f63a823f442ce1c397a6a99a30495bde527bdbc9" exitCode=0 Feb 19 20:32:47 crc kubenswrapper[4813]: I0219 20:32:47.100807 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" event={"ID":"9c759860-2fa4-493c-8b71-796165e62357","Type":"ContainerDied","Data":"f876f12e25b8193bc76ecb05f63a823f442ce1c397a6a99a30495bde527bdbc9"} Feb 19 20:32:48 crc 
kubenswrapper[4813]: I0219 20:32:48.611437 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:32:48 crc kubenswrapper[4813]: I0219 20:32:48.724003 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-inventory\") pod \"9c759860-2fa4-493c-8b71-796165e62357\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " Feb 19 20:32:48 crc kubenswrapper[4813]: I0219 20:32:48.724116 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-ssh-key-openstack-cell1\") pod \"9c759860-2fa4-493c-8b71-796165e62357\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " Feb 19 20:32:48 crc kubenswrapper[4813]: I0219 20:32:48.724322 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wds6\" (UniqueName: \"kubernetes.io/projected/9c759860-2fa4-493c-8b71-796165e62357-kube-api-access-9wds6\") pod \"9c759860-2fa4-493c-8b71-796165e62357\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " Feb 19 20:32:48 crc kubenswrapper[4813]: I0219 20:32:48.724505 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-ceph\") pod \"9c759860-2fa4-493c-8b71-796165e62357\" (UID: \"9c759860-2fa4-493c-8b71-796165e62357\") " Feb 19 20:32:48 crc kubenswrapper[4813]: I0219 20:32:48.735911 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c759860-2fa4-493c-8b71-796165e62357-kube-api-access-9wds6" (OuterVolumeSpecName: "kube-api-access-9wds6") pod "9c759860-2fa4-493c-8b71-796165e62357" (UID: "9c759860-2fa4-493c-8b71-796165e62357"). 
InnerVolumeSpecName "kube-api-access-9wds6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:32:48 crc kubenswrapper[4813]: I0219 20:32:48.735922 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-ceph" (OuterVolumeSpecName: "ceph") pod "9c759860-2fa4-493c-8b71-796165e62357" (UID: "9c759860-2fa4-493c-8b71-796165e62357"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:32:48 crc kubenswrapper[4813]: I0219 20:32:48.771791 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "9c759860-2fa4-493c-8b71-796165e62357" (UID: "9c759860-2fa4-493c-8b71-796165e62357"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:32:48 crc kubenswrapper[4813]: I0219 20:32:48.783100 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-inventory" (OuterVolumeSpecName: "inventory") pod "9c759860-2fa4-493c-8b71-796165e62357" (UID: "9c759860-2fa4-493c-8b71-796165e62357"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:32:48 crc kubenswrapper[4813]: I0219 20:32:48.826608 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:32:48 crc kubenswrapper[4813]: I0219 20:32:48.826648 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:32:48 crc kubenswrapper[4813]: I0219 20:32:48.826664 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wds6\" (UniqueName: \"kubernetes.io/projected/9c759860-2fa4-493c-8b71-796165e62357-kube-api-access-9wds6\") on node \"crc\" DevicePath \"\"" Feb 19 20:32:48 crc kubenswrapper[4813]: I0219 20:32:48.826676 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c759860-2fa4-493c-8b71-796165e62357-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.126896 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" event={"ID":"9c759860-2fa4-493c-8b71-796165e62357","Type":"ContainerDied","Data":"edaff71aff9b761a9f8e474a18b5b7baa7f89dc31ad9e54044543283f12cf63c"} Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.127387 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edaff71aff9b761a9f8e474a18b5b7baa7f89dc31ad9e54044543283f12cf63c" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.127137 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-4gdmm" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.235213 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-nb7bx"] Feb 19 20:32:49 crc kubenswrapper[4813]: E0219 20:32:49.236131 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c759860-2fa4-493c-8b71-796165e62357" containerName="configure-os-openstack-openstack-cell1" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.236162 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c759860-2fa4-493c-8b71-796165e62357" containerName="configure-os-openstack-openstack-cell1" Feb 19 20:32:49 crc kubenswrapper[4813]: E0219 20:32:49.236211 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f" containerName="registry-server" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.236222 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f" containerName="registry-server" Feb 19 20:32:49 crc kubenswrapper[4813]: E0219 20:32:49.236239 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f" containerName="extract-utilities" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.236249 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f" containerName="extract-utilities" Feb 19 20:32:49 crc kubenswrapper[4813]: E0219 20:32:49.236265 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f" containerName="extract-content" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.236276 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f" containerName="extract-content" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.236757 4813 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="6029f0f4-98a3-4e66-b40c-a2c7c5b23d9f" containerName="registry-server" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.236809 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c759860-2fa4-493c-8b71-796165e62357" containerName="configure-os-openstack-openstack-cell1" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.238299 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.240742 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.240887 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.241012 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.246009 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.250485 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-nb7bx"] Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.341550 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6268\" (UniqueName: \"kubernetes.io/projected/e4b63a21-1342-4837-b59f-9f0315daa673-kube-api-access-h6268\") pod \"ssh-known-hosts-openstack-nb7bx\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.341663 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-ceph\") pod \"ssh-known-hosts-openstack-nb7bx\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.341745 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-nb7bx\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.341769 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-inventory-0\") pod \"ssh-known-hosts-openstack-nb7bx\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.445191 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-inventory-0\") pod \"ssh-known-hosts-openstack-nb7bx\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.445299 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6268\" (UniqueName: \"kubernetes.io/projected/e4b63a21-1342-4837-b59f-9f0315daa673-kube-api-access-h6268\") pod \"ssh-known-hosts-openstack-nb7bx\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.445376 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-ceph\") pod \"ssh-known-hosts-openstack-nb7bx\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.445458 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-nb7bx\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.465523 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-ceph\") pod \"ssh-known-hosts-openstack-nb7bx\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.466387 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-nb7bx\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.468542 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6268\" (UniqueName: \"kubernetes.io/projected/e4b63a21-1342-4837-b59f-9f0315daa673-kube-api-access-h6268\") pod \"ssh-known-hosts-openstack-nb7bx\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.473617 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: 
\"kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-inventory-0\") pod \"ssh-known-hosts-openstack-nb7bx\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.476071 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:32:49 crc kubenswrapper[4813]: E0219 20:32:49.476457 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:32:49 crc kubenswrapper[4813]: I0219 20:32:49.581439 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:32:50 crc kubenswrapper[4813]: I0219 20:32:50.128809 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-nb7bx"] Feb 19 20:32:50 crc kubenswrapper[4813]: W0219 20:32:50.140276 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4b63a21_1342_4837_b59f_9f0315daa673.slice/crio-c5fafccae9fd92718b0081b8bbc9fd5e2fc935e2d2b49944905fa1ff5c8c3287 WatchSource:0}: Error finding container c5fafccae9fd92718b0081b8bbc9fd5e2fc935e2d2b49944905fa1ff5c8c3287: Status 404 returned error can't find the container with id c5fafccae9fd92718b0081b8bbc9fd5e2fc935e2d2b49944905fa1ff5c8c3287 Feb 19 20:32:50 crc kubenswrapper[4813]: I0219 20:32:50.143920 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:32:51 crc kubenswrapper[4813]: I0219 
20:32:51.149311 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-nb7bx" event={"ID":"e4b63a21-1342-4837-b59f-9f0315daa673","Type":"ContainerStarted","Data":"ed7f47ed180f7c26b1c9f4706d435774bf63a65fa3569e84b7608781348e9377"} Feb 19 20:32:51 crc kubenswrapper[4813]: I0219 20:32:51.149856 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-nb7bx" event={"ID":"e4b63a21-1342-4837-b59f-9f0315daa673","Type":"ContainerStarted","Data":"c5fafccae9fd92718b0081b8bbc9fd5e2fc935e2d2b49944905fa1ff5c8c3287"} Feb 19 20:32:51 crc kubenswrapper[4813]: I0219 20:32:51.180879 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-nb7bx" podStartSLOduration=1.7566822439999998 podStartE2EDuration="2.180861031s" podCreationTimestamp="2026-02-19 20:32:49 +0000 UTC" firstStartedPulling="2026-02-19 20:32:50.143624187 +0000 UTC m=+7389.369064748" lastFinishedPulling="2026-02-19 20:32:50.567802954 +0000 UTC m=+7389.793243535" observedRunningTime="2026-02-19 20:32:51.171345266 +0000 UTC m=+7390.396785807" watchObservedRunningTime="2026-02-19 20:32:51.180861031 +0000 UTC m=+7390.406301562" Feb 19 20:33:00 crc kubenswrapper[4813]: I0219 20:33:00.258297 4813 generic.go:334] "Generic (PLEG): container finished" podID="e4b63a21-1342-4837-b59f-9f0315daa673" containerID="ed7f47ed180f7c26b1c9f4706d435774bf63a65fa3569e84b7608781348e9377" exitCode=0 Feb 19 20:33:00 crc kubenswrapper[4813]: I0219 20:33:00.258359 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-nb7bx" event={"ID":"e4b63a21-1342-4837-b59f-9f0315daa673","Type":"ContainerDied","Data":"ed7f47ed180f7c26b1c9f4706d435774bf63a65fa3569e84b7608781348e9377"} Feb 19 20:33:01 crc kubenswrapper[4813]: I0219 20:33:01.816736 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:33:01 crc kubenswrapper[4813]: I0219 20:33:01.838981 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-inventory-0\") pod \"e4b63a21-1342-4837-b59f-9f0315daa673\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " Feb 19 20:33:01 crc kubenswrapper[4813]: I0219 20:33:01.839055 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6268\" (UniqueName: \"kubernetes.io/projected/e4b63a21-1342-4837-b59f-9f0315daa673-kube-api-access-h6268\") pod \"e4b63a21-1342-4837-b59f-9f0315daa673\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " Feb 19 20:33:01 crc kubenswrapper[4813]: I0219 20:33:01.839214 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-ssh-key-openstack-cell1\") pod \"e4b63a21-1342-4837-b59f-9f0315daa673\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " Feb 19 20:33:01 crc kubenswrapper[4813]: I0219 20:33:01.839314 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-ceph\") pod \"e4b63a21-1342-4837-b59f-9f0315daa673\" (UID: \"e4b63a21-1342-4837-b59f-9f0315daa673\") " Feb 19 20:33:01 crc kubenswrapper[4813]: I0219 20:33:01.847018 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4b63a21-1342-4837-b59f-9f0315daa673-kube-api-access-h6268" (OuterVolumeSpecName: "kube-api-access-h6268") pod "e4b63a21-1342-4837-b59f-9f0315daa673" (UID: "e4b63a21-1342-4837-b59f-9f0315daa673"). InnerVolumeSpecName "kube-api-access-h6268". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:33:01 crc kubenswrapper[4813]: I0219 20:33:01.848538 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-ceph" (OuterVolumeSpecName: "ceph") pod "e4b63a21-1342-4837-b59f-9f0315daa673" (UID: "e4b63a21-1342-4837-b59f-9f0315daa673"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:33:01 crc kubenswrapper[4813]: I0219 20:33:01.880242 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e4b63a21-1342-4837-b59f-9f0315daa673" (UID: "e4b63a21-1342-4837-b59f-9f0315daa673"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:33:01 crc kubenswrapper[4813]: I0219 20:33:01.886306 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "e4b63a21-1342-4837-b59f-9f0315daa673" (UID: "e4b63a21-1342-4837-b59f-9f0315daa673"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:33:01 crc kubenswrapper[4813]: I0219 20:33:01.941266 4813 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:33:01 crc kubenswrapper[4813]: I0219 20:33:01.941304 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6268\" (UniqueName: \"kubernetes.io/projected/e4b63a21-1342-4837-b59f-9f0315daa673-kube-api-access-h6268\") on node \"crc\" DevicePath \"\"" Feb 19 20:33:01 crc kubenswrapper[4813]: I0219 20:33:01.941318 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:33:01 crc kubenswrapper[4813]: I0219 20:33:01.941330 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e4b63a21-1342-4837-b59f-9f0315daa673-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.289863 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-nb7bx" event={"ID":"e4b63a21-1342-4837-b59f-9f0315daa673","Type":"ContainerDied","Data":"c5fafccae9fd92718b0081b8bbc9fd5e2fc935e2d2b49944905fa1ff5c8c3287"} Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.289936 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-nb7bx" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.289991 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5fafccae9fd92718b0081b8bbc9fd5e2fc935e2d2b49944905fa1ff5c8c3287" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.393929 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-2b87h"] Feb 19 20:33:02 crc kubenswrapper[4813]: E0219 20:33:02.394527 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b63a21-1342-4837-b59f-9f0315daa673" containerName="ssh-known-hosts-openstack" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.394549 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b63a21-1342-4837-b59f-9f0315daa673" containerName="ssh-known-hosts-openstack" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.394854 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4b63a21-1342-4837-b59f-9f0315daa673" containerName="ssh-known-hosts-openstack" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.396010 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.400365 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.400452 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.400662 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.400752 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.405606 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-2b87h"] Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.459978 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlbw7\" (UniqueName: \"kubernetes.io/projected/356733da-da64-42e7-931f-e8e5627d52cf-kube-api-access-qlbw7\") pod \"run-os-openstack-openstack-cell1-2b87h\" (UID: \"356733da-da64-42e7-931f-e8e5627d52cf\") " pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.460065 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-inventory\") pod \"run-os-openstack-openstack-cell1-2b87h\" (UID: \"356733da-da64-42e7-931f-e8e5627d52cf\") " pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.460121 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-ceph\") pod \"run-os-openstack-openstack-cell1-2b87h\" (UID: \"356733da-da64-42e7-931f-e8e5627d52cf\") " pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.460145 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-2b87h\" (UID: \"356733da-da64-42e7-931f-e8e5627d52cf\") " pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.471896 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.561841 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlbw7\" (UniqueName: \"kubernetes.io/projected/356733da-da64-42e7-931f-e8e5627d52cf-kube-api-access-qlbw7\") pod \"run-os-openstack-openstack-cell1-2b87h\" (UID: \"356733da-da64-42e7-931f-e8e5627d52cf\") " pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.562231 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-inventory\") pod \"run-os-openstack-openstack-cell1-2b87h\" (UID: \"356733da-da64-42e7-931f-e8e5627d52cf\") " pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.562272 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-ceph\") pod \"run-os-openstack-openstack-cell1-2b87h\" (UID: \"356733da-da64-42e7-931f-e8e5627d52cf\") " 
pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.562299 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-2b87h\" (UID: \"356733da-da64-42e7-931f-e8e5627d52cf\") " pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.566031 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-ceph\") pod \"run-os-openstack-openstack-cell1-2b87h\" (UID: \"356733da-da64-42e7-931f-e8e5627d52cf\") " pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.567159 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-2b87h\" (UID: \"356733da-da64-42e7-931f-e8e5627d52cf\") " pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.567860 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-inventory\") pod \"run-os-openstack-openstack-cell1-2b87h\" (UID: \"356733da-da64-42e7-931f-e8e5627d52cf\") " pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.585460 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlbw7\" (UniqueName: \"kubernetes.io/projected/356733da-da64-42e7-931f-e8e5627d52cf-kube-api-access-qlbw7\") pod \"run-os-openstack-openstack-cell1-2b87h\" (UID: 
\"356733da-da64-42e7-931f-e8e5627d52cf\") " pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:02 crc kubenswrapper[4813]: I0219 20:33:02.722577 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:03 crc kubenswrapper[4813]: I0219 20:33:03.256335 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-2b87h"] Feb 19 20:33:03 crc kubenswrapper[4813]: I0219 20:33:03.299505 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-2b87h" event={"ID":"356733da-da64-42e7-931f-e8e5627d52cf","Type":"ContainerStarted","Data":"a4ab432bd8f5de1d2c6533620b5874b2ffae068534a66e06bd765ed184838d72"} Feb 19 20:33:03 crc kubenswrapper[4813]: I0219 20:33:03.304923 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"93a1d84ffde757766624d5696e6c0d7a5731d522961bfffd46a28316fabd873d"} Feb 19 20:33:04 crc kubenswrapper[4813]: I0219 20:33:04.315360 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-2b87h" event={"ID":"356733da-da64-42e7-931f-e8e5627d52cf","Type":"ContainerStarted","Data":"c066820fb93a32ae8cb75b6db2084fdd0936b66f8a5d2f67e1c7696c9cfa44e0"} Feb 19 20:33:04 crc kubenswrapper[4813]: I0219 20:33:04.339135 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-2b87h" podStartSLOduration=1.904381013 podStartE2EDuration="2.339116456s" podCreationTimestamp="2026-02-19 20:33:02 +0000 UTC" firstStartedPulling="2026-02-19 20:33:03.259974127 +0000 UTC m=+7402.485414668" lastFinishedPulling="2026-02-19 20:33:03.69470957 +0000 UTC m=+7402.920150111" observedRunningTime="2026-02-19 20:33:04.333705949 +0000 UTC 
m=+7403.559146520" watchObservedRunningTime="2026-02-19 20:33:04.339116456 +0000 UTC m=+7403.564556997" Feb 19 20:33:14 crc kubenswrapper[4813]: I0219 20:33:14.418843 4813 generic.go:334] "Generic (PLEG): container finished" podID="356733da-da64-42e7-931f-e8e5627d52cf" containerID="c066820fb93a32ae8cb75b6db2084fdd0936b66f8a5d2f67e1c7696c9cfa44e0" exitCode=0 Feb 19 20:33:14 crc kubenswrapper[4813]: I0219 20:33:14.418908 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-2b87h" event={"ID":"356733da-da64-42e7-931f-e8e5627d52cf","Type":"ContainerDied","Data":"c066820fb93a32ae8cb75b6db2084fdd0936b66f8a5d2f67e1c7696c9cfa44e0"} Feb 19 20:33:15 crc kubenswrapper[4813]: I0219 20:33:15.880553 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:15 crc kubenswrapper[4813]: I0219 20:33:15.913411 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlbw7\" (UniqueName: \"kubernetes.io/projected/356733da-da64-42e7-931f-e8e5627d52cf-kube-api-access-qlbw7\") pod \"356733da-da64-42e7-931f-e8e5627d52cf\" (UID: \"356733da-da64-42e7-931f-e8e5627d52cf\") " Feb 19 20:33:15 crc kubenswrapper[4813]: I0219 20:33:15.913564 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-inventory\") pod \"356733da-da64-42e7-931f-e8e5627d52cf\" (UID: \"356733da-da64-42e7-931f-e8e5627d52cf\") " Feb 19 20:33:15 crc kubenswrapper[4813]: I0219 20:33:15.913651 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-ssh-key-openstack-cell1\") pod \"356733da-da64-42e7-931f-e8e5627d52cf\" (UID: \"356733da-da64-42e7-931f-e8e5627d52cf\") " Feb 19 20:33:15 crc 
kubenswrapper[4813]: I0219 20:33:15.913726 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-ceph\") pod \"356733da-da64-42e7-931f-e8e5627d52cf\" (UID: \"356733da-da64-42e7-931f-e8e5627d52cf\") " Feb 19 20:33:15 crc kubenswrapper[4813]: I0219 20:33:15.921125 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-ceph" (OuterVolumeSpecName: "ceph") pod "356733da-da64-42e7-931f-e8e5627d52cf" (UID: "356733da-da64-42e7-931f-e8e5627d52cf"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:33:15 crc kubenswrapper[4813]: I0219 20:33:15.921621 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/356733da-da64-42e7-931f-e8e5627d52cf-kube-api-access-qlbw7" (OuterVolumeSpecName: "kube-api-access-qlbw7") pod "356733da-da64-42e7-931f-e8e5627d52cf" (UID: "356733da-da64-42e7-931f-e8e5627d52cf"). InnerVolumeSpecName "kube-api-access-qlbw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:33:15 crc kubenswrapper[4813]: I0219 20:33:15.946591 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "356733da-da64-42e7-931f-e8e5627d52cf" (UID: "356733da-da64-42e7-931f-e8e5627d52cf"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:33:15 crc kubenswrapper[4813]: I0219 20:33:15.950759 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-inventory" (OuterVolumeSpecName: "inventory") pod "356733da-da64-42e7-931f-e8e5627d52cf" (UID: "356733da-da64-42e7-931f-e8e5627d52cf"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.018331 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.018368 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.018386 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/356733da-da64-42e7-931f-e8e5627d52cf-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.018397 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlbw7\" (UniqueName: \"kubernetes.io/projected/356733da-da64-42e7-931f-e8e5627d52cf-kube-api-access-qlbw7\") on node \"crc\" DevicePath \"\"" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.449512 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-2b87h" event={"ID":"356733da-da64-42e7-931f-e8e5627d52cf","Type":"ContainerDied","Data":"a4ab432bd8f5de1d2c6533620b5874b2ffae068534a66e06bd765ed184838d72"} Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.449591 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4ab432bd8f5de1d2c6533620b5874b2ffae068534a66e06bd765ed184838d72" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.449698 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-2b87h" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.535173 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-mz7hx"] Feb 19 20:33:16 crc kubenswrapper[4813]: E0219 20:33:16.535841 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="356733da-da64-42e7-931f-e8e5627d52cf" containerName="run-os-openstack-openstack-cell1" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.535870 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="356733da-da64-42e7-931f-e8e5627d52cf" containerName="run-os-openstack-openstack-cell1" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.536315 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="356733da-da64-42e7-931f-e8e5627d52cf" containerName="run-os-openstack-openstack-cell1" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.537520 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.540842 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.541128 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.541298 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.541447 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.552565 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-mz7hx"] Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.632004 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22hx7\" (UniqueName: \"kubernetes.io/projected/7362face-9041-4faa-b26e-ac77330fdf8a-kube-api-access-22hx7\") pod \"reboot-os-openstack-openstack-cell1-mz7hx\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.632197 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-inventory\") pod \"reboot-os-openstack-openstack-cell1-mz7hx\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.632298 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-mz7hx\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.632549 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-ceph\") pod \"reboot-os-openstack-openstack-cell1-mz7hx\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.735186 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-inventory\") pod \"reboot-os-openstack-openstack-cell1-mz7hx\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.735320 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-mz7hx\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.735367 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-ceph\") pod \"reboot-os-openstack-openstack-cell1-mz7hx\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.735543 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22hx7\" (UniqueName: \"kubernetes.io/projected/7362face-9041-4faa-b26e-ac77330fdf8a-kube-api-access-22hx7\") pod \"reboot-os-openstack-openstack-cell1-mz7hx\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.742265 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-inventory\") pod \"reboot-os-openstack-openstack-cell1-mz7hx\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.742311 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-ceph\") pod \"reboot-os-openstack-openstack-cell1-mz7hx\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.743098 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-mz7hx\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:16 crc kubenswrapper[4813]: I0219 20:33:16.757343 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22hx7\" (UniqueName: \"kubernetes.io/projected/7362face-9041-4faa-b26e-ac77330fdf8a-kube-api-access-22hx7\") pod \"reboot-os-openstack-openstack-cell1-mz7hx\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:16 
crc kubenswrapper[4813]: I0219 20:33:16.859111 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:17 crc kubenswrapper[4813]: I0219 20:33:17.387025 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-mz7hx"] Feb 19 20:33:17 crc kubenswrapper[4813]: W0219 20:33:17.390898 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7362face_9041_4faa_b26e_ac77330fdf8a.slice/crio-09d391ad235379f959b73ab6c40a6c3f9eb92224ab3c9dcb44fa7a132042d024 WatchSource:0}: Error finding container 09d391ad235379f959b73ab6c40a6c3f9eb92224ab3c9dcb44fa7a132042d024: Status 404 returned error can't find the container with id 09d391ad235379f959b73ab6c40a6c3f9eb92224ab3c9dcb44fa7a132042d024 Feb 19 20:33:17 crc kubenswrapper[4813]: I0219 20:33:17.460160 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" event={"ID":"7362face-9041-4faa-b26e-ac77330fdf8a","Type":"ContainerStarted","Data":"09d391ad235379f959b73ab6c40a6c3f9eb92224ab3c9dcb44fa7a132042d024"} Feb 19 20:33:18 crc kubenswrapper[4813]: I0219 20:33:18.470087 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" event={"ID":"7362face-9041-4faa-b26e-ac77330fdf8a","Type":"ContainerStarted","Data":"0d9e5862cd87b109f46e505f71f229feb5f4c1cac0073e9c36186e03883c8660"} Feb 19 20:33:18 crc kubenswrapper[4813]: I0219 20:33:18.493337 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" podStartSLOduration=2.031301994 podStartE2EDuration="2.49331371s" podCreationTimestamp="2026-02-19 20:33:16 +0000 UTC" firstStartedPulling="2026-02-19 20:33:17.396188185 +0000 UTC m=+7416.621628766" lastFinishedPulling="2026-02-19 20:33:17.858199941 
+0000 UTC m=+7417.083640482" observedRunningTime="2026-02-19 20:33:18.490987078 +0000 UTC m=+7417.716427629" watchObservedRunningTime="2026-02-19 20:33:18.49331371 +0000 UTC m=+7417.718754251" Feb 19 20:33:35 crc kubenswrapper[4813]: I0219 20:33:35.651286 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" event={"ID":"7362face-9041-4faa-b26e-ac77330fdf8a","Type":"ContainerDied","Data":"0d9e5862cd87b109f46e505f71f229feb5f4c1cac0073e9c36186e03883c8660"} Feb 19 20:33:35 crc kubenswrapper[4813]: I0219 20:33:35.651231 4813 generic.go:334] "Generic (PLEG): container finished" podID="7362face-9041-4faa-b26e-ac77330fdf8a" containerID="0d9e5862cd87b109f46e505f71f229feb5f4c1cac0073e9c36186e03883c8660" exitCode=0 Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.175666 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.286749 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22hx7\" (UniqueName: \"kubernetes.io/projected/7362face-9041-4faa-b26e-ac77330fdf8a-kube-api-access-22hx7\") pod \"7362face-9041-4faa-b26e-ac77330fdf8a\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.286839 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-ceph\") pod \"7362face-9041-4faa-b26e-ac77330fdf8a\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.286970 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-ssh-key-openstack-cell1\") pod \"7362face-9041-4faa-b26e-ac77330fdf8a\" 
(UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.287048 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-inventory\") pod \"7362face-9041-4faa-b26e-ac77330fdf8a\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.292029 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-ceph" (OuterVolumeSpecName: "ceph") pod "7362face-9041-4faa-b26e-ac77330fdf8a" (UID: "7362face-9041-4faa-b26e-ac77330fdf8a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.292658 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7362face-9041-4faa-b26e-ac77330fdf8a-kube-api-access-22hx7" (OuterVolumeSpecName: "kube-api-access-22hx7") pod "7362face-9041-4faa-b26e-ac77330fdf8a" (UID: "7362face-9041-4faa-b26e-ac77330fdf8a"). InnerVolumeSpecName "kube-api-access-22hx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:33:37 crc kubenswrapper[4813]: E0219 20:33:37.317360 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-inventory podName:7362face-9041-4faa-b26e-ac77330fdf8a nodeName:}" failed. No retries permitted until 2026-02-19 20:33:37.817328971 +0000 UTC m=+7437.042769512 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-inventory") pod "7362face-9041-4faa-b26e-ac77330fdf8a" (UID: "7362face-9041-4faa-b26e-ac77330fdf8a") : error deleting /var/lib/kubelet/pods/7362face-9041-4faa-b26e-ac77330fdf8a/volume-subpaths: remove /var/lib/kubelet/pods/7362face-9041-4faa-b26e-ac77330fdf8a/volume-subpaths: no such file or directory Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.320420 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "7362face-9041-4faa-b26e-ac77330fdf8a" (UID: "7362face-9041-4faa-b26e-ac77330fdf8a"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.390440 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22hx7\" (UniqueName: \"kubernetes.io/projected/7362face-9041-4faa-b26e-ac77330fdf8a-kube-api-access-22hx7\") on node \"crc\" DevicePath \"\"" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.390507 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.390536 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.676177 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" 
event={"ID":"7362face-9041-4faa-b26e-ac77330fdf8a","Type":"ContainerDied","Data":"09d391ad235379f959b73ab6c40a6c3f9eb92224ab3c9dcb44fa7a132042d024"} Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.676641 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09d391ad235379f959b73ab6c40a6c3f9eb92224ab3c9dcb44fa7a132042d024" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.676298 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-mz7hx" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.784656 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-kn9xd"] Feb 19 20:33:37 crc kubenswrapper[4813]: E0219 20:33:37.785124 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7362face-9041-4faa-b26e-ac77330fdf8a" containerName="reboot-os-openstack-openstack-cell1" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.785139 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7362face-9041-4faa-b26e-ac77330fdf8a" containerName="reboot-os-openstack-openstack-cell1" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.785358 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7362face-9041-4faa-b26e-ac77330fdf8a" containerName="reboot-os-openstack-openstack-cell1" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.786170 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.804016 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-kn9xd"] Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.902258 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-inventory\") pod \"7362face-9041-4faa-b26e-ac77330fdf8a\" (UID: \"7362face-9041-4faa-b26e-ac77330fdf8a\") " Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.902622 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ceph\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.902666 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.902690 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:37 crc 
kubenswrapper[4813]: I0219 20:33:37.902726 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.902818 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.902922 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.902986 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.903041 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.903082 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dx9f\" (UniqueName: \"kubernetes.io/projected/6e11569d-f74e-4d49-8046-59b3a85b2a20-kube-api-access-8dx9f\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.903116 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.903192 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-inventory\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.903245 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ssh-key-openstack-cell1\") pod 
\"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:37 crc kubenswrapper[4813]: I0219 20:33:37.906214 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-inventory" (OuterVolumeSpecName: "inventory") pod "7362face-9041-4faa-b26e-ac77330fdf8a" (UID: "7362face-9041-4faa-b26e-ac77330fdf8a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.005419 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dx9f\" (UniqueName: \"kubernetes.io/projected/6e11569d-f74e-4d49-8046-59b3a85b2a20-kube-api-access-8dx9f\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.005462 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.005512 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-inventory\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.005544 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.005581 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ceph\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.005603 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.005621 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.005653 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: 
\"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.005680 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.006554 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.006594 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.006643 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.006727 4813 reconciler_common.go:293] "Volume detached 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7362face-9041-4faa-b26e-ac77330fdf8a-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.009899 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.010194 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.010685 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ceph\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.010750 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.011089 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.011111 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.011597 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-inventory\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.011912 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.013329 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " 
pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.014929 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.015033 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.021664 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dx9f\" (UniqueName: \"kubernetes.io/projected/6e11569d-f74e-4d49-8046-59b3a85b2a20-kube-api-access-8dx9f\") pod \"install-certs-openstack-openstack-cell1-kn9xd\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.106852 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:33:38 crc kubenswrapper[4813]: I0219 20:33:38.693542 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-kn9xd"] Feb 19 20:33:38 crc kubenswrapper[4813]: W0219 20:33:38.699983 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e11569d_f74e_4d49_8046_59b3a85b2a20.slice/crio-857dde089c58ddaeba6457da89c747fe054da50aecc75e297b672195324af84a WatchSource:0}: Error finding container 857dde089c58ddaeba6457da89c747fe054da50aecc75e297b672195324af84a: Status 404 returned error can't find the container with id 857dde089c58ddaeba6457da89c747fe054da50aecc75e297b672195324af84a Feb 19 20:33:39 crc kubenswrapper[4813]: I0219 20:33:39.724608 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" event={"ID":"6e11569d-f74e-4d49-8046-59b3a85b2a20","Type":"ContainerStarted","Data":"824ca87ff9b7798d7729a87e06d9e3cf9ad381da1e56faf8d926ed79faf48bb0"} Feb 19 20:33:39 crc kubenswrapper[4813]: I0219 20:33:39.724670 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" event={"ID":"6e11569d-f74e-4d49-8046-59b3a85b2a20","Type":"ContainerStarted","Data":"857dde089c58ddaeba6457da89c747fe054da50aecc75e297b672195324af84a"} Feb 19 20:33:39 crc kubenswrapper[4813]: I0219 20:33:39.750974 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" podStartSLOduration=2.116114882 podStartE2EDuration="2.750934562s" podCreationTimestamp="2026-02-19 20:33:37 +0000 UTC" firstStartedPulling="2026-02-19 20:33:38.706810256 +0000 UTC m=+7437.932250807" lastFinishedPulling="2026-02-19 20:33:39.341629936 +0000 UTC m=+7438.567070487" observedRunningTime="2026-02-19 
20:33:39.740410387 +0000 UTC m=+7438.965850928" watchObservedRunningTime="2026-02-19 20:33:39.750934562 +0000 UTC m=+7438.976375103" Feb 19 20:33:59 crc kubenswrapper[4813]: I0219 20:33:59.963749 4813 generic.go:334] "Generic (PLEG): container finished" podID="6e11569d-f74e-4d49-8046-59b3a85b2a20" containerID="824ca87ff9b7798d7729a87e06d9e3cf9ad381da1e56faf8d926ed79faf48bb0" exitCode=0 Feb 19 20:33:59 crc kubenswrapper[4813]: I0219 20:33:59.963859 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" event={"ID":"6e11569d-f74e-4d49-8046-59b3a85b2a20","Type":"ContainerDied","Data":"824ca87ff9b7798d7729a87e06d9e3cf9ad381da1e56faf8d926ed79faf48bb0"} Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.506630 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.590563 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-libvirt-combined-ca-bundle\") pod \"6e11569d-f74e-4d49-8046-59b3a85b2a20\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.590611 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-bootstrap-combined-ca-bundle\") pod \"6e11569d-f74e-4d49-8046-59b3a85b2a20\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.590756 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ceph\") pod \"6e11569d-f74e-4d49-8046-59b3a85b2a20\" (UID: 
\"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.590832 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-sriov-combined-ca-bundle\") pod \"6e11569d-f74e-4d49-8046-59b3a85b2a20\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.590856 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-metadata-combined-ca-bundle\") pod \"6e11569d-f74e-4d49-8046-59b3a85b2a20\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.590945 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ssh-key-openstack-cell1\") pod \"6e11569d-f74e-4d49-8046-59b3a85b2a20\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.591069 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ovn-combined-ca-bundle\") pod \"6e11569d-f74e-4d49-8046-59b3a85b2a20\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.591142 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-dhcp-combined-ca-bundle\") pod \"6e11569d-f74e-4d49-8046-59b3a85b2a20\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 
20:34:01.591204 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-nova-combined-ca-bundle\") pod \"6e11569d-f74e-4d49-8046-59b3a85b2a20\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.591282 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dx9f\" (UniqueName: \"kubernetes.io/projected/6e11569d-f74e-4d49-8046-59b3a85b2a20-kube-api-access-8dx9f\") pod \"6e11569d-f74e-4d49-8046-59b3a85b2a20\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.591314 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-inventory\") pod \"6e11569d-f74e-4d49-8046-59b3a85b2a20\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.591381 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-telemetry-combined-ca-bundle\") pod \"6e11569d-f74e-4d49-8046-59b3a85b2a20\" (UID: \"6e11569d-f74e-4d49-8046-59b3a85b2a20\") " Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.598242 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "6e11569d-f74e-4d49-8046-59b3a85b2a20" (UID: "6e11569d-f74e-4d49-8046-59b3a85b2a20"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.598387 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "6e11569d-f74e-4d49-8046-59b3a85b2a20" (UID: "6e11569d-f74e-4d49-8046-59b3a85b2a20"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.599250 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "6e11569d-f74e-4d49-8046-59b3a85b2a20" (UID: "6e11569d-f74e-4d49-8046-59b3a85b2a20"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.599304 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ceph" (OuterVolumeSpecName: "ceph") pod "6e11569d-f74e-4d49-8046-59b3a85b2a20" (UID: "6e11569d-f74e-4d49-8046-59b3a85b2a20"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.600071 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "6e11569d-f74e-4d49-8046-59b3a85b2a20" (UID: "6e11569d-f74e-4d49-8046-59b3a85b2a20"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.600233 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "6e11569d-f74e-4d49-8046-59b3a85b2a20" (UID: "6e11569d-f74e-4d49-8046-59b3a85b2a20"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.600694 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6e11569d-f74e-4d49-8046-59b3a85b2a20" (UID: "6e11569d-f74e-4d49-8046-59b3a85b2a20"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.600846 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "6e11569d-f74e-4d49-8046-59b3a85b2a20" (UID: "6e11569d-f74e-4d49-8046-59b3a85b2a20"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.601133 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "6e11569d-f74e-4d49-8046-59b3a85b2a20" (UID: "6e11569d-f74e-4d49-8046-59b3a85b2a20"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.601301 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e11569d-f74e-4d49-8046-59b3a85b2a20-kube-api-access-8dx9f" (OuterVolumeSpecName: "kube-api-access-8dx9f") pod "6e11569d-f74e-4d49-8046-59b3a85b2a20" (UID: "6e11569d-f74e-4d49-8046-59b3a85b2a20"). InnerVolumeSpecName "kube-api-access-8dx9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.625508 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-inventory" (OuterVolumeSpecName: "inventory") pod "6e11569d-f74e-4d49-8046-59b3a85b2a20" (UID: "6e11569d-f74e-4d49-8046-59b3a85b2a20"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.627983 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "6e11569d-f74e-4d49-8046-59b3a85b2a20" (UID: "6e11569d-f74e-4d49-8046-59b3a85b2a20"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.694005 4813 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.694041 4813 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.694052 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.694067 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.694077 4813 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.694086 4813 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.694095 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dx9f\" (UniqueName: 
\"kubernetes.io/projected/6e11569d-f74e-4d49-8046-59b3a85b2a20-kube-api-access-8dx9f\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.694103 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.694112 4813 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.694120 4813 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.694128 4813 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.694137 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6e11569d-f74e-4d49-8046-59b3a85b2a20-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.989524 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" event={"ID":"6e11569d-f74e-4d49-8046-59b3a85b2a20","Type":"ContainerDied","Data":"857dde089c58ddaeba6457da89c747fe054da50aecc75e297b672195324af84a"} Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.989568 4813 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="857dde089c58ddaeba6457da89c747fe054da50aecc75e297b672195324af84a" Feb 19 20:34:01 crc kubenswrapper[4813]: I0219 20:34:01.989665 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-kn9xd" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.125117 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-9vtcc"] Feb 19 20:34:02 crc kubenswrapper[4813]: E0219 20:34:02.125882 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e11569d-f74e-4d49-8046-59b3a85b2a20" containerName="install-certs-openstack-openstack-cell1" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.125963 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e11569d-f74e-4d49-8046-59b3a85b2a20" containerName="install-certs-openstack-openstack-cell1" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.126226 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e11569d-f74e-4d49-8046-59b3a85b2a20" containerName="install-certs-openstack-openstack-cell1" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.126934 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.135498 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.135687 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.136085 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.147275 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.152295 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-9vtcc"] Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.306798 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-inventory\") pod \"ceph-client-openstack-openstack-cell1-9vtcc\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.306857 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-ceph\") pod \"ceph-client-openstack-openstack-cell1-9vtcc\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.307097 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" 
(UniqueName: \"kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-ssh-key-openstack-cell1\") pod \"ceph-client-openstack-openstack-cell1-9vtcc\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.307372 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqcfc\" (UniqueName: \"kubernetes.io/projected/1c2fa613-123c-4667-b4d3-0140b6d109d1-kube-api-access-hqcfc\") pod \"ceph-client-openstack-openstack-cell1-9vtcc\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.410157 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqcfc\" (UniqueName: \"kubernetes.io/projected/1c2fa613-123c-4667-b4d3-0140b6d109d1-kube-api-access-hqcfc\") pod \"ceph-client-openstack-openstack-cell1-9vtcc\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.410333 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-inventory\") pod \"ceph-client-openstack-openstack-cell1-9vtcc\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.410387 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-ceph\") pod \"ceph-client-openstack-openstack-cell1-9vtcc\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 
20:34:02.410694 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-ssh-key-openstack-cell1\") pod \"ceph-client-openstack-openstack-cell1-9vtcc\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.414021 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-ceph\") pod \"ceph-client-openstack-openstack-cell1-9vtcc\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.414738 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-ssh-key-openstack-cell1\") pod \"ceph-client-openstack-openstack-cell1-9vtcc\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.415886 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-inventory\") pod \"ceph-client-openstack-openstack-cell1-9vtcc\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.432539 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqcfc\" (UniqueName: \"kubernetes.io/projected/1c2fa613-123c-4667-b4d3-0140b6d109d1-kube-api-access-hqcfc\") pod \"ceph-client-openstack-openstack-cell1-9vtcc\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " 
pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.462237 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:02 crc kubenswrapper[4813]: I0219 20:34:02.974144 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-9vtcc"] Feb 19 20:34:03 crc kubenswrapper[4813]: I0219 20:34:03.002905 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" event={"ID":"1c2fa613-123c-4667-b4d3-0140b6d109d1","Type":"ContainerStarted","Data":"5aa1fd8091c6a09191a29144843ba78e2fcf1bd613999f00e306dcb281fb3aee"} Feb 19 20:34:04 crc kubenswrapper[4813]: I0219 20:34:04.014570 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" event={"ID":"1c2fa613-123c-4667-b4d3-0140b6d109d1","Type":"ContainerStarted","Data":"bd6873bd7e265e89caa9d08176b3f1286616026c53efb65a90f0396d1fdd2d33"} Feb 19 20:34:04 crc kubenswrapper[4813]: I0219 20:34:04.044455 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" podStartSLOduration=1.5937226230000001 podStartE2EDuration="2.04443463s" podCreationTimestamp="2026-02-19 20:34:02 +0000 UTC" firstStartedPulling="2026-02-19 20:34:02.97980428 +0000 UTC m=+7462.205244831" lastFinishedPulling="2026-02-19 20:34:03.430516257 +0000 UTC m=+7462.655956838" observedRunningTime="2026-02-19 20:34:04.03469659 +0000 UTC m=+7463.260137171" watchObservedRunningTime="2026-02-19 20:34:04.04443463 +0000 UTC m=+7463.269875171" Feb 19 20:34:09 crc kubenswrapper[4813]: I0219 20:34:09.069280 4813 generic.go:334] "Generic (PLEG): container finished" podID="1c2fa613-123c-4667-b4d3-0140b6d109d1" containerID="bd6873bd7e265e89caa9d08176b3f1286616026c53efb65a90f0396d1fdd2d33" exitCode=0 
Feb 19 20:34:09 crc kubenswrapper[4813]: I0219 20:34:09.069405 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" event={"ID":"1c2fa613-123c-4667-b4d3-0140b6d109d1","Type":"ContainerDied","Data":"bd6873bd7e265e89caa9d08176b3f1286616026c53efb65a90f0396d1fdd2d33"} Feb 19 20:34:10 crc kubenswrapper[4813]: I0219 20:34:10.524820 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:10 crc kubenswrapper[4813]: I0219 20:34:10.604553 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-inventory\") pod \"1c2fa613-123c-4667-b4d3-0140b6d109d1\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " Feb 19 20:34:10 crc kubenswrapper[4813]: I0219 20:34:10.604615 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqcfc\" (UniqueName: \"kubernetes.io/projected/1c2fa613-123c-4667-b4d3-0140b6d109d1-kube-api-access-hqcfc\") pod \"1c2fa613-123c-4667-b4d3-0140b6d109d1\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " Feb 19 20:34:10 crc kubenswrapper[4813]: I0219 20:34:10.604712 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-ceph\") pod \"1c2fa613-123c-4667-b4d3-0140b6d109d1\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " Feb 19 20:34:10 crc kubenswrapper[4813]: I0219 20:34:10.604846 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-ssh-key-openstack-cell1\") pod \"1c2fa613-123c-4667-b4d3-0140b6d109d1\" (UID: \"1c2fa613-123c-4667-b4d3-0140b6d109d1\") " Feb 19 20:34:10 crc kubenswrapper[4813]: I0219 
20:34:10.611739 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-ceph" (OuterVolumeSpecName: "ceph") pod "1c2fa613-123c-4667-b4d3-0140b6d109d1" (UID: "1c2fa613-123c-4667-b4d3-0140b6d109d1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:34:10 crc kubenswrapper[4813]: I0219 20:34:10.612184 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c2fa613-123c-4667-b4d3-0140b6d109d1-kube-api-access-hqcfc" (OuterVolumeSpecName: "kube-api-access-hqcfc") pod "1c2fa613-123c-4667-b4d3-0140b6d109d1" (UID: "1c2fa613-123c-4667-b4d3-0140b6d109d1"). InnerVolumeSpecName "kube-api-access-hqcfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:34:10 crc kubenswrapper[4813]: I0219 20:34:10.633766 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-inventory" (OuterVolumeSpecName: "inventory") pod "1c2fa613-123c-4667-b4d3-0140b6d109d1" (UID: "1c2fa613-123c-4667-b4d3-0140b6d109d1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:34:10 crc kubenswrapper[4813]: I0219 20:34:10.640209 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "1c2fa613-123c-4667-b4d3-0140b6d109d1" (UID: "1c2fa613-123c-4667-b4d3-0140b6d109d1"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:34:10 crc kubenswrapper[4813]: I0219 20:34:10.708172 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:10 crc kubenswrapper[4813]: I0219 20:34:10.708216 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:10 crc kubenswrapper[4813]: I0219 20:34:10.708231 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqcfc\" (UniqueName: \"kubernetes.io/projected/1c2fa613-123c-4667-b4d3-0140b6d109d1-kube-api-access-hqcfc\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:10 crc kubenswrapper[4813]: I0219 20:34:10.708245 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1c2fa613-123c-4667-b4d3-0140b6d109d1-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.097369 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" event={"ID":"1c2fa613-123c-4667-b4d3-0140b6d109d1","Type":"ContainerDied","Data":"5aa1fd8091c6a09191a29144843ba78e2fcf1bd613999f00e306dcb281fb3aee"} Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.097430 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aa1fd8091c6a09191a29144843ba78e2fcf1bd613999f00e306dcb281fb3aee" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.097440 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-9vtcc" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.205404 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-dcsw4"] Feb 19 20:34:11 crc kubenswrapper[4813]: E0219 20:34:11.205921 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2fa613-123c-4667-b4d3-0140b6d109d1" containerName="ceph-client-openstack-openstack-cell1" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.205938 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2fa613-123c-4667-b4d3-0140b6d109d1" containerName="ceph-client-openstack-openstack-cell1" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.206135 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c2fa613-123c-4667-b4d3-0140b6d109d1" containerName="ceph-client-openstack-openstack-cell1" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.206837 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.208871 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.209069 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.209163 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.213806 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.215905 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.220012 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-dcsw4"] Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.318675 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-inventory\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.318780 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkc6f\" (UniqueName: \"kubernetes.io/projected/9c455b52-d215-4da6-aec6-edf7ea78770b-kube-api-access-pkc6f\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: 
I0219 20:34:11.318923 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9c455b52-d215-4da6-aec6-edf7ea78770b-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.318997 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ceph\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.319031 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.319104 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.421212 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-inventory\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: 
\"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.421330 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkc6f\" (UniqueName: \"kubernetes.io/projected/9c455b52-d215-4da6-aec6-edf7ea78770b-kube-api-access-pkc6f\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.421585 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9c455b52-d215-4da6-aec6-edf7ea78770b-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.421647 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ceph\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.421683 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.421779 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ovn-combined-ca-bundle\") 
pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.424829 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9c455b52-d215-4da6-aec6-edf7ea78770b-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.427227 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.427675 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.429724 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-inventory\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.430326 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ceph\") pod 
\"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.442032 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkc6f\" (UniqueName: \"kubernetes.io/projected/9c455b52-d215-4da6-aec6-edf7ea78770b-kube-api-access-pkc6f\") pod \"ovn-openstack-openstack-cell1-dcsw4\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:11 crc kubenswrapper[4813]: I0219 20:34:11.524544 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:34:12 crc kubenswrapper[4813]: I0219 20:34:12.079004 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-dcsw4"] Feb 19 20:34:12 crc kubenswrapper[4813]: I0219 20:34:12.130421 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-dcsw4" event={"ID":"9c455b52-d215-4da6-aec6-edf7ea78770b","Type":"ContainerStarted","Data":"b66c59248a65c7fdc4de177a0fc2d4f654e7ec9bcf756ccdf7820c297d0aa585"} Feb 19 20:34:14 crc kubenswrapper[4813]: I0219 20:34:14.162545 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-dcsw4" event={"ID":"9c455b52-d215-4da6-aec6-edf7ea78770b","Type":"ContainerStarted","Data":"0642f95bfe5f5de8328b28f484387cfacc82292895a6b63d20fadb4cae967ba9"} Feb 19 20:34:14 crc kubenswrapper[4813]: I0219 20:34:14.197709 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-dcsw4" podStartSLOduration=2.454146305 podStartE2EDuration="3.197685876s" podCreationTimestamp="2026-02-19 20:34:11 +0000 UTC" firstStartedPulling="2026-02-19 20:34:12.083714809 +0000 UTC m=+7471.309155360" lastFinishedPulling="2026-02-19 20:34:12.82725435 
+0000 UTC m=+7472.052694931" observedRunningTime="2026-02-19 20:34:14.185692735 +0000 UTC m=+7473.411133316" watchObservedRunningTime="2026-02-19 20:34:14.197685876 +0000 UTC m=+7473.423126427" Feb 19 20:34:48 crc kubenswrapper[4813]: I0219 20:34:48.972238 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ngtmn"] Feb 19 20:34:48 crc kubenswrapper[4813]: I0219 20:34:48.975738 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:34:48 crc kubenswrapper[4813]: I0219 20:34:48.987941 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngtmn"] Feb 19 20:34:49 crc kubenswrapper[4813]: I0219 20:34:49.082353 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-utilities\") pod \"redhat-marketplace-ngtmn\" (UID: \"4b45ed2a-aecd-4d86-9c24-ab91e46abe38\") " pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:34:49 crc kubenswrapper[4813]: I0219 20:34:49.082441 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j42j6\" (UniqueName: \"kubernetes.io/projected/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-kube-api-access-j42j6\") pod \"redhat-marketplace-ngtmn\" (UID: \"4b45ed2a-aecd-4d86-9c24-ab91e46abe38\") " pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:34:49 crc kubenswrapper[4813]: I0219 20:34:49.083334 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-catalog-content\") pod \"redhat-marketplace-ngtmn\" (UID: \"4b45ed2a-aecd-4d86-9c24-ab91e46abe38\") " pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:34:49 crc 
kubenswrapper[4813]: I0219 20:34:49.186357 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j42j6\" (UniqueName: \"kubernetes.io/projected/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-kube-api-access-j42j6\") pod \"redhat-marketplace-ngtmn\" (UID: \"4b45ed2a-aecd-4d86-9c24-ab91e46abe38\") " pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:34:49 crc kubenswrapper[4813]: I0219 20:34:49.186523 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-catalog-content\") pod \"redhat-marketplace-ngtmn\" (UID: \"4b45ed2a-aecd-4d86-9c24-ab91e46abe38\") " pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:34:49 crc kubenswrapper[4813]: I0219 20:34:49.186730 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-utilities\") pod \"redhat-marketplace-ngtmn\" (UID: \"4b45ed2a-aecd-4d86-9c24-ab91e46abe38\") " pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:34:49 crc kubenswrapper[4813]: I0219 20:34:49.187282 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-catalog-content\") pod \"redhat-marketplace-ngtmn\" (UID: \"4b45ed2a-aecd-4d86-9c24-ab91e46abe38\") " pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:34:49 crc kubenswrapper[4813]: I0219 20:34:49.187318 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-utilities\") pod \"redhat-marketplace-ngtmn\" (UID: \"4b45ed2a-aecd-4d86-9c24-ab91e46abe38\") " pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:34:49 crc kubenswrapper[4813]: I0219 20:34:49.206561 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j42j6\" (UniqueName: \"kubernetes.io/projected/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-kube-api-access-j42j6\") pod \"redhat-marketplace-ngtmn\" (UID: \"4b45ed2a-aecd-4d86-9c24-ab91e46abe38\") " pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:34:49 crc kubenswrapper[4813]: I0219 20:34:49.312423 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:34:49 crc kubenswrapper[4813]: I0219 20:34:49.787711 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngtmn"] Feb 19 20:34:49 crc kubenswrapper[4813]: W0219 20:34:49.792226 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b45ed2a_aecd_4d86_9c24_ab91e46abe38.slice/crio-d333259ab367cf0bc715b318532ba72b7fd9c02116814d270a9107ceb6480fe8 WatchSource:0}: Error finding container d333259ab367cf0bc715b318532ba72b7fd9c02116814d270a9107ceb6480fe8: Status 404 returned error can't find the container with id d333259ab367cf0bc715b318532ba72b7fd9c02116814d270a9107ceb6480fe8 Feb 19 20:34:50 crc kubenswrapper[4813]: I0219 20:34:50.624443 4813 generic.go:334] "Generic (PLEG): container finished" podID="4b45ed2a-aecd-4d86-9c24-ab91e46abe38" containerID="f11e8a92e34c46988b2d88e93f26b37dbb1193f49894236d4709955261ce4bbd" exitCode=0 Feb 19 20:34:50 crc kubenswrapper[4813]: I0219 20:34:50.624549 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngtmn" event={"ID":"4b45ed2a-aecd-4d86-9c24-ab91e46abe38","Type":"ContainerDied","Data":"f11e8a92e34c46988b2d88e93f26b37dbb1193f49894236d4709955261ce4bbd"} Feb 19 20:34:50 crc kubenswrapper[4813]: I0219 20:34:50.624993 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngtmn" 
event={"ID":"4b45ed2a-aecd-4d86-9c24-ab91e46abe38","Type":"ContainerStarted","Data":"d333259ab367cf0bc715b318532ba72b7fd9c02116814d270a9107ceb6480fe8"} Feb 19 20:34:52 crc kubenswrapper[4813]: I0219 20:34:52.653167 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngtmn" event={"ID":"4b45ed2a-aecd-4d86-9c24-ab91e46abe38","Type":"ContainerStarted","Data":"eb2d5a841191acc3b490d7c2b5dbbdf2293b4d6f607b10b6fa995a59c9982cbd"} Feb 19 20:34:54 crc kubenswrapper[4813]: I0219 20:34:54.679346 4813 generic.go:334] "Generic (PLEG): container finished" podID="4b45ed2a-aecd-4d86-9c24-ab91e46abe38" containerID="eb2d5a841191acc3b490d7c2b5dbbdf2293b4d6f607b10b6fa995a59c9982cbd" exitCode=0 Feb 19 20:34:54 crc kubenswrapper[4813]: I0219 20:34:54.679463 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngtmn" event={"ID":"4b45ed2a-aecd-4d86-9c24-ab91e46abe38","Type":"ContainerDied","Data":"eb2d5a841191acc3b490d7c2b5dbbdf2293b4d6f607b10b6fa995a59c9982cbd"} Feb 19 20:34:55 crc kubenswrapper[4813]: I0219 20:34:55.692354 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngtmn" event={"ID":"4b45ed2a-aecd-4d86-9c24-ab91e46abe38","Type":"ContainerStarted","Data":"a02ce48cbf454221a8b77600286ef408747376d08de7fac7b2633921eed819f1"} Feb 19 20:34:55 crc kubenswrapper[4813]: I0219 20:34:55.713010 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ngtmn" podStartSLOduration=3.223090066 podStartE2EDuration="7.712991122s" podCreationTimestamp="2026-02-19 20:34:48 +0000 UTC" firstStartedPulling="2026-02-19 20:34:50.626834919 +0000 UTC m=+7509.852275460" lastFinishedPulling="2026-02-19 20:34:55.116735945 +0000 UTC m=+7514.342176516" observedRunningTime="2026-02-19 20:34:55.712639081 +0000 UTC m=+7514.938079632" watchObservedRunningTime="2026-02-19 20:34:55.712991122 +0000 UTC 
m=+7514.938431673" Feb 19 20:34:59 crc kubenswrapper[4813]: I0219 20:34:59.313109 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:34:59 crc kubenswrapper[4813]: I0219 20:34:59.313560 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:35:00 crc kubenswrapper[4813]: I0219 20:35:00.398505 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-ngtmn" podUID="4b45ed2a-aecd-4d86-9c24-ab91e46abe38" containerName="registry-server" probeResult="failure" output=< Feb 19 20:35:00 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Feb 19 20:35:00 crc kubenswrapper[4813]: > Feb 19 20:35:09 crc kubenswrapper[4813]: I0219 20:35:09.393380 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:35:09 crc kubenswrapper[4813]: I0219 20:35:09.461790 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:35:09 crc kubenswrapper[4813]: I0219 20:35:09.641305 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngtmn"] Feb 19 20:35:10 crc kubenswrapper[4813]: I0219 20:35:10.864022 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ngtmn" podUID="4b45ed2a-aecd-4d86-9c24-ab91e46abe38" containerName="registry-server" containerID="cri-o://a02ce48cbf454221a8b77600286ef408747376d08de7fac7b2633921eed819f1" gracePeriod=2 Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.522165 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.622999 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j42j6\" (UniqueName: \"kubernetes.io/projected/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-kube-api-access-j42j6\") pod \"4b45ed2a-aecd-4d86-9c24-ab91e46abe38\" (UID: \"4b45ed2a-aecd-4d86-9c24-ab91e46abe38\") " Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.623057 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-utilities\") pod \"4b45ed2a-aecd-4d86-9c24-ab91e46abe38\" (UID: \"4b45ed2a-aecd-4d86-9c24-ab91e46abe38\") " Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.623239 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-catalog-content\") pod \"4b45ed2a-aecd-4d86-9c24-ab91e46abe38\" (UID: \"4b45ed2a-aecd-4d86-9c24-ab91e46abe38\") " Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.623821 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-utilities" (OuterVolumeSpecName: "utilities") pod "4b45ed2a-aecd-4d86-9c24-ab91e46abe38" (UID: "4b45ed2a-aecd-4d86-9c24-ab91e46abe38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.630352 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-kube-api-access-j42j6" (OuterVolumeSpecName: "kube-api-access-j42j6") pod "4b45ed2a-aecd-4d86-9c24-ab91e46abe38" (UID: "4b45ed2a-aecd-4d86-9c24-ab91e46abe38"). InnerVolumeSpecName "kube-api-access-j42j6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.664320 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b45ed2a-aecd-4d86-9c24-ab91e46abe38" (UID: "4b45ed2a-aecd-4d86-9c24-ab91e46abe38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.726020 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.726079 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j42j6\" (UniqueName: \"kubernetes.io/projected/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-kube-api-access-j42j6\") on node \"crc\" DevicePath \"\"" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.726101 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b45ed2a-aecd-4d86-9c24-ab91e46abe38-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.879777 4813 generic.go:334] "Generic (PLEG): container finished" podID="4b45ed2a-aecd-4d86-9c24-ab91e46abe38" containerID="a02ce48cbf454221a8b77600286ef408747376d08de7fac7b2633921eed819f1" exitCode=0 Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.879863 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngtmn" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.879887 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngtmn" event={"ID":"4b45ed2a-aecd-4d86-9c24-ab91e46abe38","Type":"ContainerDied","Data":"a02ce48cbf454221a8b77600286ef408747376d08de7fac7b2633921eed819f1"} Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.880526 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngtmn" event={"ID":"4b45ed2a-aecd-4d86-9c24-ab91e46abe38","Type":"ContainerDied","Data":"d333259ab367cf0bc715b318532ba72b7fd9c02116814d270a9107ceb6480fe8"} Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.880549 4813 scope.go:117] "RemoveContainer" containerID="a02ce48cbf454221a8b77600286ef408747376d08de7fac7b2633921eed819f1" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.918597 4813 scope.go:117] "RemoveContainer" containerID="eb2d5a841191acc3b490d7c2b5dbbdf2293b4d6f607b10b6fa995a59c9982cbd" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.933558 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngtmn"] Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.950922 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngtmn"] Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.952500 4813 scope.go:117] "RemoveContainer" containerID="f11e8a92e34c46988b2d88e93f26b37dbb1193f49894236d4709955261ce4bbd" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.997629 4813 scope.go:117] "RemoveContainer" containerID="a02ce48cbf454221a8b77600286ef408747376d08de7fac7b2633921eed819f1" Feb 19 20:35:11 crc kubenswrapper[4813]: E0219 20:35:11.998215 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a02ce48cbf454221a8b77600286ef408747376d08de7fac7b2633921eed819f1\": container with ID starting with a02ce48cbf454221a8b77600286ef408747376d08de7fac7b2633921eed819f1 not found: ID does not exist" containerID="a02ce48cbf454221a8b77600286ef408747376d08de7fac7b2633921eed819f1" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.998247 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a02ce48cbf454221a8b77600286ef408747376d08de7fac7b2633921eed819f1"} err="failed to get container status \"a02ce48cbf454221a8b77600286ef408747376d08de7fac7b2633921eed819f1\": rpc error: code = NotFound desc = could not find container \"a02ce48cbf454221a8b77600286ef408747376d08de7fac7b2633921eed819f1\": container with ID starting with a02ce48cbf454221a8b77600286ef408747376d08de7fac7b2633921eed819f1 not found: ID does not exist" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.998269 4813 scope.go:117] "RemoveContainer" containerID="eb2d5a841191acc3b490d7c2b5dbbdf2293b4d6f607b10b6fa995a59c9982cbd" Feb 19 20:35:11 crc kubenswrapper[4813]: E0219 20:35:11.998596 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb2d5a841191acc3b490d7c2b5dbbdf2293b4d6f607b10b6fa995a59c9982cbd\": container with ID starting with eb2d5a841191acc3b490d7c2b5dbbdf2293b4d6f607b10b6fa995a59c9982cbd not found: ID does not exist" containerID="eb2d5a841191acc3b490d7c2b5dbbdf2293b4d6f607b10b6fa995a59c9982cbd" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.998626 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2d5a841191acc3b490d7c2b5dbbdf2293b4d6f607b10b6fa995a59c9982cbd"} err="failed to get container status \"eb2d5a841191acc3b490d7c2b5dbbdf2293b4d6f607b10b6fa995a59c9982cbd\": rpc error: code = NotFound desc = could not find container \"eb2d5a841191acc3b490d7c2b5dbbdf2293b4d6f607b10b6fa995a59c9982cbd\": container with ID 
starting with eb2d5a841191acc3b490d7c2b5dbbdf2293b4d6f607b10b6fa995a59c9982cbd not found: ID does not exist" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.998645 4813 scope.go:117] "RemoveContainer" containerID="f11e8a92e34c46988b2d88e93f26b37dbb1193f49894236d4709955261ce4bbd" Feb 19 20:35:11 crc kubenswrapper[4813]: E0219 20:35:11.998973 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f11e8a92e34c46988b2d88e93f26b37dbb1193f49894236d4709955261ce4bbd\": container with ID starting with f11e8a92e34c46988b2d88e93f26b37dbb1193f49894236d4709955261ce4bbd not found: ID does not exist" containerID="f11e8a92e34c46988b2d88e93f26b37dbb1193f49894236d4709955261ce4bbd" Feb 19 20:35:11 crc kubenswrapper[4813]: I0219 20:35:11.999007 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f11e8a92e34c46988b2d88e93f26b37dbb1193f49894236d4709955261ce4bbd"} err="failed to get container status \"f11e8a92e34c46988b2d88e93f26b37dbb1193f49894236d4709955261ce4bbd\": rpc error: code = NotFound desc = could not find container \"f11e8a92e34c46988b2d88e93f26b37dbb1193f49894236d4709955261ce4bbd\": container with ID starting with f11e8a92e34c46988b2d88e93f26b37dbb1193f49894236d4709955261ce4bbd not found: ID does not exist" Feb 19 20:35:13 crc kubenswrapper[4813]: I0219 20:35:13.491709 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b45ed2a-aecd-4d86-9c24-ab91e46abe38" path="/var/lib/kubelet/pods/4b45ed2a-aecd-4d86-9c24-ab91e46abe38/volumes" Feb 19 20:35:21 crc kubenswrapper[4813]: I0219 20:35:21.974335 4813 generic.go:334] "Generic (PLEG): container finished" podID="9c455b52-d215-4da6-aec6-edf7ea78770b" containerID="0642f95bfe5f5de8328b28f484387cfacc82292895a6b63d20fadb4cae967ba9" exitCode=0 Feb 19 20:35:21 crc kubenswrapper[4813]: I0219 20:35:21.974389 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-openstack-openstack-cell1-dcsw4" event={"ID":"9c455b52-d215-4da6-aec6-edf7ea78770b","Type":"ContainerDied","Data":"0642f95bfe5f5de8328b28f484387cfacc82292895a6b63d20fadb4cae967ba9"} Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.653033 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.800356 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-inventory\") pod \"9c455b52-d215-4da6-aec6-edf7ea78770b\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.800472 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ssh-key-openstack-cell1\") pod \"9c455b52-d215-4da6-aec6-edf7ea78770b\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.800525 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ceph\") pod \"9c455b52-d215-4da6-aec6-edf7ea78770b\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.800733 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ovn-combined-ca-bundle\") pod \"9c455b52-d215-4da6-aec6-edf7ea78770b\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.800788 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/9c455b52-d215-4da6-aec6-edf7ea78770b-ovncontroller-config-0\") pod \"9c455b52-d215-4da6-aec6-edf7ea78770b\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.800834 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkc6f\" (UniqueName: \"kubernetes.io/projected/9c455b52-d215-4da6-aec6-edf7ea78770b-kube-api-access-pkc6f\") pod \"9c455b52-d215-4da6-aec6-edf7ea78770b\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.809272 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9c455b52-d215-4da6-aec6-edf7ea78770b" (UID: "9c455b52-d215-4da6-aec6-edf7ea78770b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.809471 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c455b52-d215-4da6-aec6-edf7ea78770b-kube-api-access-pkc6f" (OuterVolumeSpecName: "kube-api-access-pkc6f") pod "9c455b52-d215-4da6-aec6-edf7ea78770b" (UID: "9c455b52-d215-4da6-aec6-edf7ea78770b"). InnerVolumeSpecName "kube-api-access-pkc6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.810433 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ceph" (OuterVolumeSpecName: "ceph") pod "9c455b52-d215-4da6-aec6-edf7ea78770b" (UID: "9c455b52-d215-4da6-aec6-edf7ea78770b"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:35:23 crc kubenswrapper[4813]: E0219 20:35:23.860448 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ssh-key-openstack-cell1 podName:9c455b52-d215-4da6-aec6-edf7ea78770b nodeName:}" failed. No retries permitted until 2026-02-19 20:35:24.36041178 +0000 UTC m=+7543.585852331 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-cell1" (UniqueName: "kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ssh-key-openstack-cell1") pod "9c455b52-d215-4da6-aec6-edf7ea78770b" (UID: "9c455b52-d215-4da6-aec6-edf7ea78770b") : error deleting /var/lib/kubelet/pods/9c455b52-d215-4da6-aec6-edf7ea78770b/volume-subpaths: remove /var/lib/kubelet/pods/9c455b52-d215-4da6-aec6-edf7ea78770b/volume-subpaths: no such file or directory Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.860877 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c455b52-d215-4da6-aec6-edf7ea78770b-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "9c455b52-d215-4da6-aec6-edf7ea78770b" (UID: "9c455b52-d215-4da6-aec6-edf7ea78770b"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.863781 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-inventory" (OuterVolumeSpecName: "inventory") pod "9c455b52-d215-4da6-aec6-edf7ea78770b" (UID: "9c455b52-d215-4da6-aec6-edf7ea78770b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.904028 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.904058 4813 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9c455b52-d215-4da6-aec6-edf7ea78770b-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.904072 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkc6f\" (UniqueName: \"kubernetes.io/projected/9c455b52-d215-4da6-aec6-edf7ea78770b-kube-api-access-pkc6f\") on node \"crc\" DevicePath \"\"" Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.904086 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:35:23 crc kubenswrapper[4813]: I0219 20:35:23.904097 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.031535 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-dcsw4" event={"ID":"9c455b52-d215-4da6-aec6-edf7ea78770b","Type":"ContainerDied","Data":"b66c59248a65c7fdc4de177a0fc2d4f654e7ec9bcf756ccdf7820c297d0aa585"} Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.031595 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b66c59248a65c7fdc4de177a0fc2d4f654e7ec9bcf756ccdf7820c297d0aa585" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.031682 
4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-dcsw4" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.124247 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-gwc79"] Feb 19 20:35:24 crc kubenswrapper[4813]: E0219 20:35:24.125195 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c455b52-d215-4da6-aec6-edf7ea78770b" containerName="ovn-openstack-openstack-cell1" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.125227 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c455b52-d215-4da6-aec6-edf7ea78770b" containerName="ovn-openstack-openstack-cell1" Feb 19 20:35:24 crc kubenswrapper[4813]: E0219 20:35:24.125289 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b45ed2a-aecd-4d86-9c24-ab91e46abe38" containerName="extract-utilities" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.125304 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b45ed2a-aecd-4d86-9c24-ab91e46abe38" containerName="extract-utilities" Feb 19 20:35:24 crc kubenswrapper[4813]: E0219 20:35:24.125332 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b45ed2a-aecd-4d86-9c24-ab91e46abe38" containerName="extract-content" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.125347 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b45ed2a-aecd-4d86-9c24-ab91e46abe38" containerName="extract-content" Feb 19 20:35:24 crc kubenswrapper[4813]: E0219 20:35:24.125379 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b45ed2a-aecd-4d86-9c24-ab91e46abe38" containerName="registry-server" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.125393 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b45ed2a-aecd-4d86-9c24-ab91e46abe38" containerName="registry-server" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.125745 
4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c455b52-d215-4da6-aec6-edf7ea78770b" containerName="ovn-openstack-openstack-cell1" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.125784 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b45ed2a-aecd-4d86-9c24-ab91e46abe38" containerName="registry-server" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.127118 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.130076 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.131157 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.156090 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-gwc79"] Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.218145 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.218196 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc 
kubenswrapper[4813]: I0219 20:35:24.218419 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.218473 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2svfz\" (UniqueName: \"kubernetes.io/projected/29b06165-79b3-41c2-afbb-c165336c5564-kube-api-access-2svfz\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.218507 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.218574 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.218651 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.320252 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.320305 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.320414 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.320437 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2svfz\" (UniqueName: \"kubernetes.io/projected/29b06165-79b3-41c2-afbb-c165336c5564-kube-api-access-2svfz\") pod 
\"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.320462 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.320498 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.320532 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.334582 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc 
kubenswrapper[4813]: I0219 20:35:24.334903 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.335017 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.335346 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.335939 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.336372 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.348327 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2svfz\" (UniqueName: \"kubernetes.io/projected/29b06165-79b3-41c2-afbb-c165336c5564-kube-api-access-2svfz\") pod \"neutron-metadata-openstack-openstack-cell1-gwc79\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.422609 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ssh-key-openstack-cell1\") pod \"9c455b52-d215-4da6-aec6-edf7ea78770b\" (UID: \"9c455b52-d215-4da6-aec6-edf7ea78770b\") " Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.426719 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "9c455b52-d215-4da6-aec6-edf7ea78770b" (UID: "9c455b52-d215-4da6-aec6-edf7ea78770b"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.474308 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:35:24 crc kubenswrapper[4813]: I0219 20:35:24.527665 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9c455b52-d215-4da6-aec6-edf7ea78770b-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:35:25 crc kubenswrapper[4813]: I0219 20:35:25.068858 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-gwc79"] Feb 19 20:35:25 crc kubenswrapper[4813]: I0219 20:35:25.669814 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vkzsb"] Feb 19 20:35:25 crc kubenswrapper[4813]: I0219 20:35:25.675104 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:25 crc kubenswrapper[4813]: I0219 20:35:25.699365 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vkzsb"] Feb 19 20:35:25 crc kubenswrapper[4813]: I0219 20:35:25.760150 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e20360d1-df18-4e47-a30f-be87c869c48b-utilities\") pod \"redhat-operators-vkzsb\" (UID: \"e20360d1-df18-4e47-a30f-be87c869c48b\") " pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:25 crc kubenswrapper[4813]: I0219 20:35:25.760442 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e20360d1-df18-4e47-a30f-be87c869c48b-catalog-content\") pod \"redhat-operators-vkzsb\" (UID: \"e20360d1-df18-4e47-a30f-be87c869c48b\") " pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:25 crc kubenswrapper[4813]: I0219 20:35:25.760636 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xghc\" (UniqueName: \"kubernetes.io/projected/e20360d1-df18-4e47-a30f-be87c869c48b-kube-api-access-4xghc\") pod \"redhat-operators-vkzsb\" (UID: \"e20360d1-df18-4e47-a30f-be87c869c48b\") " pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:25 crc kubenswrapper[4813]: I0219 20:35:25.862140 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e20360d1-df18-4e47-a30f-be87c869c48b-utilities\") pod \"redhat-operators-vkzsb\" (UID: \"e20360d1-df18-4e47-a30f-be87c869c48b\") " pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:25 crc kubenswrapper[4813]: I0219 20:35:25.862294 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e20360d1-df18-4e47-a30f-be87c869c48b-catalog-content\") pod \"redhat-operators-vkzsb\" (UID: \"e20360d1-df18-4e47-a30f-be87c869c48b\") " pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:25 crc kubenswrapper[4813]: I0219 20:35:25.862392 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xghc\" (UniqueName: \"kubernetes.io/projected/e20360d1-df18-4e47-a30f-be87c869c48b-kube-api-access-4xghc\") pod \"redhat-operators-vkzsb\" (UID: \"e20360d1-df18-4e47-a30f-be87c869c48b\") " pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:25 crc kubenswrapper[4813]: I0219 20:35:25.862560 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e20360d1-df18-4e47-a30f-be87c869c48b-utilities\") pod \"redhat-operators-vkzsb\" (UID: \"e20360d1-df18-4e47-a30f-be87c869c48b\") " pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:25 crc kubenswrapper[4813]: I0219 20:35:25.862904 4813 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e20360d1-df18-4e47-a30f-be87c869c48b-catalog-content\") pod \"redhat-operators-vkzsb\" (UID: \"e20360d1-df18-4e47-a30f-be87c869c48b\") " pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:25 crc kubenswrapper[4813]: I0219 20:35:25.884626 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xghc\" (UniqueName: \"kubernetes.io/projected/e20360d1-df18-4e47-a30f-be87c869c48b-kube-api-access-4xghc\") pod \"redhat-operators-vkzsb\" (UID: \"e20360d1-df18-4e47-a30f-be87c869c48b\") " pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:26 crc kubenswrapper[4813]: I0219 20:35:26.015241 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:26 crc kubenswrapper[4813]: I0219 20:35:26.056853 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" event={"ID":"29b06165-79b3-41c2-afbb-c165336c5564","Type":"ContainerStarted","Data":"9d281eb064a1e28decdcd8b5a0cbcd4378e6abc9ee374e960dd48278a09da472"} Feb 19 20:35:26 crc kubenswrapper[4813]: I0219 20:35:26.056892 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" event={"ID":"29b06165-79b3-41c2-afbb-c165336c5564","Type":"ContainerStarted","Data":"0102aeb4669f186fcf2d9e581ffbed5eebb44c1435d7d74318ed3dc76eec920f"} Feb 19 20:35:26 crc kubenswrapper[4813]: I0219 20:35:26.085030 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" podStartSLOduration=1.444944797 podStartE2EDuration="2.085005618s" podCreationTimestamp="2026-02-19 20:35:24 +0000 UTC" firstStartedPulling="2026-02-19 20:35:25.062414368 +0000 UTC m=+7544.287854909" lastFinishedPulling="2026-02-19 20:35:25.702475179 +0000 UTC m=+7544.927915730" 
observedRunningTime="2026-02-19 20:35:26.073645367 +0000 UTC m=+7545.299085908" watchObservedRunningTime="2026-02-19 20:35:26.085005618 +0000 UTC m=+7545.310446159" Feb 19 20:35:26 crc kubenswrapper[4813]: I0219 20:35:26.501307 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vkzsb"] Feb 19 20:35:26 crc kubenswrapper[4813]: W0219 20:35:26.501683 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode20360d1_df18_4e47_a30f_be87c869c48b.slice/crio-b1c4523edf8eadaac6b3ffc377383fa83eba8f9ff743b2ea702477c5f1947e5f WatchSource:0}: Error finding container b1c4523edf8eadaac6b3ffc377383fa83eba8f9ff743b2ea702477c5f1947e5f: Status 404 returned error can't find the container with id b1c4523edf8eadaac6b3ffc377383fa83eba8f9ff743b2ea702477c5f1947e5f Feb 19 20:35:27 crc kubenswrapper[4813]: I0219 20:35:27.066251 4813 generic.go:334] "Generic (PLEG): container finished" podID="e20360d1-df18-4e47-a30f-be87c869c48b" containerID="0e89a3dcfea2c76521c7427b616a6f06aebd9e745c6552d680b70a4043e32ca8" exitCode=0 Feb 19 20:35:27 crc kubenswrapper[4813]: I0219 20:35:27.066330 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkzsb" event={"ID":"e20360d1-df18-4e47-a30f-be87c869c48b","Type":"ContainerDied","Data":"0e89a3dcfea2c76521c7427b616a6f06aebd9e745c6552d680b70a4043e32ca8"} Feb 19 20:35:27 crc kubenswrapper[4813]: I0219 20:35:27.066595 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkzsb" event={"ID":"e20360d1-df18-4e47-a30f-be87c869c48b","Type":"ContainerStarted","Data":"b1c4523edf8eadaac6b3ffc377383fa83eba8f9ff743b2ea702477c5f1947e5f"} Feb 19 20:35:29 crc kubenswrapper[4813]: I0219 20:35:29.091867 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkzsb" 
event={"ID":"e20360d1-df18-4e47-a30f-be87c869c48b","Type":"ContainerStarted","Data":"982c15800d9277933d94c3324be3b0b8204e044ceb3cc97b16165e3bbb16f23d"} Feb 19 20:35:30 crc kubenswrapper[4813]: I0219 20:35:30.330230 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:35:30 crc kubenswrapper[4813]: I0219 20:35:30.330570 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:35:36 crc kubenswrapper[4813]: I0219 20:35:36.204674 4813 generic.go:334] "Generic (PLEG): container finished" podID="e20360d1-df18-4e47-a30f-be87c869c48b" containerID="982c15800d9277933d94c3324be3b0b8204e044ceb3cc97b16165e3bbb16f23d" exitCode=0 Feb 19 20:35:36 crc kubenswrapper[4813]: I0219 20:35:36.204834 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkzsb" event={"ID":"e20360d1-df18-4e47-a30f-be87c869c48b","Type":"ContainerDied","Data":"982c15800d9277933d94c3324be3b0b8204e044ceb3cc97b16165e3bbb16f23d"} Feb 19 20:35:38 crc kubenswrapper[4813]: I0219 20:35:38.225531 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkzsb" event={"ID":"e20360d1-df18-4e47-a30f-be87c869c48b","Type":"ContainerStarted","Data":"6d4834671906c53b0e7dc5cf00ac21ce91fac6ac452701f05c18c1f41a6a75b9"} Feb 19 20:35:38 crc kubenswrapper[4813]: I0219 20:35:38.252536 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vkzsb" 
podStartSLOduration=3.145674617 podStartE2EDuration="13.252509408s" podCreationTimestamp="2026-02-19 20:35:25 +0000 UTC" firstStartedPulling="2026-02-19 20:35:27.069078197 +0000 UTC m=+7546.294518738" lastFinishedPulling="2026-02-19 20:35:37.175912998 +0000 UTC m=+7556.401353529" observedRunningTime="2026-02-19 20:35:38.244657915 +0000 UTC m=+7557.470098456" watchObservedRunningTime="2026-02-19 20:35:38.252509408 +0000 UTC m=+7557.477949989" Feb 19 20:35:46 crc kubenswrapper[4813]: I0219 20:35:46.015439 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:46 crc kubenswrapper[4813]: I0219 20:35:46.016028 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:46 crc kubenswrapper[4813]: I0219 20:35:46.072636 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:46 crc kubenswrapper[4813]: I0219 20:35:46.387076 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:46 crc kubenswrapper[4813]: I0219 20:35:46.446286 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vkzsb"] Feb 19 20:35:48 crc kubenswrapper[4813]: I0219 20:35:48.357453 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vkzsb" podUID="e20360d1-df18-4e47-a30f-be87c869c48b" containerName="registry-server" containerID="cri-o://6d4834671906c53b0e7dc5cf00ac21ce91fac6ac452701f05c18c1f41a6a75b9" gracePeriod=2 Feb 19 20:35:48 crc kubenswrapper[4813]: I0219 20:35:48.899419 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.017726 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e20360d1-df18-4e47-a30f-be87c869c48b-catalog-content\") pod \"e20360d1-df18-4e47-a30f-be87c869c48b\" (UID: \"e20360d1-df18-4e47-a30f-be87c869c48b\") " Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.017791 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e20360d1-df18-4e47-a30f-be87c869c48b-utilities\") pod \"e20360d1-df18-4e47-a30f-be87c869c48b\" (UID: \"e20360d1-df18-4e47-a30f-be87c869c48b\") " Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.018036 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xghc\" (UniqueName: \"kubernetes.io/projected/e20360d1-df18-4e47-a30f-be87c869c48b-kube-api-access-4xghc\") pod \"e20360d1-df18-4e47-a30f-be87c869c48b\" (UID: \"e20360d1-df18-4e47-a30f-be87c869c48b\") " Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.019042 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e20360d1-df18-4e47-a30f-be87c869c48b-utilities" (OuterVolumeSpecName: "utilities") pod "e20360d1-df18-4e47-a30f-be87c869c48b" (UID: "e20360d1-df18-4e47-a30f-be87c869c48b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.030480 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e20360d1-df18-4e47-a30f-be87c869c48b-kube-api-access-4xghc" (OuterVolumeSpecName: "kube-api-access-4xghc") pod "e20360d1-df18-4e47-a30f-be87c869c48b" (UID: "e20360d1-df18-4e47-a30f-be87c869c48b"). InnerVolumeSpecName "kube-api-access-4xghc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.120842 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e20360d1-df18-4e47-a30f-be87c869c48b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.120884 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xghc\" (UniqueName: \"kubernetes.io/projected/e20360d1-df18-4e47-a30f-be87c869c48b-kube-api-access-4xghc\") on node \"crc\" DevicePath \"\"" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.155200 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e20360d1-df18-4e47-a30f-be87c869c48b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e20360d1-df18-4e47-a30f-be87c869c48b" (UID: "e20360d1-df18-4e47-a30f-be87c869c48b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.223258 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e20360d1-df18-4e47-a30f-be87c869c48b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.366841 4813 generic.go:334] "Generic (PLEG): container finished" podID="e20360d1-df18-4e47-a30f-be87c869c48b" containerID="6d4834671906c53b0e7dc5cf00ac21ce91fac6ac452701f05c18c1f41a6a75b9" exitCode=0 Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.366879 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vkzsb" event={"ID":"e20360d1-df18-4e47-a30f-be87c869c48b","Type":"ContainerDied","Data":"6d4834671906c53b0e7dc5cf00ac21ce91fac6ac452701f05c18c1f41a6a75b9"} Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.366902 4813 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-vkzsb" event={"ID":"e20360d1-df18-4e47-a30f-be87c869c48b","Type":"ContainerDied","Data":"b1c4523edf8eadaac6b3ffc377383fa83eba8f9ff743b2ea702477c5f1947e5f"} Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.366920 4813 scope.go:117] "RemoveContainer" containerID="6d4834671906c53b0e7dc5cf00ac21ce91fac6ac452701f05c18c1f41a6a75b9" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.367002 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vkzsb" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.422527 4813 scope.go:117] "RemoveContainer" containerID="982c15800d9277933d94c3324be3b0b8204e044ceb3cc97b16165e3bbb16f23d" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.442943 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vkzsb"] Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.450516 4813 scope.go:117] "RemoveContainer" containerID="0e89a3dcfea2c76521c7427b616a6f06aebd9e745c6552d680b70a4043e32ca8" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.457482 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vkzsb"] Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.489154 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e20360d1-df18-4e47-a30f-be87c869c48b" path="/var/lib/kubelet/pods/e20360d1-df18-4e47-a30f-be87c869c48b/volumes" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.506682 4813 scope.go:117] "RemoveContainer" containerID="6d4834671906c53b0e7dc5cf00ac21ce91fac6ac452701f05c18c1f41a6a75b9" Feb 19 20:35:49 crc kubenswrapper[4813]: E0219 20:35:49.507094 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d4834671906c53b0e7dc5cf00ac21ce91fac6ac452701f05c18c1f41a6a75b9\": container with ID starting with 
6d4834671906c53b0e7dc5cf00ac21ce91fac6ac452701f05c18c1f41a6a75b9 not found: ID does not exist" containerID="6d4834671906c53b0e7dc5cf00ac21ce91fac6ac452701f05c18c1f41a6a75b9" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.507137 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d4834671906c53b0e7dc5cf00ac21ce91fac6ac452701f05c18c1f41a6a75b9"} err="failed to get container status \"6d4834671906c53b0e7dc5cf00ac21ce91fac6ac452701f05c18c1f41a6a75b9\": rpc error: code = NotFound desc = could not find container \"6d4834671906c53b0e7dc5cf00ac21ce91fac6ac452701f05c18c1f41a6a75b9\": container with ID starting with 6d4834671906c53b0e7dc5cf00ac21ce91fac6ac452701f05c18c1f41a6a75b9 not found: ID does not exist" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.507167 4813 scope.go:117] "RemoveContainer" containerID="982c15800d9277933d94c3324be3b0b8204e044ceb3cc97b16165e3bbb16f23d" Feb 19 20:35:49 crc kubenswrapper[4813]: E0219 20:35:49.507485 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"982c15800d9277933d94c3324be3b0b8204e044ceb3cc97b16165e3bbb16f23d\": container with ID starting with 982c15800d9277933d94c3324be3b0b8204e044ceb3cc97b16165e3bbb16f23d not found: ID does not exist" containerID="982c15800d9277933d94c3324be3b0b8204e044ceb3cc97b16165e3bbb16f23d" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.507540 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982c15800d9277933d94c3324be3b0b8204e044ceb3cc97b16165e3bbb16f23d"} err="failed to get container status \"982c15800d9277933d94c3324be3b0b8204e044ceb3cc97b16165e3bbb16f23d\": rpc error: code = NotFound desc = could not find container \"982c15800d9277933d94c3324be3b0b8204e044ceb3cc97b16165e3bbb16f23d\": container with ID starting with 982c15800d9277933d94c3324be3b0b8204e044ceb3cc97b16165e3bbb16f23d not found: ID does not 
exist" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.507577 4813 scope.go:117] "RemoveContainer" containerID="0e89a3dcfea2c76521c7427b616a6f06aebd9e745c6552d680b70a4043e32ca8" Feb 19 20:35:49 crc kubenswrapper[4813]: E0219 20:35:49.508180 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e89a3dcfea2c76521c7427b616a6f06aebd9e745c6552d680b70a4043e32ca8\": container with ID starting with 0e89a3dcfea2c76521c7427b616a6f06aebd9e745c6552d680b70a4043e32ca8 not found: ID does not exist" containerID="0e89a3dcfea2c76521c7427b616a6f06aebd9e745c6552d680b70a4043e32ca8" Feb 19 20:35:49 crc kubenswrapper[4813]: I0219 20:35:49.508218 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e89a3dcfea2c76521c7427b616a6f06aebd9e745c6552d680b70a4043e32ca8"} err="failed to get container status \"0e89a3dcfea2c76521c7427b616a6f06aebd9e745c6552d680b70a4043e32ca8\": rpc error: code = NotFound desc = could not find container \"0e89a3dcfea2c76521c7427b616a6f06aebd9e745c6552d680b70a4043e32ca8\": container with ID starting with 0e89a3dcfea2c76521c7427b616a6f06aebd9e745c6552d680b70a4043e32ca8 not found: ID does not exist" Feb 19 20:36:00 crc kubenswrapper[4813]: I0219 20:36:00.329807 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:36:00 crc kubenswrapper[4813]: I0219 20:36:00.331182 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 
20:36:21 crc kubenswrapper[4813]: I0219 20:36:21.734527 4813 generic.go:334] "Generic (PLEG): container finished" podID="29b06165-79b3-41c2-afbb-c165336c5564" containerID="9d281eb064a1e28decdcd8b5a0cbcd4378e6abc9ee374e960dd48278a09da472" exitCode=0 Feb 19 20:36:21 crc kubenswrapper[4813]: I0219 20:36:21.734726 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" event={"ID":"29b06165-79b3-41c2-afbb-c165336c5564","Type":"ContainerDied","Data":"9d281eb064a1e28decdcd8b5a0cbcd4378e6abc9ee374e960dd48278a09da472"} Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.258254 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.402578 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-inventory\") pod \"29b06165-79b3-41c2-afbb-c165336c5564\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.404110 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-neutron-metadata-combined-ca-bundle\") pod \"29b06165-79b3-41c2-afbb-c165336c5564\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.404451 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-ssh-key-openstack-cell1\") pod \"29b06165-79b3-41c2-afbb-c165336c5564\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.404543 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-neutron-ovn-metadata-agent-neutron-config-0\") pod \"29b06165-79b3-41c2-afbb-c165336c5564\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.404645 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-nova-metadata-neutron-config-0\") pod \"29b06165-79b3-41c2-afbb-c165336c5564\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.404687 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2svfz\" (UniqueName: \"kubernetes.io/projected/29b06165-79b3-41c2-afbb-c165336c5564-kube-api-access-2svfz\") pod \"29b06165-79b3-41c2-afbb-c165336c5564\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.404717 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-ceph\") pod \"29b06165-79b3-41c2-afbb-c165336c5564\" (UID: \"29b06165-79b3-41c2-afbb-c165336c5564\") " Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.409508 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "29b06165-79b3-41c2-afbb-c165336c5564" (UID: "29b06165-79b3-41c2-afbb-c165336c5564"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.410748 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b06165-79b3-41c2-afbb-c165336c5564-kube-api-access-2svfz" (OuterVolumeSpecName: "kube-api-access-2svfz") pod "29b06165-79b3-41c2-afbb-c165336c5564" (UID: "29b06165-79b3-41c2-afbb-c165336c5564"). InnerVolumeSpecName "kube-api-access-2svfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.411184 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-ceph" (OuterVolumeSpecName: "ceph") pod "29b06165-79b3-41c2-afbb-c165336c5564" (UID: "29b06165-79b3-41c2-afbb-c165336c5564"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.436526 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "29b06165-79b3-41c2-afbb-c165336c5564" (UID: "29b06165-79b3-41c2-afbb-c165336c5564"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.438088 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "29b06165-79b3-41c2-afbb-c165336c5564" (UID: "29b06165-79b3-41c2-afbb-c165336c5564"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.439987 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-inventory" (OuterVolumeSpecName: "inventory") pod "29b06165-79b3-41c2-afbb-c165336c5564" (UID: "29b06165-79b3-41c2-afbb-c165336c5564"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.440586 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "29b06165-79b3-41c2-afbb-c165336c5564" (UID: "29b06165-79b3-41c2-afbb-c165336c5564"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.506801 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.506831 4813 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.506841 4813 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.506851 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2svfz\" 
(UniqueName: \"kubernetes.io/projected/29b06165-79b3-41c2-afbb-c165336c5564-kube-api-access-2svfz\") on node \"crc\" DevicePath \"\"" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.506860 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.506869 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.506877 4813 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b06165-79b3-41c2-afbb-c165336c5564-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:36:23 crc kubenswrapper[4813]: E0219 20:36:23.706545 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29b06165_79b3_41c2_afbb_c165336c5564.slice/crio-0102aeb4669f186fcf2d9e581ffbed5eebb44c1435d7d74318ed3dc76eec920f\": RecentStats: unable to find data in memory cache]" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.756499 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" event={"ID":"29b06165-79b3-41c2-afbb-c165336c5564","Type":"ContainerDied","Data":"0102aeb4669f186fcf2d9e581ffbed5eebb44c1435d7d74318ed3dc76eec920f"} Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.756870 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0102aeb4669f186fcf2d9e581ffbed5eebb44c1435d7d74318ed3dc76eec920f" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.756597 4813 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-gwc79" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.856050 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-s69pm"] Feb 19 20:36:23 crc kubenswrapper[4813]: E0219 20:36:23.856477 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e20360d1-df18-4e47-a30f-be87c869c48b" containerName="extract-utilities" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.856490 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e20360d1-df18-4e47-a30f-be87c869c48b" containerName="extract-utilities" Feb 19 20:36:23 crc kubenswrapper[4813]: E0219 20:36:23.856518 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e20360d1-df18-4e47-a30f-be87c869c48b" containerName="extract-content" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.856524 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e20360d1-df18-4e47-a30f-be87c869c48b" containerName="extract-content" Feb 19 20:36:23 crc kubenswrapper[4813]: E0219 20:36:23.856540 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e20360d1-df18-4e47-a30f-be87c869c48b" containerName="registry-server" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.856546 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e20360d1-df18-4e47-a30f-be87c869c48b" containerName="registry-server" Feb 19 20:36:23 crc kubenswrapper[4813]: E0219 20:36:23.856563 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b06165-79b3-41c2-afbb-c165336c5564" containerName="neutron-metadata-openstack-openstack-cell1" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.856571 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b06165-79b3-41c2-afbb-c165336c5564" containerName="neutron-metadata-openstack-openstack-cell1" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.856851 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e20360d1-df18-4e47-a30f-be87c869c48b" containerName="registry-server" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.856870 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b06165-79b3-41c2-afbb-c165336c5564" containerName="neutron-metadata-openstack-openstack-cell1" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.857702 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.860380 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.860499 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.860550 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.860596 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.861296 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:36:23 crc kubenswrapper[4813]: I0219 20:36:23.887856 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-s69pm"] Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.016354 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " 
pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.016937 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.017299 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-ceph\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.017684 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-inventory\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.017735 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4gg2\" (UniqueName: \"kubernetes.io/projected/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-kube-api-access-v4gg2\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.017857 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.120647 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.120741 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-ceph\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.120854 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-inventory\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.120887 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4gg2\" (UniqueName: \"kubernetes.io/projected/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-kube-api-access-v4gg2\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.120944 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.121075 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.126306 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-inventory\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.127111 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.127171 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 
20:36:24.129541 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.129742 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-ceph\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.157880 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4gg2\" (UniqueName: \"kubernetes.io/projected/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-kube-api-access-v4gg2\") pod \"libvirt-openstack-openstack-cell1-s69pm\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.175374 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.744691 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-s69pm"] Feb 19 20:36:24 crc kubenswrapper[4813]: I0219 20:36:24.768981 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-s69pm" event={"ID":"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df","Type":"ContainerStarted","Data":"803f8d2f5463d0ac9d5ff303fcf56fe706368799b454787537a4d7f19d4d7c58"} Feb 19 20:36:26 crc kubenswrapper[4813]: I0219 20:36:26.805494 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-s69pm" event={"ID":"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df","Type":"ContainerStarted","Data":"91e09daa5ca1c225b5f10883f2c8064b0e06a11f53b53e9e512d56186dadeba9"} Feb 19 20:36:26 crc kubenswrapper[4813]: I0219 20:36:26.828164 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-s69pm" podStartSLOduration=2.6347473 podStartE2EDuration="3.828146122s" podCreationTimestamp="2026-02-19 20:36:23 +0000 UTC" firstStartedPulling="2026-02-19 20:36:24.751925033 +0000 UTC m=+7603.977365574" lastFinishedPulling="2026-02-19 20:36:25.945323815 +0000 UTC m=+7605.170764396" observedRunningTime="2026-02-19 20:36:26.826786071 +0000 UTC m=+7606.052226612" watchObservedRunningTime="2026-02-19 20:36:26.828146122 +0000 UTC m=+7606.053586673" Feb 19 20:36:30 crc kubenswrapper[4813]: I0219 20:36:30.330388 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:36:30 crc kubenswrapper[4813]: I0219 20:36:30.332203 4813 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:36:30 crc kubenswrapper[4813]: I0219 20:36:30.332379 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 20:36:30 crc kubenswrapper[4813]: I0219 20:36:30.333384 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"93a1d84ffde757766624d5696e6c0d7a5731d522961bfffd46a28316fabd873d"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:36:30 crc kubenswrapper[4813]: I0219 20:36:30.333546 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://93a1d84ffde757766624d5696e6c0d7a5731d522961bfffd46a28316fabd873d" gracePeriod=600 Feb 19 20:36:30 crc kubenswrapper[4813]: I0219 20:36:30.849369 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="93a1d84ffde757766624d5696e6c0d7a5731d522961bfffd46a28316fabd873d" exitCode=0 Feb 19 20:36:30 crc kubenswrapper[4813]: I0219 20:36:30.849457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"93a1d84ffde757766624d5696e6c0d7a5731d522961bfffd46a28316fabd873d"} Feb 19 20:36:30 crc kubenswrapper[4813]: I0219 20:36:30.849813 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3"} Feb 19 20:36:30 crc kubenswrapper[4813]: I0219 20:36:30.849850 4813 scope.go:117] "RemoveContainer" containerID="ae09c86ff9f8bc21a9cf3026fff1db9c93da58ba8343078b39d31938225ee5ee" Feb 19 20:38:30 crc kubenswrapper[4813]: I0219 20:38:30.329910 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:38:30 crc kubenswrapper[4813]: I0219 20:38:30.330657 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:39:00 crc kubenswrapper[4813]: I0219 20:39:00.330153 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:39:00 crc kubenswrapper[4813]: I0219 20:39:00.331238 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:39:30 crc kubenswrapper[4813]: I0219 
20:39:30.330353 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:39:30 crc kubenswrapper[4813]: I0219 20:39:30.330905 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:39:30 crc kubenswrapper[4813]: I0219 20:39:30.330977 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 20:39:30 crc kubenswrapper[4813]: I0219 20:39:30.331783 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:39:30 crc kubenswrapper[4813]: I0219 20:39:30.331848 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" gracePeriod=600 Feb 19 20:39:30 crc kubenswrapper[4813]: E0219 20:39:30.452521 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:39:30 crc kubenswrapper[4813]: I0219 20:39:30.845162 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" exitCode=0 Feb 19 20:39:30 crc kubenswrapper[4813]: I0219 20:39:30.845216 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3"} Feb 19 20:39:30 crc kubenswrapper[4813]: I0219 20:39:30.845263 4813 scope.go:117] "RemoveContainer" containerID="93a1d84ffde757766624d5696e6c0d7a5731d522961bfffd46a28316fabd873d" Feb 19 20:39:30 crc kubenswrapper[4813]: I0219 20:39:30.847256 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:39:30 crc kubenswrapper[4813]: E0219 20:39:30.847680 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:39:44 crc kubenswrapper[4813]: I0219 20:39:44.471104 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:39:44 crc kubenswrapper[4813]: E0219 20:39:44.471835 4813 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:39:59 crc kubenswrapper[4813]: I0219 20:39:59.471980 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:39:59 crc kubenswrapper[4813]: E0219 20:39:59.472884 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:40:14 crc kubenswrapper[4813]: I0219 20:40:14.471313 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:40:14 crc kubenswrapper[4813]: E0219 20:40:14.472325 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:40:26 crc kubenswrapper[4813]: I0219 20:40:26.472064 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:40:26 crc kubenswrapper[4813]: E0219 20:40:26.472887 4813 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:40:37 crc kubenswrapper[4813]: I0219 20:40:37.472755 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:40:37 crc kubenswrapper[4813]: E0219 20:40:37.473786 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:40:49 crc kubenswrapper[4813]: I0219 20:40:49.670620 4813 generic.go:334] "Generic (PLEG): container finished" podID="65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df" containerID="91e09daa5ca1c225b5f10883f2c8064b0e06a11f53b53e9e512d56186dadeba9" exitCode=0 Feb 19 20:40:49 crc kubenswrapper[4813]: I0219 20:40:49.670715 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-s69pm" event={"ID":"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df","Type":"ContainerDied","Data":"91e09daa5ca1c225b5f10883f2c8064b0e06a11f53b53e9e512d56186dadeba9"} Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.136225 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.241626 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-libvirt-combined-ca-bundle\") pod \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.242054 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-ceph\") pod \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.242114 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-inventory\") pod \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.242166 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-ssh-key-openstack-cell1\") pod \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.242229 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4gg2\" (UniqueName: \"kubernetes.io/projected/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-kube-api-access-v4gg2\") pod \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.242324 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-libvirt-secret-0\") pod \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\" (UID: \"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df\") " Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.247494 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-ceph" (OuterVolumeSpecName: "ceph") pod "65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df" (UID: "65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.247613 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df" (UID: "65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.247761 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-kube-api-access-v4gg2" (OuterVolumeSpecName: "kube-api-access-v4gg2") pod "65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df" (UID: "65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df"). InnerVolumeSpecName "kube-api-access-v4gg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.272926 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df" (UID: "65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.276541 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-inventory" (OuterVolumeSpecName: "inventory") pod "65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df" (UID: "65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.281606 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df" (UID: "65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.345135 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.345176 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.345191 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.345204 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4gg2\" (UniqueName: \"kubernetes.io/projected/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-kube-api-access-v4gg2\") on node \"crc\" DevicePath \"\"" Feb 19 20:40:51 
crc kubenswrapper[4813]: I0219 20:40:51.345219 4813 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.345231 4813 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.478649 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:40:51 crc kubenswrapper[4813]: E0219 20:40:51.479072 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.700664 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-s69pm" event={"ID":"65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df","Type":"ContainerDied","Data":"803f8d2f5463d0ac9d5ff303fcf56fe706368799b454787537a4d7f19d4d7c58"} Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.700907 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="803f8d2f5463d0ac9d5ff303fcf56fe706368799b454787537a4d7f19d4d7c58" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.700712 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-s69pm" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.794291 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-qp4gn"] Feb 19 20:40:51 crc kubenswrapper[4813]: E0219 20:40:51.794810 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df" containerName="libvirt-openstack-openstack-cell1" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.794834 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df" containerName="libvirt-openstack-openstack-cell1" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.795170 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df" containerName="libvirt-openstack-openstack-cell1" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.796075 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.799332 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.799701 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.801697 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.801778 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.801838 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.801841 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.801898 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.811301 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-qp4gn"] Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.958375 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.958428 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-ceph\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.958445 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.958473 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-inventory\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.958493 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.958518 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-1\") pod 
\"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.958626 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjfw4\" (UniqueName: \"kubernetes.io/projected/638b5e11-fd3f-4885-b9c0-463a8496bb74-kube-api-access-bjfw4\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.958724 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.958746 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.958760 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:51 crc 
kubenswrapper[4813]: I0219 20:40:51.958786 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.958870 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:51 crc kubenswrapper[4813]: I0219 20:40:51.958903 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.060279 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.060344 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-ceph\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.060365 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.060407 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-inventory\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.060426 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.060450 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 
20:40:52.060473 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjfw4\" (UniqueName: \"kubernetes.io/projected/638b5e11-fd3f-4885-b9c0-463a8496bb74-kube-api-access-bjfw4\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.060516 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.060540 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.060557 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.060585 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-ssh-key-openstack-cell1\") pod 
\"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.060647 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.060668 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.062511 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.062595 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.065404 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.066201 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.066442 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.066696 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.066947 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " 
pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.067013 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.067239 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.067259 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-inventory\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.067244 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-ceph\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.067737 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-combined-ca-bundle\") pod 
\"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.077472 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjfw4\" (UniqueName: \"kubernetes.io/projected/638b5e11-fd3f-4885-b9c0-463a8496bb74-kube-api-access-bjfw4\") pod \"nova-cell1-openstack-openstack-cell1-qp4gn\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.117943 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.719730 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-qp4gn"] Feb 19 20:40:52 crc kubenswrapper[4813]: W0219 20:40:52.722677 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod638b5e11_fd3f_4885_b9c0_463a8496bb74.slice/crio-91238fcc02f6843f4ddf58cd4ed1f47cb9cf9c236f8a28eb481c75d605affe2d WatchSource:0}: Error finding container 91238fcc02f6843f4ddf58cd4ed1f47cb9cf9c236f8a28eb481c75d605affe2d: Status 404 returned error can't find the container with id 91238fcc02f6843f4ddf58cd4ed1f47cb9cf9c236f8a28eb481c75d605affe2d Feb 19 20:40:52 crc kubenswrapper[4813]: I0219 20:40:52.726162 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:40:53 crc kubenswrapper[4813]: I0219 20:40:53.724781 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" event={"ID":"638b5e11-fd3f-4885-b9c0-463a8496bb74","Type":"ContainerStarted","Data":"91238fcc02f6843f4ddf58cd4ed1f47cb9cf9c236f8a28eb481c75d605affe2d"} Feb 19 
20:40:54 crc kubenswrapper[4813]: I0219 20:40:54.738042 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" event={"ID":"638b5e11-fd3f-4885-b9c0-463a8496bb74","Type":"ContainerStarted","Data":"c519027aa3c1f40e1176a9bdeceb73c02c48f8aa0b49c695865d22dd93986356"} Feb 19 20:40:54 crc kubenswrapper[4813]: I0219 20:40:54.770259 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" podStartSLOduration=2.799424 podStartE2EDuration="3.770224459s" podCreationTimestamp="2026-02-19 20:40:51 +0000 UTC" firstStartedPulling="2026-02-19 20:40:52.725846583 +0000 UTC m=+7871.951287124" lastFinishedPulling="2026-02-19 20:40:53.696647042 +0000 UTC m=+7872.922087583" observedRunningTime="2026-02-19 20:40:54.761509429 +0000 UTC m=+7873.986949970" watchObservedRunningTime="2026-02-19 20:40:54.770224459 +0000 UTC m=+7873.995665010" Feb 19 20:41:06 crc kubenswrapper[4813]: I0219 20:41:06.471837 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:41:06 crc kubenswrapper[4813]: E0219 20:41:06.473310 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:41:11 crc kubenswrapper[4813]: I0219 20:41:11.742249 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vjdnm"] Feb 19 20:41:11 crc kubenswrapper[4813]: I0219 20:41:11.745553 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:11 crc kubenswrapper[4813]: I0219 20:41:11.762652 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjdnm"] Feb 19 20:41:11 crc kubenswrapper[4813]: I0219 20:41:11.906215 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntvr6\" (UniqueName: \"kubernetes.io/projected/421e3305-062a-4f5c-9c3c-786ed2268833-kube-api-access-ntvr6\") pod \"certified-operators-vjdnm\" (UID: \"421e3305-062a-4f5c-9c3c-786ed2268833\") " pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:11 crc kubenswrapper[4813]: I0219 20:41:11.906326 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/421e3305-062a-4f5c-9c3c-786ed2268833-utilities\") pod \"certified-operators-vjdnm\" (UID: \"421e3305-062a-4f5c-9c3c-786ed2268833\") " pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:11 crc kubenswrapper[4813]: I0219 20:41:11.906478 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/421e3305-062a-4f5c-9c3c-786ed2268833-catalog-content\") pod \"certified-operators-vjdnm\" (UID: \"421e3305-062a-4f5c-9c3c-786ed2268833\") " pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:12 crc kubenswrapper[4813]: I0219 20:41:12.008272 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntvr6\" (UniqueName: \"kubernetes.io/projected/421e3305-062a-4f5c-9c3c-786ed2268833-kube-api-access-ntvr6\") pod \"certified-operators-vjdnm\" (UID: \"421e3305-062a-4f5c-9c3c-786ed2268833\") " pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:12 crc kubenswrapper[4813]: I0219 20:41:12.008365 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/421e3305-062a-4f5c-9c3c-786ed2268833-utilities\") pod \"certified-operators-vjdnm\" (UID: \"421e3305-062a-4f5c-9c3c-786ed2268833\") " pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:12 crc kubenswrapper[4813]: I0219 20:41:12.008478 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/421e3305-062a-4f5c-9c3c-786ed2268833-catalog-content\") pod \"certified-operators-vjdnm\" (UID: \"421e3305-062a-4f5c-9c3c-786ed2268833\") " pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:12 crc kubenswrapper[4813]: I0219 20:41:12.008942 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/421e3305-062a-4f5c-9c3c-786ed2268833-catalog-content\") pod \"certified-operators-vjdnm\" (UID: \"421e3305-062a-4f5c-9c3c-786ed2268833\") " pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:12 crc kubenswrapper[4813]: I0219 20:41:12.008990 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/421e3305-062a-4f5c-9c3c-786ed2268833-utilities\") pod \"certified-operators-vjdnm\" (UID: \"421e3305-062a-4f5c-9c3c-786ed2268833\") " pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:12 crc kubenswrapper[4813]: I0219 20:41:12.038535 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntvr6\" (UniqueName: \"kubernetes.io/projected/421e3305-062a-4f5c-9c3c-786ed2268833-kube-api-access-ntvr6\") pod \"certified-operators-vjdnm\" (UID: \"421e3305-062a-4f5c-9c3c-786ed2268833\") " pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:12 crc kubenswrapper[4813]: I0219 20:41:12.071097 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:13 crc kubenswrapper[4813]: I0219 20:41:13.204246 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjdnm"] Feb 19 20:41:13 crc kubenswrapper[4813]: I0219 20:41:13.922984 4813 generic.go:334] "Generic (PLEG): container finished" podID="421e3305-062a-4f5c-9c3c-786ed2268833" containerID="2a76e0c168fd1c29b2eedda94e7fb1b34ce978b456d816a7e56b9be042f7ca56" exitCode=0 Feb 19 20:41:13 crc kubenswrapper[4813]: I0219 20:41:13.923101 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjdnm" event={"ID":"421e3305-062a-4f5c-9c3c-786ed2268833","Type":"ContainerDied","Data":"2a76e0c168fd1c29b2eedda94e7fb1b34ce978b456d816a7e56b9be042f7ca56"} Feb 19 20:41:13 crc kubenswrapper[4813]: I0219 20:41:13.923496 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjdnm" event={"ID":"421e3305-062a-4f5c-9c3c-786ed2268833","Type":"ContainerStarted","Data":"ab54825bcb44c053f8316ee9f56793eaaa73296006698dc427d8de2a0c57aa4a"} Feb 19 20:41:14 crc kubenswrapper[4813]: I0219 20:41:14.937734 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjdnm" event={"ID":"421e3305-062a-4f5c-9c3c-786ed2268833","Type":"ContainerStarted","Data":"9b755da269ab3a0bdaa28209e8ee6a090260f9cd50d1f100af32d5ed54678bcd"} Feb 19 20:41:16 crc kubenswrapper[4813]: I0219 20:41:16.956720 4813 generic.go:334] "Generic (PLEG): container finished" podID="421e3305-062a-4f5c-9c3c-786ed2268833" containerID="9b755da269ab3a0bdaa28209e8ee6a090260f9cd50d1f100af32d5ed54678bcd" exitCode=0 Feb 19 20:41:16 crc kubenswrapper[4813]: I0219 20:41:16.957234 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjdnm" 
event={"ID":"421e3305-062a-4f5c-9c3c-786ed2268833","Type":"ContainerDied","Data":"9b755da269ab3a0bdaa28209e8ee6a090260f9cd50d1f100af32d5ed54678bcd"} Feb 19 20:41:17 crc kubenswrapper[4813]: I0219 20:41:17.966184 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjdnm" event={"ID":"421e3305-062a-4f5c-9c3c-786ed2268833","Type":"ContainerStarted","Data":"73e2dcf03b5ff950c2b375246609345e9856ff139cce5dcefbd332d25d44cbbd"} Feb 19 20:41:17 crc kubenswrapper[4813]: I0219 20:41:17.999132 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vjdnm" podStartSLOduration=3.5327531309999998 podStartE2EDuration="6.999113947s" podCreationTimestamp="2026-02-19 20:41:11 +0000 UTC" firstStartedPulling="2026-02-19 20:41:13.925433712 +0000 UTC m=+7893.150874253" lastFinishedPulling="2026-02-19 20:41:17.391794528 +0000 UTC m=+7896.617235069" observedRunningTime="2026-02-19 20:41:17.992340237 +0000 UTC m=+7897.217780798" watchObservedRunningTime="2026-02-19 20:41:17.999113947 +0000 UTC m=+7897.224554488" Feb 19 20:41:21 crc kubenswrapper[4813]: I0219 20:41:21.491973 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:41:21 crc kubenswrapper[4813]: E0219 20:41:21.493115 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:41:22 crc kubenswrapper[4813]: I0219 20:41:22.072330 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:22 crc 
kubenswrapper[4813]: I0219 20:41:22.072725 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:22 crc kubenswrapper[4813]: I0219 20:41:22.119374 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:23 crc kubenswrapper[4813]: I0219 20:41:23.091847 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:23 crc kubenswrapper[4813]: I0219 20:41:23.177313 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vjdnm"] Feb 19 20:41:25 crc kubenswrapper[4813]: I0219 20:41:25.041236 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vjdnm" podUID="421e3305-062a-4f5c-9c3c-786ed2268833" containerName="registry-server" containerID="cri-o://73e2dcf03b5ff950c2b375246609345e9856ff139cce5dcefbd332d25d44cbbd" gracePeriod=2 Feb 19 20:41:25 crc kubenswrapper[4813]: I0219 20:41:25.530185 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:25 crc kubenswrapper[4813]: I0219 20:41:25.617402 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntvr6\" (UniqueName: \"kubernetes.io/projected/421e3305-062a-4f5c-9c3c-786ed2268833-kube-api-access-ntvr6\") pod \"421e3305-062a-4f5c-9c3c-786ed2268833\" (UID: \"421e3305-062a-4f5c-9c3c-786ed2268833\") " Feb 19 20:41:25 crc kubenswrapper[4813]: I0219 20:41:25.617491 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/421e3305-062a-4f5c-9c3c-786ed2268833-catalog-content\") pod \"421e3305-062a-4f5c-9c3c-786ed2268833\" (UID: \"421e3305-062a-4f5c-9c3c-786ed2268833\") " Feb 19 20:41:25 crc kubenswrapper[4813]: I0219 20:41:25.617640 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/421e3305-062a-4f5c-9c3c-786ed2268833-utilities\") pod \"421e3305-062a-4f5c-9c3c-786ed2268833\" (UID: \"421e3305-062a-4f5c-9c3c-786ed2268833\") " Feb 19 20:41:25 crc kubenswrapper[4813]: I0219 20:41:25.618614 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/421e3305-062a-4f5c-9c3c-786ed2268833-utilities" (OuterVolumeSpecName: "utilities") pod "421e3305-062a-4f5c-9c3c-786ed2268833" (UID: "421e3305-062a-4f5c-9c3c-786ed2268833"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:41:25 crc kubenswrapper[4813]: I0219 20:41:25.622446 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/421e3305-062a-4f5c-9c3c-786ed2268833-kube-api-access-ntvr6" (OuterVolumeSpecName: "kube-api-access-ntvr6") pod "421e3305-062a-4f5c-9c3c-786ed2268833" (UID: "421e3305-062a-4f5c-9c3c-786ed2268833"). InnerVolumeSpecName "kube-api-access-ntvr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:41:25 crc kubenswrapper[4813]: I0219 20:41:25.684367 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/421e3305-062a-4f5c-9c3c-786ed2268833-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "421e3305-062a-4f5c-9c3c-786ed2268833" (UID: "421e3305-062a-4f5c-9c3c-786ed2268833"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:41:25 crc kubenswrapper[4813]: I0219 20:41:25.719763 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntvr6\" (UniqueName: \"kubernetes.io/projected/421e3305-062a-4f5c-9c3c-786ed2268833-kube-api-access-ntvr6\") on node \"crc\" DevicePath \"\"" Feb 19 20:41:25 crc kubenswrapper[4813]: I0219 20:41:25.719811 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/421e3305-062a-4f5c-9c3c-786ed2268833-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:41:25 crc kubenswrapper[4813]: I0219 20:41:25.719823 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/421e3305-062a-4f5c-9c3c-786ed2268833-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:41:26 crc kubenswrapper[4813]: I0219 20:41:26.059041 4813 generic.go:334] "Generic (PLEG): container finished" podID="421e3305-062a-4f5c-9c3c-786ed2268833" containerID="73e2dcf03b5ff950c2b375246609345e9856ff139cce5dcefbd332d25d44cbbd" exitCode=0 Feb 19 20:41:26 crc kubenswrapper[4813]: I0219 20:41:26.059110 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjdnm" event={"ID":"421e3305-062a-4f5c-9c3c-786ed2268833","Type":"ContainerDied","Data":"73e2dcf03b5ff950c2b375246609345e9856ff139cce5dcefbd332d25d44cbbd"} Feb 19 20:41:26 crc kubenswrapper[4813]: I0219 20:41:26.059162 4813 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-vjdnm" event={"ID":"421e3305-062a-4f5c-9c3c-786ed2268833","Type":"ContainerDied","Data":"ab54825bcb44c053f8316ee9f56793eaaa73296006698dc427d8de2a0c57aa4a"} Feb 19 20:41:26 crc kubenswrapper[4813]: I0219 20:41:26.059197 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vjdnm" Feb 19 20:41:26 crc kubenswrapper[4813]: I0219 20:41:26.059218 4813 scope.go:117] "RemoveContainer" containerID="73e2dcf03b5ff950c2b375246609345e9856ff139cce5dcefbd332d25d44cbbd" Feb 19 20:41:26 crc kubenswrapper[4813]: I0219 20:41:26.089381 4813 scope.go:117] "RemoveContainer" containerID="9b755da269ab3a0bdaa28209e8ee6a090260f9cd50d1f100af32d5ed54678bcd" Feb 19 20:41:26 crc kubenswrapper[4813]: I0219 20:41:26.141346 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vjdnm"] Feb 19 20:41:26 crc kubenswrapper[4813]: I0219 20:41:26.156621 4813 scope.go:117] "RemoveContainer" containerID="2a76e0c168fd1c29b2eedda94e7fb1b34ce978b456d816a7e56b9be042f7ca56" Feb 19 20:41:26 crc kubenswrapper[4813]: I0219 20:41:26.159576 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vjdnm"] Feb 19 20:41:26 crc kubenswrapper[4813]: I0219 20:41:26.190952 4813 scope.go:117] "RemoveContainer" containerID="73e2dcf03b5ff950c2b375246609345e9856ff139cce5dcefbd332d25d44cbbd" Feb 19 20:41:26 crc kubenswrapper[4813]: E0219 20:41:26.191454 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73e2dcf03b5ff950c2b375246609345e9856ff139cce5dcefbd332d25d44cbbd\": container with ID starting with 73e2dcf03b5ff950c2b375246609345e9856ff139cce5dcefbd332d25d44cbbd not found: ID does not exist" containerID="73e2dcf03b5ff950c2b375246609345e9856ff139cce5dcefbd332d25d44cbbd" Feb 19 20:41:26 crc kubenswrapper[4813]: I0219 
20:41:26.191509 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73e2dcf03b5ff950c2b375246609345e9856ff139cce5dcefbd332d25d44cbbd"} err="failed to get container status \"73e2dcf03b5ff950c2b375246609345e9856ff139cce5dcefbd332d25d44cbbd\": rpc error: code = NotFound desc = could not find container \"73e2dcf03b5ff950c2b375246609345e9856ff139cce5dcefbd332d25d44cbbd\": container with ID starting with 73e2dcf03b5ff950c2b375246609345e9856ff139cce5dcefbd332d25d44cbbd not found: ID does not exist" Feb 19 20:41:26 crc kubenswrapper[4813]: I0219 20:41:26.191542 4813 scope.go:117] "RemoveContainer" containerID="9b755da269ab3a0bdaa28209e8ee6a090260f9cd50d1f100af32d5ed54678bcd" Feb 19 20:41:26 crc kubenswrapper[4813]: E0219 20:41:26.192049 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b755da269ab3a0bdaa28209e8ee6a090260f9cd50d1f100af32d5ed54678bcd\": container with ID starting with 9b755da269ab3a0bdaa28209e8ee6a090260f9cd50d1f100af32d5ed54678bcd not found: ID does not exist" containerID="9b755da269ab3a0bdaa28209e8ee6a090260f9cd50d1f100af32d5ed54678bcd" Feb 19 20:41:26 crc kubenswrapper[4813]: I0219 20:41:26.192086 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b755da269ab3a0bdaa28209e8ee6a090260f9cd50d1f100af32d5ed54678bcd"} err="failed to get container status \"9b755da269ab3a0bdaa28209e8ee6a090260f9cd50d1f100af32d5ed54678bcd\": rpc error: code = NotFound desc = could not find container \"9b755da269ab3a0bdaa28209e8ee6a090260f9cd50d1f100af32d5ed54678bcd\": container with ID starting with 9b755da269ab3a0bdaa28209e8ee6a090260f9cd50d1f100af32d5ed54678bcd not found: ID does not exist" Feb 19 20:41:26 crc kubenswrapper[4813]: I0219 20:41:26.192106 4813 scope.go:117] "RemoveContainer" containerID="2a76e0c168fd1c29b2eedda94e7fb1b34ce978b456d816a7e56b9be042f7ca56" Feb 19 20:41:26 crc 
kubenswrapper[4813]: E0219 20:41:26.192407 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a76e0c168fd1c29b2eedda94e7fb1b34ce978b456d816a7e56b9be042f7ca56\": container with ID starting with 2a76e0c168fd1c29b2eedda94e7fb1b34ce978b456d816a7e56b9be042f7ca56 not found: ID does not exist" containerID="2a76e0c168fd1c29b2eedda94e7fb1b34ce978b456d816a7e56b9be042f7ca56" Feb 19 20:41:26 crc kubenswrapper[4813]: I0219 20:41:26.192442 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a76e0c168fd1c29b2eedda94e7fb1b34ce978b456d816a7e56b9be042f7ca56"} err="failed to get container status \"2a76e0c168fd1c29b2eedda94e7fb1b34ce978b456d816a7e56b9be042f7ca56\": rpc error: code = NotFound desc = could not find container \"2a76e0c168fd1c29b2eedda94e7fb1b34ce978b456d816a7e56b9be042f7ca56\": container with ID starting with 2a76e0c168fd1c29b2eedda94e7fb1b34ce978b456d816a7e56b9be042f7ca56 not found: ID does not exist" Feb 19 20:41:27 crc kubenswrapper[4813]: I0219 20:41:27.482243 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="421e3305-062a-4f5c-9c3c-786ed2268833" path="/var/lib/kubelet/pods/421e3305-062a-4f5c-9c3c-786ed2268833/volumes" Feb 19 20:41:33 crc kubenswrapper[4813]: I0219 20:41:33.479705 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:41:33 crc kubenswrapper[4813]: E0219 20:41:33.482067 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:41:44 crc 
kubenswrapper[4813]: I0219 20:41:44.471409 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:41:44 crc kubenswrapper[4813]: E0219 20:41:44.472244 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:41:57 crc kubenswrapper[4813]: I0219 20:41:57.473144 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:41:57 crc kubenswrapper[4813]: E0219 20:41:57.474514 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:42:12 crc kubenswrapper[4813]: I0219 20:42:12.471056 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:42:12 crc kubenswrapper[4813]: E0219 20:42:12.472142 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 
19 20:42:26 crc kubenswrapper[4813]: I0219 20:42:26.472802 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:42:26 crc kubenswrapper[4813]: E0219 20:42:26.474144 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:42:38 crc kubenswrapper[4813]: I0219 20:42:38.471708 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:42:38 crc kubenswrapper[4813]: E0219 20:42:38.474133 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:42:52 crc kubenswrapper[4813]: I0219 20:42:52.472760 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:42:52 crc kubenswrapper[4813]: E0219 20:42:52.474089 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" 
podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:43:05 crc kubenswrapper[4813]: I0219 20:43:05.472596 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:43:05 crc kubenswrapper[4813]: E0219 20:43:05.473725 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:43:19 crc kubenswrapper[4813]: I0219 20:43:19.473342 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:43:19 crc kubenswrapper[4813]: E0219 20:43:19.474692 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:43:30 crc kubenswrapper[4813]: I0219 20:43:30.472717 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:43:30 crc kubenswrapper[4813]: E0219 20:43:30.473896 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:43:37 crc kubenswrapper[4813]: I0219 20:43:37.873126 4813 generic.go:334] "Generic (PLEG): container finished" podID="638b5e11-fd3f-4885-b9c0-463a8496bb74" containerID="c519027aa3c1f40e1176a9bdeceb73c02c48f8aa0b49c695865d22dd93986356" exitCode=0 Feb 19 20:43:37 crc kubenswrapper[4813]: I0219 20:43:37.873217 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" event={"ID":"638b5e11-fd3f-4885-b9c0-463a8496bb74","Type":"ContainerDied","Data":"c519027aa3c1f40e1176a9bdeceb73c02c48f8aa0b49c695865d22dd93986356"} Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.352270 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.461785 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-migration-ssh-key-1\") pod \"638b5e11-fd3f-4885-b9c0-463a8496bb74\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.461847 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-ceph\") pod \"638b5e11-fd3f-4885-b9c0-463a8496bb74\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.461876 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-1\") pod \"638b5e11-fd3f-4885-b9c0-463a8496bb74\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " Feb 19 20:43:39 crc 
kubenswrapper[4813]: I0219 20:43:39.461907 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-3\") pod \"638b5e11-fd3f-4885-b9c0-463a8496bb74\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.462022 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-combined-ca-bundle\") pod \"638b5e11-fd3f-4885-b9c0-463a8496bb74\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.462100 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cells-global-config-1\") pod \"638b5e11-fd3f-4885-b9c0-463a8496bb74\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.462137 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjfw4\" (UniqueName: \"kubernetes.io/projected/638b5e11-fd3f-4885-b9c0-463a8496bb74-kube-api-access-bjfw4\") pod \"638b5e11-fd3f-4885-b9c0-463a8496bb74\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.462185 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-ssh-key-openstack-cell1\") pod \"638b5e11-fd3f-4885-b9c0-463a8496bb74\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") " Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.462262 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-0\") pod \"638b5e11-fd3f-4885-b9c0-463a8496bb74\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") "
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.462300 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-2\") pod \"638b5e11-fd3f-4885-b9c0-463a8496bb74\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") "
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.462836 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-inventory\") pod \"638b5e11-fd3f-4885-b9c0-463a8496bb74\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") "
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.462885 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cells-global-config-0\") pod \"638b5e11-fd3f-4885-b9c0-463a8496bb74\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") "
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.462929 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-migration-ssh-key-0\") pod \"638b5e11-fd3f-4885-b9c0-463a8496bb74\" (UID: \"638b5e11-fd3f-4885-b9c0-463a8496bb74\") "
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.467677 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638b5e11-fd3f-4885-b9c0-463a8496bb74-kube-api-access-bjfw4" (OuterVolumeSpecName: "kube-api-access-bjfw4") pod "638b5e11-fd3f-4885-b9c0-463a8496bb74" (UID: "638b5e11-fd3f-4885-b9c0-463a8496bb74"). InnerVolumeSpecName "kube-api-access-bjfw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.467696 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-ceph" (OuterVolumeSpecName: "ceph") pod "638b5e11-fd3f-4885-b9c0-463a8496bb74" (UID: "638b5e11-fd3f-4885-b9c0-463a8496bb74"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.470205 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "638b5e11-fd3f-4885-b9c0-463a8496bb74" (UID: "638b5e11-fd3f-4885-b9c0-463a8496bb74"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.492753 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "638b5e11-fd3f-4885-b9c0-463a8496bb74" (UID: "638b5e11-fd3f-4885-b9c0-463a8496bb74"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.493261 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "638b5e11-fd3f-4885-b9c0-463a8496bb74" (UID: "638b5e11-fd3f-4885-b9c0-463a8496bb74"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.497695 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "638b5e11-fd3f-4885-b9c0-463a8496bb74" (UID: "638b5e11-fd3f-4885-b9c0-463a8496bb74"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.498444 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "638b5e11-fd3f-4885-b9c0-463a8496bb74" (UID: "638b5e11-fd3f-4885-b9c0-463a8496bb74"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.500287 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "638b5e11-fd3f-4885-b9c0-463a8496bb74" (UID: "638b5e11-fd3f-4885-b9c0-463a8496bb74"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.504138 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "638b5e11-fd3f-4885-b9c0-463a8496bb74" (UID: "638b5e11-fd3f-4885-b9c0-463a8496bb74"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.506008 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-inventory" (OuterVolumeSpecName: "inventory") pod "638b5e11-fd3f-4885-b9c0-463a8496bb74" (UID: "638b5e11-fd3f-4885-b9c0-463a8496bb74"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.514342 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "638b5e11-fd3f-4885-b9c0-463a8496bb74" (UID: "638b5e11-fd3f-4885-b9c0-463a8496bb74"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.515506 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "638b5e11-fd3f-4885-b9c0-463a8496bb74" (UID: "638b5e11-fd3f-4885-b9c0-463a8496bb74"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.518215 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "638b5e11-fd3f-4885-b9c0-463a8496bb74" (UID: "638b5e11-fd3f-4885-b9c0-463a8496bb74"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.578209 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.588831 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.588868 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.588883 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.588898 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.588911 4813 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.588922 4813 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.588933 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-ceph\") on node \"crc\" DevicePath \"\""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.588946 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.588973 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.588986 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.588999 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/638b5e11-fd3f-4885-b9c0-463a8496bb74-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.589013 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjfw4\" (UniqueName: \"kubernetes.io/projected/638b5e11-fd3f-4885-b9c0-463a8496bb74-kube-api-access-bjfw4\") on node \"crc\" DevicePath \"\""
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.901899 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn" event={"ID":"638b5e11-fd3f-4885-b9c0-463a8496bb74","Type":"ContainerDied","Data":"91238fcc02f6843f4ddf58cd4ed1f47cb9cf9c236f8a28eb481c75d605affe2d"}
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.902287 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91238fcc02f6843f4ddf58cd4ed1f47cb9cf9c236f8a28eb481c75d605affe2d"
Feb 19 20:43:39 crc kubenswrapper[4813]: I0219 20:43:39.902032 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-qp4gn"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.002309 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-9snn7"]
Feb 19 20:43:40 crc kubenswrapper[4813]: E0219 20:43:40.002729 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638b5e11-fd3f-4885-b9c0-463a8496bb74" containerName="nova-cell1-openstack-openstack-cell1"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.002750 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="638b5e11-fd3f-4885-b9c0-463a8496bb74" containerName="nova-cell1-openstack-openstack-cell1"
Feb 19 20:43:40 crc kubenswrapper[4813]: E0219 20:43:40.002768 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="421e3305-062a-4f5c-9c3c-786ed2268833" containerName="registry-server"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.002776 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="421e3305-062a-4f5c-9c3c-786ed2268833" containerName="registry-server"
Feb 19 20:43:40 crc kubenswrapper[4813]: E0219 20:43:40.002827 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="421e3305-062a-4f5c-9c3c-786ed2268833" containerName="extract-content"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.002837 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="421e3305-062a-4f5c-9c3c-786ed2268833" containerName="extract-content"
Feb 19 20:43:40 crc kubenswrapper[4813]: E0219 20:43:40.002856 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="421e3305-062a-4f5c-9c3c-786ed2268833" containerName="extract-utilities"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.002861 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="421e3305-062a-4f5c-9c3c-786ed2268833" containerName="extract-utilities"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.003056 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="638b5e11-fd3f-4885-b9c0-463a8496bb74" containerName="nova-cell1-openstack-openstack-cell1"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.003076 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="421e3305-062a-4f5c-9c3c-786ed2268833" containerName="registry-server"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.003806 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.006306 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.006319 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.006464 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.006608 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.006751 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.027569 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-9snn7"]
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.099481 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.099789 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.099889 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-inventory\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.099964 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceph\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.100018 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.100041 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf2ps\" (UniqueName: \"kubernetes.io/projected/f75d7c97-6dce-40ee-a448-abb566750887-kube-api-access-cf2ps\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.100057 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.100095 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.202049 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.202113 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.202197 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.202226 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-inventory\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.202280 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceph\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.202324 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.202346 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf2ps\" (UniqueName: \"kubernetes.io/projected/f75d7c97-6dce-40ee-a448-abb566750887-kube-api-access-cf2ps\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.202363 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.207910 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.208378 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.208919 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-inventory\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.208919 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.209260 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.209520 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceph\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.210563 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.228650 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf2ps\" (UniqueName: \"kubernetes.io/projected/f75d7c97-6dce-40ee-a448-abb566750887-kube-api-access-cf2ps\") pod \"telemetry-openstack-openstack-cell1-9snn7\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.327011 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-9snn7"
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.763459 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-9snn7"]
Feb 19 20:43:40 crc kubenswrapper[4813]: I0219 20:43:40.914786 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-9snn7" event={"ID":"f75d7c97-6dce-40ee-a448-abb566750887","Type":"ContainerStarted","Data":"2ed087d26ca2239f609e5868153f727b3f314743368d629a4cd4212b410fe6a0"}
Feb 19 20:43:41 crc kubenswrapper[4813]: I0219 20:43:41.481780 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3"
Feb 19 20:43:41 crc kubenswrapper[4813]: E0219 20:43:41.482327 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 20:43:41 crc kubenswrapper[4813]: I0219 20:43:41.925869 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-9snn7" event={"ID":"f75d7c97-6dce-40ee-a448-abb566750887","Type":"ContainerStarted","Data":"a3a85a9f2fc5b7560bcacdaebe50740e33ec42beff799f0e2952e3866711b6d7"}
Feb 19 20:43:41 crc kubenswrapper[4813]: I0219 20:43:41.950200 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-9snn7" podStartSLOduration=2.49504644 podStartE2EDuration="2.950180139s" podCreationTimestamp="2026-02-19 20:43:39 +0000 UTC" firstStartedPulling="2026-02-19 20:43:40.76576755 +0000 UTC m=+8039.991208091" lastFinishedPulling="2026-02-19 20:43:41.220901249 +0000 UTC m=+8040.446341790" observedRunningTime="2026-02-19 20:43:41.948447204 +0000 UTC m=+8041.173887745" watchObservedRunningTime="2026-02-19 20:43:41.950180139 +0000 UTC m=+8041.175620680"
Feb 19 20:43:54 crc kubenswrapper[4813]: I0219 20:43:54.472198 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3"
Feb 19 20:43:54 crc kubenswrapper[4813]: E0219 20:43:54.473094 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 20:44:05 crc kubenswrapper[4813]: I0219 20:44:05.472012 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3"
Feb 19 20:44:05 crc kubenswrapper[4813]: E0219 20:44:05.473487 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 20:44:18 crc kubenswrapper[4813]: I0219 20:44:18.472404 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3"
Feb 19 20:44:18 crc kubenswrapper[4813]: E0219 20:44:18.473320 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70"
Feb 19 20:44:32 crc kubenswrapper[4813]: I0219 20:44:32.471510 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3"
Feb 19 20:44:33 crc kubenswrapper[4813]: I0219 20:44:33.444654 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"9a24a471de296b18d9e542deb93ef67e933265bda54aedaf108fe07429fd6a27"}
Feb 19 20:44:55 crc kubenswrapper[4813]: I0219 20:44:55.542990 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqb9"]
Feb 19 20:44:55 crc kubenswrapper[4813]: I0219 20:44:55.547274 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjqb9"
Feb 19 20:44:55 crc kubenswrapper[4813]: I0219 20:44:55.564999 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqb9"]
Feb 19 20:44:55 crc kubenswrapper[4813]: I0219 20:44:55.629271 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff019584-07d6-453e-b2e7-918a43b1b5d0-utilities\") pod \"redhat-marketplace-xjqb9\" (UID: \"ff019584-07d6-453e-b2e7-918a43b1b5d0\") " pod="openshift-marketplace/redhat-marketplace-xjqb9"
Feb 19 20:44:55 crc kubenswrapper[4813]: I0219 20:44:55.629595 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mwgr\" (UniqueName: \"kubernetes.io/projected/ff019584-07d6-453e-b2e7-918a43b1b5d0-kube-api-access-7mwgr\") pod \"redhat-marketplace-xjqb9\" (UID: \"ff019584-07d6-453e-b2e7-918a43b1b5d0\") " pod="openshift-marketplace/redhat-marketplace-xjqb9"
Feb 19 20:44:55 crc kubenswrapper[4813]: I0219 20:44:55.629639 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff019584-07d6-453e-b2e7-918a43b1b5d0-catalog-content\") pod \"redhat-marketplace-xjqb9\" (UID: \"ff019584-07d6-453e-b2e7-918a43b1b5d0\") " pod="openshift-marketplace/redhat-marketplace-xjqb9"
Feb 19 20:44:55 crc kubenswrapper[4813]: I0219 20:44:55.732053 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff019584-07d6-453e-b2e7-918a43b1b5d0-utilities\") pod \"redhat-marketplace-xjqb9\" (UID: \"ff019584-07d6-453e-b2e7-918a43b1b5d0\") " pod="openshift-marketplace/redhat-marketplace-xjqb9"
Feb 19 20:44:55 crc kubenswrapper[4813]: I0219 20:44:55.732288 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mwgr\" (UniqueName: \"kubernetes.io/projected/ff019584-07d6-453e-b2e7-918a43b1b5d0-kube-api-access-7mwgr\") pod \"redhat-marketplace-xjqb9\" (UID: \"ff019584-07d6-453e-b2e7-918a43b1b5d0\") " pod="openshift-marketplace/redhat-marketplace-xjqb9"
Feb 19 20:44:55 crc kubenswrapper[4813]: I0219 20:44:55.732326 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff019584-07d6-453e-b2e7-918a43b1b5d0-catalog-content\") pod \"redhat-marketplace-xjqb9\" (UID: \"ff019584-07d6-453e-b2e7-918a43b1b5d0\") " pod="openshift-marketplace/redhat-marketplace-xjqb9"
Feb 19 20:44:55 crc kubenswrapper[4813]: I0219 20:44:55.732611 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff019584-07d6-453e-b2e7-918a43b1b5d0-utilities\") pod \"redhat-marketplace-xjqb9\" (UID: \"ff019584-07d6-453e-b2e7-918a43b1b5d0\") " pod="openshift-marketplace/redhat-marketplace-xjqb9"
Feb 19 20:44:55 crc kubenswrapper[4813]: I0219 20:44:55.733010 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff019584-07d6-453e-b2e7-918a43b1b5d0-catalog-content\") pod \"redhat-marketplace-xjqb9\" (UID: \"ff019584-07d6-453e-b2e7-918a43b1b5d0\") " pod="openshift-marketplace/redhat-marketplace-xjqb9"
Feb 19 20:44:55 crc kubenswrapper[4813]: I0219 20:44:55.804196 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mwgr\" (UniqueName: \"kubernetes.io/projected/ff019584-07d6-453e-b2e7-918a43b1b5d0-kube-api-access-7mwgr\") pod \"redhat-marketplace-xjqb9\" (UID: \"ff019584-07d6-453e-b2e7-918a43b1b5d0\") " pod="openshift-marketplace/redhat-marketplace-xjqb9"
Feb 19 20:44:55 crc kubenswrapper[4813]: I0219 20:44:55.875268 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjqb9"
Feb 19 20:44:56 crc kubenswrapper[4813]: I0219 20:44:56.451779 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqb9"]
Feb 19 20:44:56 crc kubenswrapper[4813]: I0219 20:44:56.691289 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqb9" event={"ID":"ff019584-07d6-453e-b2e7-918a43b1b5d0","Type":"ContainerStarted","Data":"adce4cc757aec48b04df601ec35d44145754033ca9f2b0a5e0aafe29ecb6cf9c"}
Feb 19 20:44:57 crc kubenswrapper[4813]: I0219 20:44:57.704535 4813 generic.go:334] "Generic (PLEG): container finished" podID="ff019584-07d6-453e-b2e7-918a43b1b5d0" containerID="919e5d34b15659ca3338ca2c3d510eeafaf8c31c24365a69251f26ac55829ebf" exitCode=0
Feb 19 20:44:57 crc kubenswrapper[4813]: I0219 20:44:57.704582 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqb9" event={"ID":"ff019584-07d6-453e-b2e7-918a43b1b5d0","Type":"ContainerDied","Data":"919e5d34b15659ca3338ca2c3d510eeafaf8c31c24365a69251f26ac55829ebf"}
Feb 19 20:44:58 crc kubenswrapper[4813]: I0219 20:44:58.144204 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ddtzv"]
Feb 19 20:44:58 crc kubenswrapper[4813]: I0219 20:44:58.147116 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ddtzv"
Feb 19 20:44:58 crc kubenswrapper[4813]: I0219 20:44:58.158174 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ddtzv"]
Feb 19 20:44:58 crc kubenswrapper[4813]: I0219 20:44:58.286377 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzplp\" (UniqueName: \"kubernetes.io/projected/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-kube-api-access-qzplp\") pod \"community-operators-ddtzv\" (UID: \"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4\") " pod="openshift-marketplace/community-operators-ddtzv"
Feb 19 20:44:58 crc kubenswrapper[4813]: I0219 20:44:58.286600 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-utilities\") pod \"community-operators-ddtzv\" (UID: \"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4\") " pod="openshift-marketplace/community-operators-ddtzv"
Feb 19 20:44:58 crc kubenswrapper[4813]: I0219 20:44:58.286714 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-catalog-content\") pod \"community-operators-ddtzv\" (UID: \"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4\") " pod="openshift-marketplace/community-operators-ddtzv"
Feb 19 20:44:58 crc kubenswrapper[4813]: I0219 20:44:58.388181 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzplp\" (UniqueName: \"kubernetes.io/projected/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-kube-api-access-qzplp\") pod \"community-operators-ddtzv\" (UID: \"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4\") " pod="openshift-marketplace/community-operators-ddtzv"
Feb 19 20:44:58 crc kubenswrapper[4813]: I0219 20:44:58.388636 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-utilities\") pod \"community-operators-ddtzv\" (UID: \"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4\") " pod="openshift-marketplace/community-operators-ddtzv"
Feb 19 20:44:58 crc kubenswrapper[4813]: I0219 20:44:58.388715 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-catalog-content\") pod \"community-operators-ddtzv\" (UID: \"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4\") " pod="openshift-marketplace/community-operators-ddtzv"
Feb 19 20:44:58 crc kubenswrapper[4813]: I0219 20:44:58.389153 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-utilities\") pod \"community-operators-ddtzv\" (UID: \"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4\") " pod="openshift-marketplace/community-operators-ddtzv"
Feb 19 20:44:58 crc kubenswrapper[4813]: I0219 20:44:58.389264 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-catalog-content\") pod \"community-operators-ddtzv\" (UID: \"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4\") " pod="openshift-marketplace/community-operators-ddtzv"
Feb 19 20:44:58 crc kubenswrapper[4813]: I0219 20:44:58.409241 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzplp\" (UniqueName: \"kubernetes.io/projected/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-kube-api-access-qzplp\") pod \"community-operators-ddtzv\" (UID: \"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4\") " pod="openshift-marketplace/community-operators-ddtzv"
Feb 19 20:44:58 crc kubenswrapper[4813]: I0219 20:44:58.478066 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ddtzv" Feb 19 20:44:58 crc kubenswrapper[4813]: I0219 20:44:58.717624 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqb9" event={"ID":"ff019584-07d6-453e-b2e7-918a43b1b5d0","Type":"ContainerStarted","Data":"b0c52d68e78fee4041f2fb4000fd333bd26709f8f9e48b96e4c682a03eaac5cb"} Feb 19 20:44:59 crc kubenswrapper[4813]: I0219 20:44:59.038616 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ddtzv"] Feb 19 20:44:59 crc kubenswrapper[4813]: I0219 20:44:59.728265 4813 generic.go:334] "Generic (PLEG): container finished" podID="ff019584-07d6-453e-b2e7-918a43b1b5d0" containerID="b0c52d68e78fee4041f2fb4000fd333bd26709f8f9e48b96e4c682a03eaac5cb" exitCode=0 Feb 19 20:44:59 crc kubenswrapper[4813]: I0219 20:44:59.728346 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqb9" event={"ID":"ff019584-07d6-453e-b2e7-918a43b1b5d0","Type":"ContainerDied","Data":"b0c52d68e78fee4041f2fb4000fd333bd26709f8f9e48b96e4c682a03eaac5cb"} Feb 19 20:44:59 crc kubenswrapper[4813]: I0219 20:44:59.730712 4813 generic.go:334] "Generic (PLEG): container finished" podID="3a2cad88-cbd2-4761-a8f1-11b4643f5cf4" containerID="f2bbaa88980d42c915b4e8b3b485953baaf1c9b1d336f8ccf774afc22604b126" exitCode=0 Feb 19 20:44:59 crc kubenswrapper[4813]: I0219 20:44:59.730813 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddtzv" event={"ID":"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4","Type":"ContainerDied","Data":"f2bbaa88980d42c915b4e8b3b485953baaf1c9b1d336f8ccf774afc22604b126"} Feb 19 20:44:59 crc kubenswrapper[4813]: I0219 20:44:59.730850 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddtzv" 
event={"ID":"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4","Type":"ContainerStarted","Data":"a211f51f8422e9cd725bc62179bb435251118f355f547c426eafb92cd9513274"} Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.189976 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn"] Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.191581 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.194002 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.194080 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.202251 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn"] Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.326919 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9fpg\" (UniqueName: \"kubernetes.io/projected/d568c016-adf2-462d-a29e-333062b97008-kube-api-access-l9fpg\") pod \"collect-profiles-29525565-p8tfn\" (UID: \"d568c016-adf2-462d-a29e-333062b97008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.327168 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d568c016-adf2-462d-a29e-333062b97008-config-volume\") pod \"collect-profiles-29525565-p8tfn\" (UID: \"d568c016-adf2-462d-a29e-333062b97008\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.327490 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d568c016-adf2-462d-a29e-333062b97008-secret-volume\") pod \"collect-profiles-29525565-p8tfn\" (UID: \"d568c016-adf2-462d-a29e-333062b97008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.429006 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9fpg\" (UniqueName: \"kubernetes.io/projected/d568c016-adf2-462d-a29e-333062b97008-kube-api-access-l9fpg\") pod \"collect-profiles-29525565-p8tfn\" (UID: \"d568c016-adf2-462d-a29e-333062b97008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.429101 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d568c016-adf2-462d-a29e-333062b97008-config-volume\") pod \"collect-profiles-29525565-p8tfn\" (UID: \"d568c016-adf2-462d-a29e-333062b97008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.429199 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d568c016-adf2-462d-a29e-333062b97008-secret-volume\") pod \"collect-profiles-29525565-p8tfn\" (UID: \"d568c016-adf2-462d-a29e-333062b97008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.430200 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d568c016-adf2-462d-a29e-333062b97008-config-volume\") pod \"collect-profiles-29525565-p8tfn\" (UID: \"d568c016-adf2-462d-a29e-333062b97008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.436197 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d568c016-adf2-462d-a29e-333062b97008-secret-volume\") pod \"collect-profiles-29525565-p8tfn\" (UID: \"d568c016-adf2-462d-a29e-333062b97008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.451239 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9fpg\" (UniqueName: \"kubernetes.io/projected/d568c016-adf2-462d-a29e-333062b97008-kube-api-access-l9fpg\") pod \"collect-profiles-29525565-p8tfn\" (UID: \"d568c016-adf2-462d-a29e-333062b97008\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.517026 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.751453 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqb9" event={"ID":"ff019584-07d6-453e-b2e7-918a43b1b5d0","Type":"ContainerStarted","Data":"c5c08b86fd5000822da5237f3674a057ebebb74fb79f5c4853933a351ec64514"} Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.753931 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddtzv" event={"ID":"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4","Type":"ContainerStarted","Data":"e1879768ce557abcd8eb055f9d0c7cac4e29598c170ebca63ee4f977bdcd43ff"} Feb 19 20:45:00 crc kubenswrapper[4813]: I0219 20:45:00.804215 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xjqb9" podStartSLOduration=3.286736754 podStartE2EDuration="5.804197186s" podCreationTimestamp="2026-02-19 20:44:55 +0000 UTC" firstStartedPulling="2026-02-19 20:44:57.707216553 +0000 UTC m=+8116.932657084" lastFinishedPulling="2026-02-19 20:45:00.224676975 +0000 UTC m=+8119.450117516" observedRunningTime="2026-02-19 20:45:00.773621607 +0000 UTC m=+8119.999062148" watchObservedRunningTime="2026-02-19 20:45:00.804197186 +0000 UTC m=+8120.029637727" Feb 19 20:45:01 crc kubenswrapper[4813]: I0219 20:45:01.003671 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn"] Feb 19 20:45:01 crc kubenswrapper[4813]: W0219 20:45:01.006496 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd568c016_adf2_462d_a29e_333062b97008.slice/crio-519f3c8997af532148cf5c6f1ddf7942c2be820769ed5d1e00b0765d604b44d8 WatchSource:0}: Error finding container 519f3c8997af532148cf5c6f1ddf7942c2be820769ed5d1e00b0765d604b44d8: Status 
404 returned error can't find the container with id 519f3c8997af532148cf5c6f1ddf7942c2be820769ed5d1e00b0765d604b44d8 Feb 19 20:45:01 crc kubenswrapper[4813]: I0219 20:45:01.765226 4813 generic.go:334] "Generic (PLEG): container finished" podID="d568c016-adf2-462d-a29e-333062b97008" containerID="59df6e898302e668f3aa3821a7a6112e31c0fb2760c5f2c5df6329ba866b38f7" exitCode=0 Feb 19 20:45:01 crc kubenswrapper[4813]: I0219 20:45:01.765287 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" event={"ID":"d568c016-adf2-462d-a29e-333062b97008","Type":"ContainerDied","Data":"59df6e898302e668f3aa3821a7a6112e31c0fb2760c5f2c5df6329ba866b38f7"} Feb 19 20:45:01 crc kubenswrapper[4813]: I0219 20:45:01.765707 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" event={"ID":"d568c016-adf2-462d-a29e-333062b97008","Type":"ContainerStarted","Data":"519f3c8997af532148cf5c6f1ddf7942c2be820769ed5d1e00b0765d604b44d8"} Feb 19 20:45:03 crc kubenswrapper[4813]: I0219 20:45:03.462657 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" Feb 19 20:45:03 crc kubenswrapper[4813]: I0219 20:45:03.513904 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d568c016-adf2-462d-a29e-333062b97008-config-volume\") pod \"d568c016-adf2-462d-a29e-333062b97008\" (UID: \"d568c016-adf2-462d-a29e-333062b97008\") " Feb 19 20:45:03 crc kubenswrapper[4813]: I0219 20:45:03.514246 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d568c016-adf2-462d-a29e-333062b97008-secret-volume\") pod \"d568c016-adf2-462d-a29e-333062b97008\" (UID: \"d568c016-adf2-462d-a29e-333062b97008\") " Feb 19 20:45:03 crc kubenswrapper[4813]: I0219 20:45:03.514424 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9fpg\" (UniqueName: \"kubernetes.io/projected/d568c016-adf2-462d-a29e-333062b97008-kube-api-access-l9fpg\") pod \"d568c016-adf2-462d-a29e-333062b97008\" (UID: \"d568c016-adf2-462d-a29e-333062b97008\") " Feb 19 20:45:03 crc kubenswrapper[4813]: I0219 20:45:03.514799 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d568c016-adf2-462d-a29e-333062b97008-config-volume" (OuterVolumeSpecName: "config-volume") pod "d568c016-adf2-462d-a29e-333062b97008" (UID: "d568c016-adf2-462d-a29e-333062b97008"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:45:03 crc kubenswrapper[4813]: I0219 20:45:03.515533 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d568c016-adf2-462d-a29e-333062b97008-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:45:03 crc kubenswrapper[4813]: I0219 20:45:03.521079 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d568c016-adf2-462d-a29e-333062b97008-kube-api-access-l9fpg" (OuterVolumeSpecName: "kube-api-access-l9fpg") pod "d568c016-adf2-462d-a29e-333062b97008" (UID: "d568c016-adf2-462d-a29e-333062b97008"). InnerVolumeSpecName "kube-api-access-l9fpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:45:03 crc kubenswrapper[4813]: I0219 20:45:03.521777 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d568c016-adf2-462d-a29e-333062b97008-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d568c016-adf2-462d-a29e-333062b97008" (UID: "d568c016-adf2-462d-a29e-333062b97008"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:45:03 crc kubenswrapper[4813]: I0219 20:45:03.618055 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d568c016-adf2-462d-a29e-333062b97008-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:45:03 crc kubenswrapper[4813]: I0219 20:45:03.618108 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9fpg\" (UniqueName: \"kubernetes.io/projected/d568c016-adf2-462d-a29e-333062b97008-kube-api-access-l9fpg\") on node \"crc\" DevicePath \"\"" Feb 19 20:45:03 crc kubenswrapper[4813]: I0219 20:45:03.792308 4813 generic.go:334] "Generic (PLEG): container finished" podID="3a2cad88-cbd2-4761-a8f1-11b4643f5cf4" containerID="e1879768ce557abcd8eb055f9d0c7cac4e29598c170ebca63ee4f977bdcd43ff" exitCode=0 Feb 19 20:45:03 crc kubenswrapper[4813]: I0219 20:45:03.792372 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddtzv" event={"ID":"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4","Type":"ContainerDied","Data":"e1879768ce557abcd8eb055f9d0c7cac4e29598c170ebca63ee4f977bdcd43ff"} Feb 19 20:45:03 crc kubenswrapper[4813]: I0219 20:45:03.794410 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" event={"ID":"d568c016-adf2-462d-a29e-333062b97008","Type":"ContainerDied","Data":"519f3c8997af532148cf5c6f1ddf7942c2be820769ed5d1e00b0765d604b44d8"} Feb 19 20:45:03 crc kubenswrapper[4813]: I0219 20:45:03.794471 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="519f3c8997af532148cf5c6f1ddf7942c2be820769ed5d1e00b0765d604b44d8" Feb 19 20:45:03 crc kubenswrapper[4813]: I0219 20:45:03.794520 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-p8tfn" Feb 19 20:45:04 crc kubenswrapper[4813]: I0219 20:45:04.541969 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz"] Feb 19 20:45:04 crc kubenswrapper[4813]: I0219 20:45:04.552205 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525520-wb9nz"] Feb 19 20:45:04 crc kubenswrapper[4813]: I0219 20:45:04.805461 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddtzv" event={"ID":"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4","Type":"ContainerStarted","Data":"3e91935630bcade2e28ef7892767d19144d87886ad9a5b2fa9971e51f42116a6"} Feb 19 20:45:04 crc kubenswrapper[4813]: I0219 20:45:04.829087 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ddtzv" podStartSLOduration=2.355083835 podStartE2EDuration="6.82906479s" podCreationTimestamp="2026-02-19 20:44:58 +0000 UTC" firstStartedPulling="2026-02-19 20:44:59.732490703 +0000 UTC m=+8118.957931254" lastFinishedPulling="2026-02-19 20:45:04.206471668 +0000 UTC m=+8123.431912209" observedRunningTime="2026-02-19 20:45:04.820058538 +0000 UTC m=+8124.045499079" watchObservedRunningTime="2026-02-19 20:45:04.82906479 +0000 UTC m=+8124.054505351" Feb 19 20:45:05 crc kubenswrapper[4813]: I0219 20:45:05.493062 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f6c823-7d5e-4b6b-802f-30f2161fff59" path="/var/lib/kubelet/pods/49f6c823-7d5e-4b6b-802f-30f2161fff59/volumes" Feb 19 20:45:05 crc kubenswrapper[4813]: I0219 20:45:05.875605 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xjqb9" Feb 19 20:45:05 crc kubenswrapper[4813]: I0219 20:45:05.875943 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xjqb9" Feb 19 20:45:05 crc kubenswrapper[4813]: I0219 20:45:05.923837 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xjqb9" Feb 19 20:45:06 crc kubenswrapper[4813]: I0219 20:45:06.878684 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xjqb9" Feb 19 20:45:07 crc kubenswrapper[4813]: I0219 20:45:07.129107 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqb9"] Feb 19 20:45:08 crc kubenswrapper[4813]: I0219 20:45:08.478454 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ddtzv" Feb 19 20:45:08 crc kubenswrapper[4813]: I0219 20:45:08.478803 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ddtzv" Feb 19 20:45:08 crc kubenswrapper[4813]: I0219 20:45:08.535627 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ddtzv" Feb 19 20:45:08 crc kubenswrapper[4813]: I0219 20:45:08.846404 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xjqb9" podUID="ff019584-07d6-453e-b2e7-918a43b1b5d0" containerName="registry-server" containerID="cri-o://c5c08b86fd5000822da5237f3674a057ebebb74fb79f5c4853933a351ec64514" gracePeriod=2 Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.370294 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjqb9" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.440123 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff019584-07d6-453e-b2e7-918a43b1b5d0-utilities\") pod \"ff019584-07d6-453e-b2e7-918a43b1b5d0\" (UID: \"ff019584-07d6-453e-b2e7-918a43b1b5d0\") " Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.440219 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff019584-07d6-453e-b2e7-918a43b1b5d0-catalog-content\") pod \"ff019584-07d6-453e-b2e7-918a43b1b5d0\" (UID: \"ff019584-07d6-453e-b2e7-918a43b1b5d0\") " Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.440382 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mwgr\" (UniqueName: \"kubernetes.io/projected/ff019584-07d6-453e-b2e7-918a43b1b5d0-kube-api-access-7mwgr\") pod \"ff019584-07d6-453e-b2e7-918a43b1b5d0\" (UID: \"ff019584-07d6-453e-b2e7-918a43b1b5d0\") " Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.446587 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff019584-07d6-453e-b2e7-918a43b1b5d0-utilities" (OuterVolumeSpecName: "utilities") pod "ff019584-07d6-453e-b2e7-918a43b1b5d0" (UID: "ff019584-07d6-453e-b2e7-918a43b1b5d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.455118 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff019584-07d6-453e-b2e7-918a43b1b5d0-kube-api-access-7mwgr" (OuterVolumeSpecName: "kube-api-access-7mwgr") pod "ff019584-07d6-453e-b2e7-918a43b1b5d0" (UID: "ff019584-07d6-453e-b2e7-918a43b1b5d0"). InnerVolumeSpecName "kube-api-access-7mwgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.469567 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff019584-07d6-453e-b2e7-918a43b1b5d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff019584-07d6-453e-b2e7-918a43b1b5d0" (UID: "ff019584-07d6-453e-b2e7-918a43b1b5d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.544104 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff019584-07d6-453e-b2e7-918a43b1b5d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.544165 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff019584-07d6-453e-b2e7-918a43b1b5d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.544183 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mwgr\" (UniqueName: \"kubernetes.io/projected/ff019584-07d6-453e-b2e7-918a43b1b5d0-kube-api-access-7mwgr\") on node \"crc\" DevicePath \"\"" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.871136 4813 generic.go:334] "Generic (PLEG): container finished" podID="ff019584-07d6-453e-b2e7-918a43b1b5d0" containerID="c5c08b86fd5000822da5237f3674a057ebebb74fb79f5c4853933a351ec64514" exitCode=0 Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.871219 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xjqb9" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.871230 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqb9" event={"ID":"ff019584-07d6-453e-b2e7-918a43b1b5d0","Type":"ContainerDied","Data":"c5c08b86fd5000822da5237f3674a057ebebb74fb79f5c4853933a351ec64514"} Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.871800 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xjqb9" event={"ID":"ff019584-07d6-453e-b2e7-918a43b1b5d0","Type":"ContainerDied","Data":"adce4cc757aec48b04df601ec35d44145754033ca9f2b0a5e0aafe29ecb6cf9c"} Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.871840 4813 scope.go:117] "RemoveContainer" containerID="c5c08b86fd5000822da5237f3674a057ebebb74fb79f5c4853933a351ec64514" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.914054 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqb9"] Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.918514 4813 scope.go:117] "RemoveContainer" containerID="b0c52d68e78fee4041f2fb4000fd333bd26709f8f9e48b96e4c682a03eaac5cb" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.929380 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xjqb9"] Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.942160 4813 scope.go:117] "RemoveContainer" containerID="919e5d34b15659ca3338ca2c3d510eeafaf8c31c24365a69251f26ac55829ebf" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.993576 4813 scope.go:117] "RemoveContainer" containerID="c5c08b86fd5000822da5237f3674a057ebebb74fb79f5c4853933a351ec64514" Feb 19 20:45:09 crc kubenswrapper[4813]: E0219 20:45:09.994058 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c5c08b86fd5000822da5237f3674a057ebebb74fb79f5c4853933a351ec64514\": container with ID starting with c5c08b86fd5000822da5237f3674a057ebebb74fb79f5c4853933a351ec64514 not found: ID does not exist" containerID="c5c08b86fd5000822da5237f3674a057ebebb74fb79f5c4853933a351ec64514" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.994330 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5c08b86fd5000822da5237f3674a057ebebb74fb79f5c4853933a351ec64514"} err="failed to get container status \"c5c08b86fd5000822da5237f3674a057ebebb74fb79f5c4853933a351ec64514\": rpc error: code = NotFound desc = could not find container \"c5c08b86fd5000822da5237f3674a057ebebb74fb79f5c4853933a351ec64514\": container with ID starting with c5c08b86fd5000822da5237f3674a057ebebb74fb79f5c4853933a351ec64514 not found: ID does not exist" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.994408 4813 scope.go:117] "RemoveContainer" containerID="b0c52d68e78fee4041f2fb4000fd333bd26709f8f9e48b96e4c682a03eaac5cb" Feb 19 20:45:09 crc kubenswrapper[4813]: E0219 20:45:09.994769 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0c52d68e78fee4041f2fb4000fd333bd26709f8f9e48b96e4c682a03eaac5cb\": container with ID starting with b0c52d68e78fee4041f2fb4000fd333bd26709f8f9e48b96e4c682a03eaac5cb not found: ID does not exist" containerID="b0c52d68e78fee4041f2fb4000fd333bd26709f8f9e48b96e4c682a03eaac5cb" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.994845 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0c52d68e78fee4041f2fb4000fd333bd26709f8f9e48b96e4c682a03eaac5cb"} err="failed to get container status \"b0c52d68e78fee4041f2fb4000fd333bd26709f8f9e48b96e4c682a03eaac5cb\": rpc error: code = NotFound desc = could not find container \"b0c52d68e78fee4041f2fb4000fd333bd26709f8f9e48b96e4c682a03eaac5cb\": container with ID 
starting with b0c52d68e78fee4041f2fb4000fd333bd26709f8f9e48b96e4c682a03eaac5cb not found: ID does not exist" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.994904 4813 scope.go:117] "RemoveContainer" containerID="919e5d34b15659ca3338ca2c3d510eeafaf8c31c24365a69251f26ac55829ebf" Feb 19 20:45:09 crc kubenswrapper[4813]: E0219 20:45:09.995237 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"919e5d34b15659ca3338ca2c3d510eeafaf8c31c24365a69251f26ac55829ebf\": container with ID starting with 919e5d34b15659ca3338ca2c3d510eeafaf8c31c24365a69251f26ac55829ebf not found: ID does not exist" containerID="919e5d34b15659ca3338ca2c3d510eeafaf8c31c24365a69251f26ac55829ebf" Feb 19 20:45:09 crc kubenswrapper[4813]: I0219 20:45:09.995259 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"919e5d34b15659ca3338ca2c3d510eeafaf8c31c24365a69251f26ac55829ebf"} err="failed to get container status \"919e5d34b15659ca3338ca2c3d510eeafaf8c31c24365a69251f26ac55829ebf\": rpc error: code = NotFound desc = could not find container \"919e5d34b15659ca3338ca2c3d510eeafaf8c31c24365a69251f26ac55829ebf\": container with ID starting with 919e5d34b15659ca3338ca2c3d510eeafaf8c31c24365a69251f26ac55829ebf not found: ID does not exist" Feb 19 20:45:11 crc kubenswrapper[4813]: I0219 20:45:11.483710 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff019584-07d6-453e-b2e7-918a43b1b5d0" path="/var/lib/kubelet/pods/ff019584-07d6-453e-b2e7-918a43b1b5d0/volumes" Feb 19 20:45:18 crc kubenswrapper[4813]: I0219 20:45:18.530618 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ddtzv" Feb 19 20:45:18 crc kubenswrapper[4813]: I0219 20:45:18.595387 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ddtzv"] Feb 19 20:45:18 crc 
kubenswrapper[4813]: I0219 20:45:18.952997 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ddtzv" podUID="3a2cad88-cbd2-4761-a8f1-11b4643f5cf4" containerName="registry-server" containerID="cri-o://3e91935630bcade2e28ef7892767d19144d87886ad9a5b2fa9971e51f42116a6" gracePeriod=2 Feb 19 20:45:19 crc kubenswrapper[4813]: I0219 20:45:19.567422 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ddtzv" Feb 19 20:45:19 crc kubenswrapper[4813]: I0219 20:45:19.663758 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzplp\" (UniqueName: \"kubernetes.io/projected/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-kube-api-access-qzplp\") pod \"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4\" (UID: \"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4\") " Feb 19 20:45:19 crc kubenswrapper[4813]: I0219 20:45:19.663869 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-catalog-content\") pod \"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4\" (UID: \"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4\") " Feb 19 20:45:19 crc kubenswrapper[4813]: I0219 20:45:19.663998 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-utilities\") pod \"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4\" (UID: \"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4\") " Feb 19 20:45:19 crc kubenswrapper[4813]: I0219 20:45:19.665336 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-utilities" (OuterVolumeSpecName: "utilities") pod "3a2cad88-cbd2-4761-a8f1-11b4643f5cf4" (UID: "3a2cad88-cbd2-4761-a8f1-11b4643f5cf4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:45:19 crc kubenswrapper[4813]: I0219 20:45:19.675042 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-kube-api-access-qzplp" (OuterVolumeSpecName: "kube-api-access-qzplp") pod "3a2cad88-cbd2-4761-a8f1-11b4643f5cf4" (UID: "3a2cad88-cbd2-4761-a8f1-11b4643f5cf4"). InnerVolumeSpecName "kube-api-access-qzplp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:45:19 crc kubenswrapper[4813]: I0219 20:45:19.737807 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a2cad88-cbd2-4761-a8f1-11b4643f5cf4" (UID: "3a2cad88-cbd2-4761-a8f1-11b4643f5cf4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:45:19 crc kubenswrapper[4813]: I0219 20:45:19.766786 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzplp\" (UniqueName: \"kubernetes.io/projected/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-kube-api-access-qzplp\") on node \"crc\" DevicePath \"\"" Feb 19 20:45:19 crc kubenswrapper[4813]: I0219 20:45:19.766824 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:45:19 crc kubenswrapper[4813]: I0219 20:45:19.766833 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:45:19 crc kubenswrapper[4813]: I0219 20:45:19.980144 4813 generic.go:334] "Generic (PLEG): container finished" podID="3a2cad88-cbd2-4761-a8f1-11b4643f5cf4" 
containerID="3e91935630bcade2e28ef7892767d19144d87886ad9a5b2fa9971e51f42116a6" exitCode=0 Feb 19 20:45:19 crc kubenswrapper[4813]: I0219 20:45:19.980202 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ddtzv" Feb 19 20:45:19 crc kubenswrapper[4813]: I0219 20:45:19.980222 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddtzv" event={"ID":"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4","Type":"ContainerDied","Data":"3e91935630bcade2e28ef7892767d19144d87886ad9a5b2fa9971e51f42116a6"} Feb 19 20:45:19 crc kubenswrapper[4813]: I0219 20:45:19.980259 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddtzv" event={"ID":"3a2cad88-cbd2-4761-a8f1-11b4643f5cf4","Type":"ContainerDied","Data":"a211f51f8422e9cd725bc62179bb435251118f355f547c426eafb92cd9513274"} Feb 19 20:45:19 crc kubenswrapper[4813]: I0219 20:45:19.980282 4813 scope.go:117] "RemoveContainer" containerID="3e91935630bcade2e28ef7892767d19144d87886ad9a5b2fa9971e51f42116a6" Feb 19 20:45:20 crc kubenswrapper[4813]: I0219 20:45:20.005824 4813 scope.go:117] "RemoveContainer" containerID="e1879768ce557abcd8eb055f9d0c7cac4e29598c170ebca63ee4f977bdcd43ff" Feb 19 20:45:20 crc kubenswrapper[4813]: I0219 20:45:20.016612 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ddtzv"] Feb 19 20:45:20 crc kubenswrapper[4813]: I0219 20:45:20.026085 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ddtzv"] Feb 19 20:45:20 crc kubenswrapper[4813]: I0219 20:45:20.048310 4813 scope.go:117] "RemoveContainer" containerID="f2bbaa88980d42c915b4e8b3b485953baaf1c9b1d336f8ccf774afc22604b126" Feb 19 20:45:20 crc kubenswrapper[4813]: I0219 20:45:20.082660 4813 scope.go:117] "RemoveContainer" containerID="3e91935630bcade2e28ef7892767d19144d87886ad9a5b2fa9971e51f42116a6" Feb 19 
20:45:20 crc kubenswrapper[4813]: E0219 20:45:20.083300 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e91935630bcade2e28ef7892767d19144d87886ad9a5b2fa9971e51f42116a6\": container with ID starting with 3e91935630bcade2e28ef7892767d19144d87886ad9a5b2fa9971e51f42116a6 not found: ID does not exist" containerID="3e91935630bcade2e28ef7892767d19144d87886ad9a5b2fa9971e51f42116a6" Feb 19 20:45:20 crc kubenswrapper[4813]: I0219 20:45:20.083437 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e91935630bcade2e28ef7892767d19144d87886ad9a5b2fa9971e51f42116a6"} err="failed to get container status \"3e91935630bcade2e28ef7892767d19144d87886ad9a5b2fa9971e51f42116a6\": rpc error: code = NotFound desc = could not find container \"3e91935630bcade2e28ef7892767d19144d87886ad9a5b2fa9971e51f42116a6\": container with ID starting with 3e91935630bcade2e28ef7892767d19144d87886ad9a5b2fa9971e51f42116a6 not found: ID does not exist" Feb 19 20:45:20 crc kubenswrapper[4813]: I0219 20:45:20.083546 4813 scope.go:117] "RemoveContainer" containerID="e1879768ce557abcd8eb055f9d0c7cac4e29598c170ebca63ee4f977bdcd43ff" Feb 19 20:45:20 crc kubenswrapper[4813]: E0219 20:45:20.084042 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1879768ce557abcd8eb055f9d0c7cac4e29598c170ebca63ee4f977bdcd43ff\": container with ID starting with e1879768ce557abcd8eb055f9d0c7cac4e29598c170ebca63ee4f977bdcd43ff not found: ID does not exist" containerID="e1879768ce557abcd8eb055f9d0c7cac4e29598c170ebca63ee4f977bdcd43ff" Feb 19 20:45:20 crc kubenswrapper[4813]: I0219 20:45:20.084087 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1879768ce557abcd8eb055f9d0c7cac4e29598c170ebca63ee4f977bdcd43ff"} err="failed to get container status 
\"e1879768ce557abcd8eb055f9d0c7cac4e29598c170ebca63ee4f977bdcd43ff\": rpc error: code = NotFound desc = could not find container \"e1879768ce557abcd8eb055f9d0c7cac4e29598c170ebca63ee4f977bdcd43ff\": container with ID starting with e1879768ce557abcd8eb055f9d0c7cac4e29598c170ebca63ee4f977bdcd43ff not found: ID does not exist" Feb 19 20:45:20 crc kubenswrapper[4813]: I0219 20:45:20.084117 4813 scope.go:117] "RemoveContainer" containerID="f2bbaa88980d42c915b4e8b3b485953baaf1c9b1d336f8ccf774afc22604b126" Feb 19 20:45:20 crc kubenswrapper[4813]: E0219 20:45:20.084457 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2bbaa88980d42c915b4e8b3b485953baaf1c9b1d336f8ccf774afc22604b126\": container with ID starting with f2bbaa88980d42c915b4e8b3b485953baaf1c9b1d336f8ccf774afc22604b126 not found: ID does not exist" containerID="f2bbaa88980d42c915b4e8b3b485953baaf1c9b1d336f8ccf774afc22604b126" Feb 19 20:45:20 crc kubenswrapper[4813]: I0219 20:45:20.084557 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2bbaa88980d42c915b4e8b3b485953baaf1c9b1d336f8ccf774afc22604b126"} err="failed to get container status \"f2bbaa88980d42c915b4e8b3b485953baaf1c9b1d336f8ccf774afc22604b126\": rpc error: code = NotFound desc = could not find container \"f2bbaa88980d42c915b4e8b3b485953baaf1c9b1d336f8ccf774afc22604b126\": container with ID starting with f2bbaa88980d42c915b4e8b3b485953baaf1c9b1d336f8ccf774afc22604b126 not found: ID does not exist" Feb 19 20:45:21 crc kubenswrapper[4813]: I0219 20:45:21.487878 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a2cad88-cbd2-4761-a8f1-11b4643f5cf4" path="/var/lib/kubelet/pods/3a2cad88-cbd2-4761-a8f1-11b4643f5cf4/volumes" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.676920 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wzsp4"] Feb 19 20:45:50 crc 
kubenswrapper[4813]: E0219 20:45:50.678209 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d568c016-adf2-462d-a29e-333062b97008" containerName="collect-profiles" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.678230 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d568c016-adf2-462d-a29e-333062b97008" containerName="collect-profiles" Feb 19 20:45:50 crc kubenswrapper[4813]: E0219 20:45:50.678248 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a2cad88-cbd2-4761-a8f1-11b4643f5cf4" containerName="extract-content" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.678255 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a2cad88-cbd2-4761-a8f1-11b4643f5cf4" containerName="extract-content" Feb 19 20:45:50 crc kubenswrapper[4813]: E0219 20:45:50.678280 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff019584-07d6-453e-b2e7-918a43b1b5d0" containerName="extract-content" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.678288 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff019584-07d6-453e-b2e7-918a43b1b5d0" containerName="extract-content" Feb 19 20:45:50 crc kubenswrapper[4813]: E0219 20:45:50.678307 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff019584-07d6-453e-b2e7-918a43b1b5d0" containerName="registry-server" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.678314 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff019584-07d6-453e-b2e7-918a43b1b5d0" containerName="registry-server" Feb 19 20:45:50 crc kubenswrapper[4813]: E0219 20:45:50.678332 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a2cad88-cbd2-4761-a8f1-11b4643f5cf4" containerName="registry-server" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.678338 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a2cad88-cbd2-4761-a8f1-11b4643f5cf4" containerName="registry-server" Feb 19 20:45:50 crc 
kubenswrapper[4813]: E0219 20:45:50.678362 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a2cad88-cbd2-4761-a8f1-11b4643f5cf4" containerName="extract-utilities" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.678370 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a2cad88-cbd2-4761-a8f1-11b4643f5cf4" containerName="extract-utilities" Feb 19 20:45:50 crc kubenswrapper[4813]: E0219 20:45:50.678386 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff019584-07d6-453e-b2e7-918a43b1b5d0" containerName="extract-utilities" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.678393 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff019584-07d6-453e-b2e7-918a43b1b5d0" containerName="extract-utilities" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.678615 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff019584-07d6-453e-b2e7-918a43b1b5d0" containerName="registry-server" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.678648 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d568c016-adf2-462d-a29e-333062b97008" containerName="collect-profiles" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.678672 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a2cad88-cbd2-4761-a8f1-11b4643f5cf4" containerName="registry-server" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.681004 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.691495 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wzsp4"] Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.731267 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b486ea-d91d-43cf-9256-ff29c0bab98e-utilities\") pod \"redhat-operators-wzsp4\" (UID: \"16b486ea-d91d-43cf-9256-ff29c0bab98e\") " pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.731613 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b486ea-d91d-43cf-9256-ff29c0bab98e-catalog-content\") pod \"redhat-operators-wzsp4\" (UID: \"16b486ea-d91d-43cf-9256-ff29c0bab98e\") " pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.731790 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc4vm\" (UniqueName: \"kubernetes.io/projected/16b486ea-d91d-43cf-9256-ff29c0bab98e-kube-api-access-mc4vm\") pod \"redhat-operators-wzsp4\" (UID: \"16b486ea-d91d-43cf-9256-ff29c0bab98e\") " pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.834301 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b486ea-d91d-43cf-9256-ff29c0bab98e-utilities\") pod \"redhat-operators-wzsp4\" (UID: \"16b486ea-d91d-43cf-9256-ff29c0bab98e\") " pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.834429 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b486ea-d91d-43cf-9256-ff29c0bab98e-catalog-content\") pod \"redhat-operators-wzsp4\" (UID: \"16b486ea-d91d-43cf-9256-ff29c0bab98e\") " pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.834522 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc4vm\" (UniqueName: \"kubernetes.io/projected/16b486ea-d91d-43cf-9256-ff29c0bab98e-kube-api-access-mc4vm\") pod \"redhat-operators-wzsp4\" (UID: \"16b486ea-d91d-43cf-9256-ff29c0bab98e\") " pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.834921 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b486ea-d91d-43cf-9256-ff29c0bab98e-utilities\") pod \"redhat-operators-wzsp4\" (UID: \"16b486ea-d91d-43cf-9256-ff29c0bab98e\") " pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.834945 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b486ea-d91d-43cf-9256-ff29c0bab98e-catalog-content\") pod \"redhat-operators-wzsp4\" (UID: \"16b486ea-d91d-43cf-9256-ff29c0bab98e\") " pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:45:50 crc kubenswrapper[4813]: I0219 20:45:50.856132 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc4vm\" (UniqueName: \"kubernetes.io/projected/16b486ea-d91d-43cf-9256-ff29c0bab98e-kube-api-access-mc4vm\") pod \"redhat-operators-wzsp4\" (UID: \"16b486ea-d91d-43cf-9256-ff29c0bab98e\") " pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:45:51 crc kubenswrapper[4813]: I0219 20:45:51.017096 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:45:51 crc kubenswrapper[4813]: I0219 20:45:51.523147 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wzsp4"] Feb 19 20:45:52 crc kubenswrapper[4813]: I0219 20:45:52.352091 4813 generic.go:334] "Generic (PLEG): container finished" podID="16b486ea-d91d-43cf-9256-ff29c0bab98e" containerID="032a8fcb4234c0e432e95c38d30fabe126fd96a00d2ef0548e81a8805665cf08" exitCode=0 Feb 19 20:45:52 crc kubenswrapper[4813]: I0219 20:45:52.352182 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsp4" event={"ID":"16b486ea-d91d-43cf-9256-ff29c0bab98e","Type":"ContainerDied","Data":"032a8fcb4234c0e432e95c38d30fabe126fd96a00d2ef0548e81a8805665cf08"} Feb 19 20:45:52 crc kubenswrapper[4813]: I0219 20:45:52.352410 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsp4" event={"ID":"16b486ea-d91d-43cf-9256-ff29c0bab98e","Type":"ContainerStarted","Data":"572d7329b835614bb2c8091929a4932beec98c316f9cf8732d49482e16aba021"} Feb 19 20:45:54 crc kubenswrapper[4813]: I0219 20:45:54.378989 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsp4" event={"ID":"16b486ea-d91d-43cf-9256-ff29c0bab98e","Type":"ContainerStarted","Data":"5df74029f157b0693d75591a78c90064ced4e03f25689cf16dbdaea0e0af3ba0"} Feb 19 20:45:56 crc kubenswrapper[4813]: I0219 20:45:56.405825 4813 generic.go:334] "Generic (PLEG): container finished" podID="16b486ea-d91d-43cf-9256-ff29c0bab98e" containerID="5df74029f157b0693d75591a78c90064ced4e03f25689cf16dbdaea0e0af3ba0" exitCode=0 Feb 19 20:45:56 crc kubenswrapper[4813]: I0219 20:45:56.405908 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsp4" 
event={"ID":"16b486ea-d91d-43cf-9256-ff29c0bab98e","Type":"ContainerDied","Data":"5df74029f157b0693d75591a78c90064ced4e03f25689cf16dbdaea0e0af3ba0"} Feb 19 20:45:56 crc kubenswrapper[4813]: I0219 20:45:56.408857 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:45:59 crc kubenswrapper[4813]: I0219 20:45:59.454207 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsp4" event={"ID":"16b486ea-d91d-43cf-9256-ff29c0bab98e","Type":"ContainerStarted","Data":"977def49ce0131b83033c436861a91df2c4ff9efa09d7d211f5ce0de4c6cda8d"} Feb 19 20:45:59 crc kubenswrapper[4813]: I0219 20:45:59.490055 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wzsp4" podStartSLOduration=3.4835159620000002 podStartE2EDuration="9.490016136s" podCreationTimestamp="2026-02-19 20:45:50 +0000 UTC" firstStartedPulling="2026-02-19 20:45:52.354765529 +0000 UTC m=+8171.580206080" lastFinishedPulling="2026-02-19 20:45:58.361265713 +0000 UTC m=+8177.586706254" observedRunningTime="2026-02-19 20:45:59.472203297 +0000 UTC m=+8178.697643838" watchObservedRunningTime="2026-02-19 20:45:59.490016136 +0000 UTC m=+8178.715456667" Feb 19 20:45:59 crc kubenswrapper[4813]: I0219 20:45:59.510335 4813 scope.go:117] "RemoveContainer" containerID="5fed16dab7ca501b54e79f8999ed126a67465d45677b3b7bf0d2900b7e574a0a" Feb 19 20:46:01 crc kubenswrapper[4813]: I0219 20:46:01.018279 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:46:01 crc kubenswrapper[4813]: I0219 20:46:01.018657 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:46:02 crc kubenswrapper[4813]: I0219 20:46:02.077726 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wzsp4" 
podUID="16b486ea-d91d-43cf-9256-ff29c0bab98e" containerName="registry-server" probeResult="failure" output=< Feb 19 20:46:02 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Feb 19 20:46:02 crc kubenswrapper[4813]: > Feb 19 20:46:11 crc kubenswrapper[4813]: I0219 20:46:11.087619 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:46:11 crc kubenswrapper[4813]: I0219 20:46:11.176890 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:46:11 crc kubenswrapper[4813]: I0219 20:46:11.334799 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wzsp4"] Feb 19 20:46:12 crc kubenswrapper[4813]: I0219 20:46:12.595361 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wzsp4" podUID="16b486ea-d91d-43cf-9256-ff29c0bab98e" containerName="registry-server" containerID="cri-o://977def49ce0131b83033c436861a91df2c4ff9efa09d7d211f5ce0de4c6cda8d" gracePeriod=2 Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.198084 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.367552 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b486ea-d91d-43cf-9256-ff29c0bab98e-utilities\") pod \"16b486ea-d91d-43cf-9256-ff29c0bab98e\" (UID: \"16b486ea-d91d-43cf-9256-ff29c0bab98e\") " Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.367814 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b486ea-d91d-43cf-9256-ff29c0bab98e-catalog-content\") pod \"16b486ea-d91d-43cf-9256-ff29c0bab98e\" (UID: \"16b486ea-d91d-43cf-9256-ff29c0bab98e\") " Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.367943 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc4vm\" (UniqueName: \"kubernetes.io/projected/16b486ea-d91d-43cf-9256-ff29c0bab98e-kube-api-access-mc4vm\") pod \"16b486ea-d91d-43cf-9256-ff29c0bab98e\" (UID: \"16b486ea-d91d-43cf-9256-ff29c0bab98e\") " Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.369175 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b486ea-d91d-43cf-9256-ff29c0bab98e-utilities" (OuterVolumeSpecName: "utilities") pod "16b486ea-d91d-43cf-9256-ff29c0bab98e" (UID: "16b486ea-d91d-43cf-9256-ff29c0bab98e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.373590 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16b486ea-d91d-43cf-9256-ff29c0bab98e-kube-api-access-mc4vm" (OuterVolumeSpecName: "kube-api-access-mc4vm") pod "16b486ea-d91d-43cf-9256-ff29c0bab98e" (UID: "16b486ea-d91d-43cf-9256-ff29c0bab98e"). InnerVolumeSpecName "kube-api-access-mc4vm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.471739 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc4vm\" (UniqueName: \"kubernetes.io/projected/16b486ea-d91d-43cf-9256-ff29c0bab98e-kube-api-access-mc4vm\") on node \"crc\" DevicePath \"\"" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.472460 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b486ea-d91d-43cf-9256-ff29c0bab98e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.505088 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b486ea-d91d-43cf-9256-ff29c0bab98e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16b486ea-d91d-43cf-9256-ff29c0bab98e" (UID: "16b486ea-d91d-43cf-9256-ff29c0bab98e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.577861 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b486ea-d91d-43cf-9256-ff29c0bab98e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.607406 4813 generic.go:334] "Generic (PLEG): container finished" podID="16b486ea-d91d-43cf-9256-ff29c0bab98e" containerID="977def49ce0131b83033c436861a91df2c4ff9efa09d7d211f5ce0de4c6cda8d" exitCode=0 Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.607471 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzsp4" event={"ID":"16b486ea-d91d-43cf-9256-ff29c0bab98e","Type":"ContainerDied","Data":"977def49ce0131b83033c436861a91df2c4ff9efa09d7d211f5ce0de4c6cda8d"} Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.607520 4813 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-wzsp4" event={"ID":"16b486ea-d91d-43cf-9256-ff29c0bab98e","Type":"ContainerDied","Data":"572d7329b835614bb2c8091929a4932beec98c316f9cf8732d49482e16aba021"} Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.607548 4813 scope.go:117] "RemoveContainer" containerID="977def49ce0131b83033c436861a91df2c4ff9efa09d7d211f5ce0de4c6cda8d" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.608146 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wzsp4" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.637812 4813 scope.go:117] "RemoveContainer" containerID="5df74029f157b0693d75591a78c90064ced4e03f25689cf16dbdaea0e0af3ba0" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.645032 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wzsp4"] Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.657884 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wzsp4"] Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.668833 4813 scope.go:117] "RemoveContainer" containerID="032a8fcb4234c0e432e95c38d30fabe126fd96a00d2ef0548e81a8805665cf08" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.714319 4813 scope.go:117] "RemoveContainer" containerID="977def49ce0131b83033c436861a91df2c4ff9efa09d7d211f5ce0de4c6cda8d" Feb 19 20:46:13 crc kubenswrapper[4813]: E0219 20:46:13.715832 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"977def49ce0131b83033c436861a91df2c4ff9efa09d7d211f5ce0de4c6cda8d\": container with ID starting with 977def49ce0131b83033c436861a91df2c4ff9efa09d7d211f5ce0de4c6cda8d not found: ID does not exist" containerID="977def49ce0131b83033c436861a91df2c4ff9efa09d7d211f5ce0de4c6cda8d" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.715864 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"977def49ce0131b83033c436861a91df2c4ff9efa09d7d211f5ce0de4c6cda8d"} err="failed to get container status \"977def49ce0131b83033c436861a91df2c4ff9efa09d7d211f5ce0de4c6cda8d\": rpc error: code = NotFound desc = could not find container \"977def49ce0131b83033c436861a91df2c4ff9efa09d7d211f5ce0de4c6cda8d\": container with ID starting with 977def49ce0131b83033c436861a91df2c4ff9efa09d7d211f5ce0de4c6cda8d not found: ID does not exist" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.715890 4813 scope.go:117] "RemoveContainer" containerID="5df74029f157b0693d75591a78c90064ced4e03f25689cf16dbdaea0e0af3ba0" Feb 19 20:46:13 crc kubenswrapper[4813]: E0219 20:46:13.716282 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5df74029f157b0693d75591a78c90064ced4e03f25689cf16dbdaea0e0af3ba0\": container with ID starting with 5df74029f157b0693d75591a78c90064ced4e03f25689cf16dbdaea0e0af3ba0 not found: ID does not exist" containerID="5df74029f157b0693d75591a78c90064ced4e03f25689cf16dbdaea0e0af3ba0" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.716306 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df74029f157b0693d75591a78c90064ced4e03f25689cf16dbdaea0e0af3ba0"} err="failed to get container status \"5df74029f157b0693d75591a78c90064ced4e03f25689cf16dbdaea0e0af3ba0\": rpc error: code = NotFound desc = could not find container \"5df74029f157b0693d75591a78c90064ced4e03f25689cf16dbdaea0e0af3ba0\": container with ID starting with 5df74029f157b0693d75591a78c90064ced4e03f25689cf16dbdaea0e0af3ba0 not found: ID does not exist" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.716325 4813 scope.go:117] "RemoveContainer" containerID="032a8fcb4234c0e432e95c38d30fabe126fd96a00d2ef0548e81a8805665cf08" Feb 19 20:46:13 crc kubenswrapper[4813]: E0219 
20:46:13.716635 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"032a8fcb4234c0e432e95c38d30fabe126fd96a00d2ef0548e81a8805665cf08\": container with ID starting with 032a8fcb4234c0e432e95c38d30fabe126fd96a00d2ef0548e81a8805665cf08 not found: ID does not exist" containerID="032a8fcb4234c0e432e95c38d30fabe126fd96a00d2ef0548e81a8805665cf08" Feb 19 20:46:13 crc kubenswrapper[4813]: I0219 20:46:13.716667 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"032a8fcb4234c0e432e95c38d30fabe126fd96a00d2ef0548e81a8805665cf08"} err="failed to get container status \"032a8fcb4234c0e432e95c38d30fabe126fd96a00d2ef0548e81a8805665cf08\": rpc error: code = NotFound desc = could not find container \"032a8fcb4234c0e432e95c38d30fabe126fd96a00d2ef0548e81a8805665cf08\": container with ID starting with 032a8fcb4234c0e432e95c38d30fabe126fd96a00d2ef0548e81a8805665cf08 not found: ID does not exist" Feb 19 20:46:15 crc kubenswrapper[4813]: I0219 20:46:15.488496 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16b486ea-d91d-43cf-9256-ff29c0bab98e" path="/var/lib/kubelet/pods/16b486ea-d91d-43cf-9256-ff29c0bab98e/volumes" Feb 19 20:46:51 crc kubenswrapper[4813]: I0219 20:46:51.028552 4813 generic.go:334] "Generic (PLEG): container finished" podID="f75d7c97-6dce-40ee-a448-abb566750887" containerID="a3a85a9f2fc5b7560bcacdaebe50740e33ec42beff799f0e2952e3866711b6d7" exitCode=0 Feb 19 20:46:51 crc kubenswrapper[4813]: I0219 20:46:51.028657 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-9snn7" event={"ID":"f75d7c97-6dce-40ee-a448-abb566750887","Type":"ContainerDied","Data":"a3a85a9f2fc5b7560bcacdaebe50740e33ec42beff799f0e2952e3866711b6d7"} Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.535490 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-9snn7" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.670998 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-0\") pod \"f75d7c97-6dce-40ee-a448-abb566750887\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.671378 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-2\") pod \"f75d7c97-6dce-40ee-a448-abb566750887\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.671442 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf2ps\" (UniqueName: \"kubernetes.io/projected/f75d7c97-6dce-40ee-a448-abb566750887-kube-api-access-cf2ps\") pod \"f75d7c97-6dce-40ee-a448-abb566750887\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.671463 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ssh-key-openstack-cell1\") pod \"f75d7c97-6dce-40ee-a448-abb566750887\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.671511 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceph\") pod \"f75d7c97-6dce-40ee-a448-abb566750887\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.671952 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-1\") pod \"f75d7c97-6dce-40ee-a448-abb566750887\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.672032 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-telemetry-combined-ca-bundle\") pod \"f75d7c97-6dce-40ee-a448-abb566750887\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.672087 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-inventory\") pod \"f75d7c97-6dce-40ee-a448-abb566750887\" (UID: \"f75d7c97-6dce-40ee-a448-abb566750887\") " Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.677533 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f75d7c97-6dce-40ee-a448-abb566750887-kube-api-access-cf2ps" (OuterVolumeSpecName: "kube-api-access-cf2ps") pod "f75d7c97-6dce-40ee-a448-abb566750887" (UID: "f75d7c97-6dce-40ee-a448-abb566750887"). InnerVolumeSpecName "kube-api-access-cf2ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.678709 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f75d7c97-6dce-40ee-a448-abb566750887" (UID: "f75d7c97-6dce-40ee-a448-abb566750887"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.681135 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceph" (OuterVolumeSpecName: "ceph") pod "f75d7c97-6dce-40ee-a448-abb566750887" (UID: "f75d7c97-6dce-40ee-a448-abb566750887"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.702188 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "f75d7c97-6dce-40ee-a448-abb566750887" (UID: "f75d7c97-6dce-40ee-a448-abb566750887"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.712184 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-inventory" (OuterVolumeSpecName: "inventory") pod "f75d7c97-6dce-40ee-a448-abb566750887" (UID: "f75d7c97-6dce-40ee-a448-abb566750887"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.714872 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "f75d7c97-6dce-40ee-a448-abb566750887" (UID: "f75d7c97-6dce-40ee-a448-abb566750887"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.723795 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "f75d7c97-6dce-40ee-a448-abb566750887" (UID: "f75d7c97-6dce-40ee-a448-abb566750887"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.734248 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "f75d7c97-6dce-40ee-a448-abb566750887" (UID: "f75d7c97-6dce-40ee-a448-abb566750887"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.774770 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.774797 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.774810 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf2ps\" (UniqueName: \"kubernetes.io/projected/f75d7c97-6dce-40ee-a448-abb566750887-kube-api-access-cf2ps\") on node \"crc\" DevicePath \"\"" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.774819 4813 reconciler_common.go:293] "Volume 
detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.774828 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.774843 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.774852 4813 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:46:52 crc kubenswrapper[4813]: I0219 20:46:52.774861 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f75d7c97-6dce-40ee-a448-abb566750887-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.049622 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-9snn7" event={"ID":"f75d7c97-6dce-40ee-a448-abb566750887","Type":"ContainerDied","Data":"2ed087d26ca2239f609e5868153f727b3f314743368d629a4cd4212b410fe6a0"} Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.049658 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ed087d26ca2239f609e5868153f727b3f314743368d629a4cd4212b410fe6a0" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.049729 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-9snn7" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.184557 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-2hs7t"] Feb 19 20:46:53 crc kubenswrapper[4813]: E0219 20:46:53.187600 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b486ea-d91d-43cf-9256-ff29c0bab98e" containerName="extract-content" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.187635 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b486ea-d91d-43cf-9256-ff29c0bab98e" containerName="extract-content" Feb 19 20:46:53 crc kubenswrapper[4813]: E0219 20:46:53.187665 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b486ea-d91d-43cf-9256-ff29c0bab98e" containerName="registry-server" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.187673 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b486ea-d91d-43cf-9256-ff29c0bab98e" containerName="registry-server" Feb 19 20:46:53 crc kubenswrapper[4813]: E0219 20:46:53.187706 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b486ea-d91d-43cf-9256-ff29c0bab98e" containerName="extract-utilities" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.187715 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b486ea-d91d-43cf-9256-ff29c0bab98e" containerName="extract-utilities" Feb 19 20:46:53 crc kubenswrapper[4813]: E0219 20:46:53.187730 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f75d7c97-6dce-40ee-a448-abb566750887" containerName="telemetry-openstack-openstack-cell1" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.187737 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f75d7c97-6dce-40ee-a448-abb566750887" containerName="telemetry-openstack-openstack-cell1" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.188121 4813 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="16b486ea-d91d-43cf-9256-ff29c0bab98e" containerName="registry-server" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.188185 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f75d7c97-6dce-40ee-a448-abb566750887" containerName="telemetry-openstack-openstack-cell1" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.189342 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.192375 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.192682 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.192802 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.193612 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.195279 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.197292 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-2hs7t"] Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.285439 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhsfs\" (UniqueName: \"kubernetes.io/projected/95dbe759-32fa-4d27-977a-399f24f2b75e-kube-api-access-fhsfs\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " 
pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.285514 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.285573 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.285710 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.285729 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.285746 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.387691 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.387730 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.387757 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.387817 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhsfs\" (UniqueName: \"kubernetes.io/projected/95dbe759-32fa-4d27-977a-399f24f2b75e-kube-api-access-fhsfs\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " 
pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.387859 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.387912 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.393067 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.393111 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.393323 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.393714 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.394482 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.407767 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhsfs\" (UniqueName: \"kubernetes.io/projected/95dbe759-32fa-4d27-977a-399f24f2b75e-kube-api-access-fhsfs\") pod \"neutron-sriov-openstack-openstack-cell1-2hs7t\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:53 crc kubenswrapper[4813]: I0219 20:46:53.526411 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:46:54 crc kubenswrapper[4813]: I0219 20:46:54.054317 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-2hs7t"] Feb 19 20:46:54 crc kubenswrapper[4813]: I0219 20:46:54.062619 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" event={"ID":"95dbe759-32fa-4d27-977a-399f24f2b75e","Type":"ContainerStarted","Data":"8bf353baaf171eb10c21e86192ab29780d24a4be065d4dec292498067198c3da"} Feb 19 20:46:55 crc kubenswrapper[4813]: I0219 20:46:55.075749 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" event={"ID":"95dbe759-32fa-4d27-977a-399f24f2b75e","Type":"ContainerStarted","Data":"a049090551fee59e2c31ba5c9b591bac1416400a5428167549d71b93969a2a3e"} Feb 19 20:46:55 crc kubenswrapper[4813]: I0219 20:46:55.126728 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" podStartSLOduration=1.6850739780000001 podStartE2EDuration="2.126707194s" podCreationTimestamp="2026-02-19 20:46:53 +0000 UTC" firstStartedPulling="2026-02-19 20:46:54.049450447 +0000 UTC m=+8233.274890988" lastFinishedPulling="2026-02-19 20:46:54.491083633 +0000 UTC m=+8233.716524204" observedRunningTime="2026-02-19 20:46:55.112328784 +0000 UTC m=+8234.337769375" watchObservedRunningTime="2026-02-19 20:46:55.126707194 +0000 UTC m=+8234.352147755" Feb 19 20:47:00 crc kubenswrapper[4813]: I0219 20:47:00.329888 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:47:00 crc kubenswrapper[4813]: I0219 20:47:00.330597 
4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:47:30 crc kubenswrapper[4813]: I0219 20:47:30.330255 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:47:30 crc kubenswrapper[4813]: I0219 20:47:30.330896 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:48:00 crc kubenswrapper[4813]: I0219 20:48:00.330435 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:48:00 crc kubenswrapper[4813]: I0219 20:48:00.331209 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:48:00 crc kubenswrapper[4813]: I0219 20:48:00.331295 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 20:48:00 crc kubenswrapper[4813]: I0219 20:48:00.332518 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9a24a471de296b18d9e542deb93ef67e933265bda54aedaf108fe07429fd6a27"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:48:00 crc kubenswrapper[4813]: I0219 20:48:00.332626 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://9a24a471de296b18d9e542deb93ef67e933265bda54aedaf108fe07429fd6a27" gracePeriod=600 Feb 19 20:48:00 crc kubenswrapper[4813]: I0219 20:48:00.829660 4813 generic.go:334] "Generic (PLEG): container finished" podID="95dbe759-32fa-4d27-977a-399f24f2b75e" containerID="a049090551fee59e2c31ba5c9b591bac1416400a5428167549d71b93969a2a3e" exitCode=0 Feb 19 20:48:00 crc kubenswrapper[4813]: I0219 20:48:00.829700 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" event={"ID":"95dbe759-32fa-4d27-977a-399f24f2b75e","Type":"ContainerDied","Data":"a049090551fee59e2c31ba5c9b591bac1416400a5428167549d71b93969a2a3e"} Feb 19 20:48:00 crc kubenswrapper[4813]: I0219 20:48:00.834070 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="9a24a471de296b18d9e542deb93ef67e933265bda54aedaf108fe07429fd6a27" exitCode=0 Feb 19 20:48:00 crc kubenswrapper[4813]: I0219 20:48:00.834141 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" 
event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"9a24a471de296b18d9e542deb93ef67e933265bda54aedaf108fe07429fd6a27"} Feb 19 20:48:00 crc kubenswrapper[4813]: I0219 20:48:00.834247 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63"} Feb 19 20:48:00 crc kubenswrapper[4813]: I0219 20:48:00.834321 4813 scope.go:117] "RemoveContainer" containerID="db861e5d90a8de67f858b14686841353ea00fd3d2f4c54f7ce0d86d03a0106a3" Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.276935 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.372935 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-neutron-sriov-agent-neutron-config-0\") pod \"95dbe759-32fa-4d27-977a-399f24f2b75e\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.373031 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhsfs\" (UniqueName: \"kubernetes.io/projected/95dbe759-32fa-4d27-977a-399f24f2b75e-kube-api-access-fhsfs\") pod \"95dbe759-32fa-4d27-977a-399f24f2b75e\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.373077 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-ceph\") pod \"95dbe759-32fa-4d27-977a-399f24f2b75e\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") " Feb 19 20:48:02 crc 
kubenswrapper[4813]: I0219 20:48:02.373288 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-neutron-sriov-combined-ca-bundle\") pod \"95dbe759-32fa-4d27-977a-399f24f2b75e\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") "
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.373363 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-ssh-key-openstack-cell1\") pod \"95dbe759-32fa-4d27-977a-399f24f2b75e\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") "
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.373425 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-inventory\") pod \"95dbe759-32fa-4d27-977a-399f24f2b75e\" (UID: \"95dbe759-32fa-4d27-977a-399f24f2b75e\") "
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.378793 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "95dbe759-32fa-4d27-977a-399f24f2b75e" (UID: "95dbe759-32fa-4d27-977a-399f24f2b75e"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.382266 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95dbe759-32fa-4d27-977a-399f24f2b75e-kube-api-access-fhsfs" (OuterVolumeSpecName: "kube-api-access-fhsfs") pod "95dbe759-32fa-4d27-977a-399f24f2b75e" (UID: "95dbe759-32fa-4d27-977a-399f24f2b75e"). InnerVolumeSpecName "kube-api-access-fhsfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.383588 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-ceph" (OuterVolumeSpecName: "ceph") pod "95dbe759-32fa-4d27-977a-399f24f2b75e" (UID: "95dbe759-32fa-4d27-977a-399f24f2b75e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.404328 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "95dbe759-32fa-4d27-977a-399f24f2b75e" (UID: "95dbe759-32fa-4d27-977a-399f24f2b75e"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.406938 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "95dbe759-32fa-4d27-977a-399f24f2b75e" (UID: "95dbe759-32fa-4d27-977a-399f24f2b75e"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.408067 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-inventory" (OuterVolumeSpecName: "inventory") pod "95dbe759-32fa-4d27-977a-399f24f2b75e" (UID: "95dbe759-32fa-4d27-977a-399f24f2b75e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.476442 4813 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.476500 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.476542 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.476552 4813 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.476567 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhsfs\" (UniqueName: \"kubernetes.io/projected/95dbe759-32fa-4d27-977a-399f24f2b75e-kube-api-access-fhsfs\") on node \"crc\" DevicePath \"\""
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.476593 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/95dbe759-32fa-4d27-977a-399f24f2b75e-ceph\") on node \"crc\" DevicePath \"\""
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.856372 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t" event={"ID":"95dbe759-32fa-4d27-977a-399f24f2b75e","Type":"ContainerDied","Data":"8bf353baaf171eb10c21e86192ab29780d24a4be065d4dec292498067198c3da"}
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.856622 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-2hs7t"
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.856627 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bf353baaf171eb10c21e86192ab29780d24a4be065d4dec292498067198c3da"
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.972759 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"]
Feb 19 20:48:02 crc kubenswrapper[4813]: E0219 20:48:02.973312 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95dbe759-32fa-4d27-977a-399f24f2b75e" containerName="neutron-sriov-openstack-openstack-cell1"
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.973385 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="95dbe759-32fa-4d27-977a-399f24f2b75e" containerName="neutron-sriov-openstack-openstack-cell1"
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.973685 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="95dbe759-32fa-4d27-977a-399f24f2b75e" containerName="neutron-sriov-openstack-openstack-cell1"
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.975296 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.980737 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4"
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.981187 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.981311 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.981502 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.981581 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config"
Feb 19 20:48:02 crc kubenswrapper[4813]: I0219 20:48:02.995754 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"]
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.090779 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.091084 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.091249 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdlnv\" (UniqueName: \"kubernetes.io/projected/02ac4dac-b810-49a4-894d-c54e84a2d6f1-kube-api-access-fdlnv\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.091463 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.091624 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.091797 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.193139 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.193222 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.193280 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.193313 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.193331 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.193366 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdlnv\" (UniqueName: \"kubernetes.io/projected/02ac4dac-b810-49a4-894d-c54e84a2d6f1-kube-api-access-fdlnv\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.198367 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.199058 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.200940 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.202838 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.208391 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.214312 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdlnv\" (UniqueName: \"kubernetes.io/projected/02ac4dac-b810-49a4-894d-c54e84a2d6f1-kube-api-access-fdlnv\") pod \"neutron-dhcp-openstack-openstack-cell1-4g9wj\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.297452 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:48:03 crc kubenswrapper[4813]: I0219 20:48:03.866760 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"]
Feb 19 20:48:04 crc kubenswrapper[4813]: I0219 20:48:04.925659 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj" event={"ID":"02ac4dac-b810-49a4-894d-c54e84a2d6f1","Type":"ContainerStarted","Data":"3b2e26f6a5f798fa98d0a4ef2a839097d8be422a479d5c6258f2a2e7ee7907aa"}
Feb 19 20:48:04 crc kubenswrapper[4813]: I0219 20:48:04.926313 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj" event={"ID":"02ac4dac-b810-49a4-894d-c54e84a2d6f1","Type":"ContainerStarted","Data":"bd416cf1c5ffd2239d9ba5086da3281468a4019ae757a71202a9da88e291c1a5"}
Feb 19 20:48:04 crc kubenswrapper[4813]: I0219 20:48:04.969902 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj" podStartSLOduration=2.508368336 podStartE2EDuration="2.969878585s" podCreationTimestamp="2026-02-19 20:48:02 +0000 UTC" firstStartedPulling="2026-02-19 20:48:03.870973668 +0000 UTC m=+8303.096414219" lastFinishedPulling="2026-02-19 20:48:04.332483907 +0000 UTC m=+8303.557924468" observedRunningTime="2026-02-19 20:48:04.960369827 +0000 UTC m=+8304.185810408" watchObservedRunningTime="2026-02-19 20:48:04.969878585 +0000 UTC m=+8304.195319136"
Feb 19 20:49:22 crc kubenswrapper[4813]: I0219 20:49:22.830679 4813 generic.go:334] "Generic (PLEG): container finished" podID="02ac4dac-b810-49a4-894d-c54e84a2d6f1" containerID="3b2e26f6a5f798fa98d0a4ef2a839097d8be422a479d5c6258f2a2e7ee7907aa" exitCode=0
Feb 19 20:49:22 crc kubenswrapper[4813]: I0219 20:49:22.830723 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj" event={"ID":"02ac4dac-b810-49a4-894d-c54e84a2d6f1","Type":"ContainerDied","Data":"3b2e26f6a5f798fa98d0a4ef2a839097d8be422a479d5c6258f2a2e7ee7907aa"}
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.336399 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.453901 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-neutron-dhcp-combined-ca-bundle\") pod \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") "
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.453982 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-ssh-key-openstack-cell1\") pod \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") "
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.454060 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-ceph\") pod \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") "
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.454124 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-inventory\") pod \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") "
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.454228 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-neutron-dhcp-agent-neutron-config-0\") pod \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") "
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.454338 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdlnv\" (UniqueName: \"kubernetes.io/projected/02ac4dac-b810-49a4-894d-c54e84a2d6f1-kube-api-access-fdlnv\") pod \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\" (UID: \"02ac4dac-b810-49a4-894d-c54e84a2d6f1\") "
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.462198 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "02ac4dac-b810-49a4-894d-c54e84a2d6f1" (UID: "02ac4dac-b810-49a4-894d-c54e84a2d6f1"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.462250 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-ceph" (OuterVolumeSpecName: "ceph") pod "02ac4dac-b810-49a4-894d-c54e84a2d6f1" (UID: "02ac4dac-b810-49a4-894d-c54e84a2d6f1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.462283 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02ac4dac-b810-49a4-894d-c54e84a2d6f1-kube-api-access-fdlnv" (OuterVolumeSpecName: "kube-api-access-fdlnv") pod "02ac4dac-b810-49a4-894d-c54e84a2d6f1" (UID: "02ac4dac-b810-49a4-894d-c54e84a2d6f1"). InnerVolumeSpecName "kube-api-access-fdlnv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.487674 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "02ac4dac-b810-49a4-894d-c54e84a2d6f1" (UID: "02ac4dac-b810-49a4-894d-c54e84a2d6f1"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.488373 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-inventory" (OuterVolumeSpecName: "inventory") pod "02ac4dac-b810-49a4-894d-c54e84a2d6f1" (UID: "02ac4dac-b810-49a4-894d-c54e84a2d6f1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.495057 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "02ac4dac-b810-49a4-894d-c54e84a2d6f1" (UID: "02ac4dac-b810-49a4-894d-c54e84a2d6f1"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.557740 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.557791 4813 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.557813 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdlnv\" (UniqueName: \"kubernetes.io/projected/02ac4dac-b810-49a4-894d-c54e84a2d6f1-kube-api-access-fdlnv\") on node \"crc\" DevicePath \"\""
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.557828 4813 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.557845 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.557863 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/02ac4dac-b810-49a4-894d-c54e84a2d6f1-ceph\") on node \"crc\" DevicePath \"\""
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.855746 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj" event={"ID":"02ac4dac-b810-49a4-894d-c54e84a2d6f1","Type":"ContainerDied","Data":"bd416cf1c5ffd2239d9ba5086da3281468a4019ae757a71202a9da88e291c1a5"}
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.855786 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd416cf1c5ffd2239d9ba5086da3281468a4019ae757a71202a9da88e291c1a5"
Feb 19 20:49:24 crc kubenswrapper[4813]: I0219 20:49:24.855829 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-4g9wj"
Feb 19 20:49:37 crc kubenswrapper[4813]: I0219 20:49:37.985091 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 19 20:49:37 crc kubenswrapper[4813]: I0219 20:49:37.986264 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="20cd348c-2c8a-4a93-ba80-1b598b70b25f" containerName="nova-cell0-conductor-conductor" containerID="cri-o://e4c15b47d9f9e1714cfaf983d3219a19b900fda24d017274573a61c209190e75" gracePeriod=30
Feb 19 20:49:38 crc kubenswrapper[4813]: I0219 20:49:38.001466 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 19 20:49:38 crc kubenswrapper[4813]: I0219 20:49:38.001702 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="fdcf9cfd-6bb7-482e-ab95-f68f52b045ef" containerName="nova-cell1-conductor-conductor" containerID="cri-o://47002169e518c8ac76b52c010516d0a8761988c12aa436f2c9e4294ec8f69d4a" gracePeriod=30
Feb 19 20:49:38 crc kubenswrapper[4813]: E0219 20:49:38.126298 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e4c15b47d9f9e1714cfaf983d3219a19b900fda24d017274573a61c209190e75" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 19 20:49:38 crc kubenswrapper[4813]: E0219 20:49:38.128065 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e4c15b47d9f9e1714cfaf983d3219a19b900fda24d017274573a61c209190e75" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 19 20:49:38 crc kubenswrapper[4813]: E0219 20:49:38.129844 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e4c15b47d9f9e1714cfaf983d3219a19b900fda24d017274573a61c209190e75" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 19 20:49:38 crc kubenswrapper[4813]: E0219 20:49:38.129921 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="20cd348c-2c8a-4a93-ba80-1b598b70b25f" containerName="nova-cell0-conductor-conductor"
Feb 19 20:49:38 crc kubenswrapper[4813]: I0219 20:49:38.646812 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 20:49:38 crc kubenswrapper[4813]: I0219 20:49:38.647092 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="035233b2-efbc-4c6a-a82d-44c4742eed8d" containerName="nova-api-log" containerID="cri-o://31f65998b0ca5a56b7ce97c479cd10dc0c4e995d8cb6ddcb67d7297807b3885b" gracePeriod=30
Feb 19 20:49:38 crc kubenswrapper[4813]: I0219 20:49:38.647645 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="035233b2-efbc-4c6a-a82d-44c4742eed8d" containerName="nova-api-api" containerID="cri-o://e7056ded0e8bafde2b6360aac62e8b6b4d1e4ed5cf58d16ad713302e1d113557" gracePeriod=30
Feb 19 20:49:38 crc kubenswrapper[4813]: I0219 20:49:38.743287 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 20:49:38 crc kubenswrapper[4813]: I0219 20:49:38.743714 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4a04b863-542a-4de0-82c0-f8e12a63c47d" containerName="nova-metadata-log" containerID="cri-o://632809a3ca53594c5c70fbfaad235d24131ec52b12853bc8465d47d5f5220d30" gracePeriod=30
Feb 19 20:49:38 crc kubenswrapper[4813]: I0219 20:49:38.744191 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4a04b863-542a-4de0-82c0-f8e12a63c47d" containerName="nova-metadata-metadata" containerID="cri-o://511fe444441c9025dc975559ed8447b43400e4c27812fc496dd0959b8eb5ef90" gracePeriod=30
Feb 19 20:49:38 crc kubenswrapper[4813]: I0219 20:49:38.764070 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 20:49:38 crc kubenswrapper[4813]: I0219 20:49:38.764505 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="47c17d2d-dd38-4554-aefc-6a8132743f0d" containerName="nova-scheduler-scheduler" containerID="cri-o://7bfa4f372e4835757393b69e224be9541ef1026a262b108c84eeb2602ec68e1f" gracePeriod=30
Feb 19 20:49:38 crc kubenswrapper[4813]: I0219 20:49:38.878414 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-metadata-0" podUID="4a04b863-542a-4de0-82c0-f8e12a63c47d" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.79:8775/\": EOF"
Feb 19 20:49:38 crc kubenswrapper[4813]: I0219 20:49:38.878524 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4a04b863-542a-4de0-82c0-f8e12a63c47d" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.79:8775/\": EOF"
Feb 19 20:49:38 crc kubenswrapper[4813]: I0219 20:49:38.878556 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-metadata-0" podUID="4a04b863-542a-4de0-82c0-f8e12a63c47d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.79:8775/\": EOF"
Feb 19 20:49:38 crc kubenswrapper[4813]: I0219 20:49:38.997338 4813 generic.go:334] "Generic (PLEG): container finished" podID="4a04b863-542a-4de0-82c0-f8e12a63c47d" containerID="632809a3ca53594c5c70fbfaad235d24131ec52b12853bc8465d47d5f5220d30" exitCode=143
Feb 19 20:49:38 crc kubenswrapper[4813]: I0219 20:49:38.997380 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a04b863-542a-4de0-82c0-f8e12a63c47d","Type":"ContainerDied","Data":"632809a3ca53594c5c70fbfaad235d24131ec52b12853bc8465d47d5f5220d30"}
Feb 19 20:49:39 crc kubenswrapper[4813]: I0219 20:49:39.868494 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.008496 4813 generic.go:334] "Generic (PLEG): container finished" podID="035233b2-efbc-4c6a-a82d-44c4742eed8d" containerID="31f65998b0ca5a56b7ce97c479cd10dc0c4e995d8cb6ddcb67d7297807b3885b" exitCode=143
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.008554 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"035233b2-efbc-4c6a-a82d-44c4742eed8d","Type":"ContainerDied","Data":"31f65998b0ca5a56b7ce97c479cd10dc0c4e995d8cb6ddcb67d7297807b3885b"}
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.010280 4813 generic.go:334] "Generic (PLEG): container finished" podID="fdcf9cfd-6bb7-482e-ab95-f68f52b045ef" containerID="47002169e518c8ac76b52c010516d0a8761988c12aa436f2c9e4294ec8f69d4a" exitCode=0
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.010318 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef","Type":"ContainerDied","Data":"47002169e518c8ac76b52c010516d0a8761988c12aa436f2c9e4294ec8f69d4a"}
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.010367 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef","Type":"ContainerDied","Data":"fa58099fa6b5f3a0c91cc0b5c6808ea75d0193175ee15d7f80516e3be35c51b2"}
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.010393 4813 scope.go:117] "RemoveContainer" containerID="47002169e518c8ac76b52c010516d0a8761988c12aa436f2c9e4294ec8f69d4a"
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.010561 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.021073 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqscl\" (UniqueName: \"kubernetes.io/projected/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-kube-api-access-rqscl\") pod \"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef\" (UID: \"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef\") "
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.021470 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-combined-ca-bundle\") pod \"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef\" (UID: \"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef\") "
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.021716 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-config-data\") pod \"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef\" (UID: \"fdcf9cfd-6bb7-482e-ab95-f68f52b045ef\") "
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.027585 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-kube-api-access-rqscl" (OuterVolumeSpecName: "kube-api-access-rqscl") pod "fdcf9cfd-6bb7-482e-ab95-f68f52b045ef" (UID: "fdcf9cfd-6bb7-482e-ab95-f68f52b045ef"). InnerVolumeSpecName "kube-api-access-rqscl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.045658 4813 scope.go:117] "RemoveContainer" containerID="47002169e518c8ac76b52c010516d0a8761988c12aa436f2c9e4294ec8f69d4a"
Feb 19 20:49:40 crc kubenswrapper[4813]: E0219 20:49:40.046757 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47002169e518c8ac76b52c010516d0a8761988c12aa436f2c9e4294ec8f69d4a\": container with ID starting with 47002169e518c8ac76b52c010516d0a8761988c12aa436f2c9e4294ec8f69d4a not found: ID does not exist" containerID="47002169e518c8ac76b52c010516d0a8761988c12aa436f2c9e4294ec8f69d4a"
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.046797 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47002169e518c8ac76b52c010516d0a8761988c12aa436f2c9e4294ec8f69d4a"} err="failed to get container status \"47002169e518c8ac76b52c010516d0a8761988c12aa436f2c9e4294ec8f69d4a\": rpc error: code = NotFound desc = could not find container \"47002169e518c8ac76b52c010516d0a8761988c12aa436f2c9e4294ec8f69d4a\": container with ID starting with 47002169e518c8ac76b52c010516d0a8761988c12aa436f2c9e4294ec8f69d4a not found: ID does not exist"
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.053053 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-config-data" (OuterVolumeSpecName: "config-data") pod "fdcf9cfd-6bb7-482e-ab95-f68f52b045ef" (UID: "fdcf9cfd-6bb7-482e-ab95-f68f52b045ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.053091 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdcf9cfd-6bb7-482e-ab95-f68f52b045ef" (UID: "fdcf9cfd-6bb7-482e-ab95-f68f52b045ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.124200 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.124358 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqscl\" (UniqueName: \"kubernetes.io/projected/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-kube-api-access-rqscl\") on node \"crc\" DevicePath \"\""
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.124456 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.347619 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.357615 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.381410 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 19 20:49:40 crc kubenswrapper[4813]: E0219 20:49:40.381868 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdcf9cfd-6bb7-482e-ab95-f68f52b045ef" containerName="nova-cell1-conductor-conductor"
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.381888 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdcf9cfd-6bb7-482e-ab95-f68f52b045ef" containerName="nova-cell1-conductor-conductor"
Feb 19 20:49:40 crc kubenswrapper[4813]: E0219 20:49:40.381913 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02ac4dac-b810-49a4-894d-c54e84a2d6f1" containerName="neutron-dhcp-openstack-openstack-cell1"
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.381919 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="02ac4dac-b810-49a4-894d-c54e84a2d6f1" containerName="neutron-dhcp-openstack-openstack-cell1"
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.382133 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdcf9cfd-6bb7-482e-ab95-f68f52b045ef" containerName="nova-cell1-conductor-conductor"
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.382159 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="02ac4dac-b810-49a4-894d-c54e84a2d6f1" containerName="neutron-dhcp-openstack-openstack-cell1"
Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.382903 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.386287 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.390002 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.533557 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc943f6-cc8a-4b0b-9051-1db1115793a8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cdc943f6-cc8a-4b0b-9051-1db1115793a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.533746 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw75j\" (UniqueName: \"kubernetes.io/projected/cdc943f6-cc8a-4b0b-9051-1db1115793a8-kube-api-access-fw75j\") pod \"nova-cell1-conductor-0\" (UID: \"cdc943f6-cc8a-4b0b-9051-1db1115793a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.533826 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc943f6-cc8a-4b0b-9051-1db1115793a8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cdc943f6-cc8a-4b0b-9051-1db1115793a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.635939 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc943f6-cc8a-4b0b-9051-1db1115793a8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cdc943f6-cc8a-4b0b-9051-1db1115793a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:49:40 crc 
kubenswrapper[4813]: I0219 20:49:40.636071 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw75j\" (UniqueName: \"kubernetes.io/projected/cdc943f6-cc8a-4b0b-9051-1db1115793a8-kube-api-access-fw75j\") pod \"nova-cell1-conductor-0\" (UID: \"cdc943f6-cc8a-4b0b-9051-1db1115793a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.636106 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc943f6-cc8a-4b0b-9051-1db1115793a8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cdc943f6-cc8a-4b0b-9051-1db1115793a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.643341 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdc943f6-cc8a-4b0b-9051-1db1115793a8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cdc943f6-cc8a-4b0b-9051-1db1115793a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.643375 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc943f6-cc8a-4b0b-9051-1db1115793a8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cdc943f6-cc8a-4b0b-9051-1db1115793a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.655109 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw75j\" (UniqueName: \"kubernetes.io/projected/cdc943f6-cc8a-4b0b-9051-1db1115793a8-kube-api-access-fw75j\") pod \"nova-cell1-conductor-0\" (UID: \"cdc943f6-cc8a-4b0b-9051-1db1115793a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 20:49:40 crc kubenswrapper[4813]: I0219 20:49:40.705428 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 20:49:41 crc kubenswrapper[4813]: I0219 20:49:41.205260 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 20:49:41 crc kubenswrapper[4813]: I0219 20:49:41.487500 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdcf9cfd-6bb7-482e-ab95-f68f52b045ef" path="/var/lib/kubelet/pods/fdcf9cfd-6bb7-482e-ab95-f68f52b045ef/volumes" Feb 19 20:49:41 crc kubenswrapper[4813]: I0219 20:49:41.619001 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:49:41 crc kubenswrapper[4813]: I0219 20:49:41.760132 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47c17d2d-dd38-4554-aefc-6a8132743f0d-config-data\") pod \"47c17d2d-dd38-4554-aefc-6a8132743f0d\" (UID: \"47c17d2d-dd38-4554-aefc-6a8132743f0d\") " Feb 19 20:49:41 crc kubenswrapper[4813]: I0219 20:49:41.760198 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p7nn\" (UniqueName: \"kubernetes.io/projected/47c17d2d-dd38-4554-aefc-6a8132743f0d-kube-api-access-5p7nn\") pod \"47c17d2d-dd38-4554-aefc-6a8132743f0d\" (UID: \"47c17d2d-dd38-4554-aefc-6a8132743f0d\") " Feb 19 20:49:41 crc kubenswrapper[4813]: I0219 20:49:41.760560 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47c17d2d-dd38-4554-aefc-6a8132743f0d-combined-ca-bundle\") pod \"47c17d2d-dd38-4554-aefc-6a8132743f0d\" (UID: \"47c17d2d-dd38-4554-aefc-6a8132743f0d\") " Feb 19 20:49:41 crc kubenswrapper[4813]: I0219 20:49:41.766172 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c17d2d-dd38-4554-aefc-6a8132743f0d-kube-api-access-5p7nn" (OuterVolumeSpecName: "kube-api-access-5p7nn") pod 
"47c17d2d-dd38-4554-aefc-6a8132743f0d" (UID: "47c17d2d-dd38-4554-aefc-6a8132743f0d"). InnerVolumeSpecName "kube-api-access-5p7nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:49:41 crc kubenswrapper[4813]: I0219 20:49:41.797263 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47c17d2d-dd38-4554-aefc-6a8132743f0d-config-data" (OuterVolumeSpecName: "config-data") pod "47c17d2d-dd38-4554-aefc-6a8132743f0d" (UID: "47c17d2d-dd38-4554-aefc-6a8132743f0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:49:41 crc kubenswrapper[4813]: I0219 20:49:41.800837 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47c17d2d-dd38-4554-aefc-6a8132743f0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47c17d2d-dd38-4554-aefc-6a8132743f0d" (UID: "47c17d2d-dd38-4554-aefc-6a8132743f0d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:49:41 crc kubenswrapper[4813]: I0219 20:49:41.862656 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47c17d2d-dd38-4554-aefc-6a8132743f0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:49:41 crc kubenswrapper[4813]: I0219 20:49:41.862691 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47c17d2d-dd38-4554-aefc-6a8132743f0d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:49:41 crc kubenswrapper[4813]: I0219 20:49:41.862704 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p7nn\" (UniqueName: \"kubernetes.io/projected/47c17d2d-dd38-4554-aefc-6a8132743f0d-kube-api-access-5p7nn\") on node \"crc\" DevicePath \"\"" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.031430 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"cdc943f6-cc8a-4b0b-9051-1db1115793a8","Type":"ContainerStarted","Data":"934aaaa0e70a67114da251ac2b4b997752ba336bff55b9f29ffe91686616911c"} Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.031497 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"cdc943f6-cc8a-4b0b-9051-1db1115793a8","Type":"ContainerStarted","Data":"dc2595ff701338e92cee0f7cdd80c63fc088f898bc6e58a0d81aaed81c9d3ca1"} Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.031525 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.033349 4813 generic.go:334] "Generic (PLEG): container finished" podID="47c17d2d-dd38-4554-aefc-6a8132743f0d" containerID="7bfa4f372e4835757393b69e224be9541ef1026a262b108c84eeb2602ec68e1f" exitCode=0 Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.033391 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47c17d2d-dd38-4554-aefc-6a8132743f0d","Type":"ContainerDied","Data":"7bfa4f372e4835757393b69e224be9541ef1026a262b108c84eeb2602ec68e1f"} Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.033425 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47c17d2d-dd38-4554-aefc-6a8132743f0d","Type":"ContainerDied","Data":"08a6f1cb802d28eefe4a2835d3da492cb5400165f57a4dec01e32fe1518ae34b"} Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.033444 4813 scope.go:117] "RemoveContainer" containerID="7bfa4f372e4835757393b69e224be9541ef1026a262b108c84eeb2602ec68e1f" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.033470 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.048086 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.048070992 podStartE2EDuration="2.048070992s" podCreationTimestamp="2026-02-19 20:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:49:42.045550352 +0000 UTC m=+8401.270990903" watchObservedRunningTime="2026-02-19 20:49:42.048070992 +0000 UTC m=+8401.273511533" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.061743 4813 scope.go:117] "RemoveContainer" containerID="7bfa4f372e4835757393b69e224be9541ef1026a262b108c84eeb2602ec68e1f" Feb 19 20:49:42 crc kubenswrapper[4813]: E0219 20:49:42.062561 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bfa4f372e4835757393b69e224be9541ef1026a262b108c84eeb2602ec68e1f\": container with ID starting with 7bfa4f372e4835757393b69e224be9541ef1026a262b108c84eeb2602ec68e1f not found: ID does not 
exist" containerID="7bfa4f372e4835757393b69e224be9541ef1026a262b108c84eeb2602ec68e1f" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.062595 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bfa4f372e4835757393b69e224be9541ef1026a262b108c84eeb2602ec68e1f"} err="failed to get container status \"7bfa4f372e4835757393b69e224be9541ef1026a262b108c84eeb2602ec68e1f\": rpc error: code = NotFound desc = could not find container \"7bfa4f372e4835757393b69e224be9541ef1026a262b108c84eeb2602ec68e1f\": container with ID starting with 7bfa4f372e4835757393b69e224be9541ef1026a262b108c84eeb2602ec68e1f not found: ID does not exist" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.071844 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.083509 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.093247 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:49:42 crc kubenswrapper[4813]: E0219 20:49:42.093670 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c17d2d-dd38-4554-aefc-6a8132743f0d" containerName="nova-scheduler-scheduler" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.093693 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c17d2d-dd38-4554-aefc-6a8132743f0d" containerName="nova-scheduler-scheduler" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.093943 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="47c17d2d-dd38-4554-aefc-6a8132743f0d" containerName="nova-scheduler-scheduler" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.094967 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.098018 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.102578 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.273024 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgvx8\" (UniqueName: \"kubernetes.io/projected/c9287374-ca07-402f-9748-9d55df1d0d3c-kube-api-access-wgvx8\") pod \"nova-scheduler-0\" (UID: \"c9287374-ca07-402f-9748-9d55df1d0d3c\") " pod="openstack/nova-scheduler-0" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.273112 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9287374-ca07-402f-9748-9d55df1d0d3c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9287374-ca07-402f-9748-9d55df1d0d3c\") " pod="openstack/nova-scheduler-0" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.273158 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9287374-ca07-402f-9748-9d55df1d0d3c-config-data\") pod \"nova-scheduler-0\" (UID: \"c9287374-ca07-402f-9748-9d55df1d0d3c\") " pod="openstack/nova-scheduler-0" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.377624 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgvx8\" (UniqueName: \"kubernetes.io/projected/c9287374-ca07-402f-9748-9d55df1d0d3c-kube-api-access-wgvx8\") pod \"nova-scheduler-0\" (UID: \"c9287374-ca07-402f-9748-9d55df1d0d3c\") " pod="openstack/nova-scheduler-0" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.378838 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9287374-ca07-402f-9748-9d55df1d0d3c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9287374-ca07-402f-9748-9d55df1d0d3c\") " pod="openstack/nova-scheduler-0" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.379138 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9287374-ca07-402f-9748-9d55df1d0d3c-config-data\") pod \"nova-scheduler-0\" (UID: \"c9287374-ca07-402f-9748-9d55df1d0d3c\") " pod="openstack/nova-scheduler-0" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.385724 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9287374-ca07-402f-9748-9d55df1d0d3c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9287374-ca07-402f-9748-9d55df1d0d3c\") " pod="openstack/nova-scheduler-0" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.386244 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9287374-ca07-402f-9748-9d55df1d0d3c-config-data\") pod \"nova-scheduler-0\" (UID: \"c9287374-ca07-402f-9748-9d55df1d0d3c\") " pod="openstack/nova-scheduler-0" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.398683 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgvx8\" (UniqueName: \"kubernetes.io/projected/c9287374-ca07-402f-9748-9d55df1d0d3c-kube-api-access-wgvx8\") pod \"nova-scheduler-0\" (UID: \"c9287374-ca07-402f-9748-9d55df1d0d3c\") " pod="openstack/nova-scheduler-0" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.435036 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 20:49:42 crc kubenswrapper[4813]: I0219 20:49:42.982422 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.014046 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.067906 4813 generic.go:334] "Generic (PLEG): container finished" podID="035233b2-efbc-4c6a-a82d-44c4742eed8d" containerID="e7056ded0e8bafde2b6360aac62e8b6b4d1e4ed5cf58d16ad713302e1d113557" exitCode=0 Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.067996 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"035233b2-efbc-4c6a-a82d-44c4742eed8d","Type":"ContainerDied","Data":"e7056ded0e8bafde2b6360aac62e8b6b4d1e4ed5cf58d16ad713302e1d113557"} Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.068000 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.068040 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"035233b2-efbc-4c6a-a82d-44c4742eed8d","Type":"ContainerDied","Data":"0521cfc376211f3fd2d332281dbe2e05a1ac277569500737b24d1a442c082490"} Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.068056 4813 scope.go:117] "RemoveContainer" containerID="e7056ded0e8bafde2b6360aac62e8b6b4d1e4ed5cf58d16ad713302e1d113557" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.071076 4813 generic.go:334] "Generic (PLEG): container finished" podID="4a04b863-542a-4de0-82c0-f8e12a63c47d" containerID="511fe444441c9025dc975559ed8447b43400e4c27812fc496dd0959b8eb5ef90" exitCode=0 Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.071121 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a04b863-542a-4de0-82c0-f8e12a63c47d","Type":"ContainerDied","Data":"511fe444441c9025dc975559ed8447b43400e4c27812fc496dd0959b8eb5ef90"} Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.071139 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4a04b863-542a-4de0-82c0-f8e12a63c47d","Type":"ContainerDied","Data":"7fe0cda6ec50a690e5a91cb4c8d2c53299df304134be44acc7af86efac1fab5b"} Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.071147 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.095440 4813 scope.go:117] "RemoveContainer" containerID="31f65998b0ca5a56b7ce97c479cd10dc0c4e995d8cb6ddcb67d7297807b3885b" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.103237 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvr5x\" (UniqueName: \"kubernetes.io/projected/4a04b863-542a-4de0-82c0-f8e12a63c47d-kube-api-access-bvr5x\") pod \"4a04b863-542a-4de0-82c0-f8e12a63c47d\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") " Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.103304 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a04b863-542a-4de0-82c0-f8e12a63c47d-logs\") pod \"4a04b863-542a-4de0-82c0-f8e12a63c47d\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") " Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.103353 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/035233b2-efbc-4c6a-a82d-44c4742eed8d-config-data\") pod \"035233b2-efbc-4c6a-a82d-44c4742eed8d\" (UID: \"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.103381 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035233b2-efbc-4c6a-a82d-44c4742eed8d-combined-ca-bundle\") pod \"035233b2-efbc-4c6a-a82d-44c4742eed8d\" (UID: \"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.103459 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a04b863-542a-4de0-82c0-f8e12a63c47d-combined-ca-bundle\") pod \"4a04b863-542a-4de0-82c0-f8e12a63c47d\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") 
" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.103511 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knjvd\" (UniqueName: \"kubernetes.io/projected/035233b2-efbc-4c6a-a82d-44c4742eed8d-kube-api-access-knjvd\") pod \"035233b2-efbc-4c6a-a82d-44c4742eed8d\" (UID: \"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.103571 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/035233b2-efbc-4c6a-a82d-44c4742eed8d-logs\") pod \"035233b2-efbc-4c6a-a82d-44c4742eed8d\" (UID: \"035233b2-efbc-4c6a-a82d-44c4742eed8d\") " Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.103730 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a04b863-542a-4de0-82c0-f8e12a63c47d-config-data\") pod \"4a04b863-542a-4de0-82c0-f8e12a63c47d\" (UID: \"4a04b863-542a-4de0-82c0-f8e12a63c47d\") " Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.106762 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a04b863-542a-4de0-82c0-f8e12a63c47d-logs" (OuterVolumeSpecName: "logs") pod "4a04b863-542a-4de0-82c0-f8e12a63c47d" (UID: "4a04b863-542a-4de0-82c0-f8e12a63c47d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.110493 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/035233b2-efbc-4c6a-a82d-44c4742eed8d-logs" (OuterVolumeSpecName: "logs") pod "035233b2-efbc-4c6a-a82d-44c4742eed8d" (UID: "035233b2-efbc-4c6a-a82d-44c4742eed8d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.110728 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035233b2-efbc-4c6a-a82d-44c4742eed8d-kube-api-access-knjvd" (OuterVolumeSpecName: "kube-api-access-knjvd") pod "035233b2-efbc-4c6a-a82d-44c4742eed8d" (UID: "035233b2-efbc-4c6a-a82d-44c4742eed8d"). InnerVolumeSpecName "kube-api-access-knjvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.111456 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a04b863-542a-4de0-82c0-f8e12a63c47d-kube-api-access-bvr5x" (OuterVolumeSpecName: "kube-api-access-bvr5x") pod "4a04b863-542a-4de0-82c0-f8e12a63c47d" (UID: "4a04b863-542a-4de0-82c0-f8e12a63c47d"). InnerVolumeSpecName "kube-api-access-bvr5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:49:43 crc kubenswrapper[4813]: E0219 20:49:43.125928 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e4c15b47d9f9e1714cfaf983d3219a19b900fda24d017274573a61c209190e75" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.136511 4813 scope.go:117] "RemoveContainer" containerID="e7056ded0e8bafde2b6360aac62e8b6b4d1e4ed5cf58d16ad713302e1d113557" Feb 19 20:49:43 crc kubenswrapper[4813]: E0219 20:49:43.143567 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e4c15b47d9f9e1714cfaf983d3219a19b900fda24d017274573a61c209190e75" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 20:49:43 crc kubenswrapper[4813]: E0219 
20:49:43.143722 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7056ded0e8bafde2b6360aac62e8b6b4d1e4ed5cf58d16ad713302e1d113557\": container with ID starting with e7056ded0e8bafde2b6360aac62e8b6b4d1e4ed5cf58d16ad713302e1d113557 not found: ID does not exist" containerID="e7056ded0e8bafde2b6360aac62e8b6b4d1e4ed5cf58d16ad713302e1d113557" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.143746 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7056ded0e8bafde2b6360aac62e8b6b4d1e4ed5cf58d16ad713302e1d113557"} err="failed to get container status \"e7056ded0e8bafde2b6360aac62e8b6b4d1e4ed5cf58d16ad713302e1d113557\": rpc error: code = NotFound desc = could not find container \"e7056ded0e8bafde2b6360aac62e8b6b4d1e4ed5cf58d16ad713302e1d113557\": container with ID starting with e7056ded0e8bafde2b6360aac62e8b6b4d1e4ed5cf58d16ad713302e1d113557 not found: ID does not exist" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.143764 4813 scope.go:117] "RemoveContainer" containerID="31f65998b0ca5a56b7ce97c479cd10dc0c4e995d8cb6ddcb67d7297807b3885b" Feb 19 20:49:43 crc kubenswrapper[4813]: E0219 20:49:43.153879 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31f65998b0ca5a56b7ce97c479cd10dc0c4e995d8cb6ddcb67d7297807b3885b\": container with ID starting with 31f65998b0ca5a56b7ce97c479cd10dc0c4e995d8cb6ddcb67d7297807b3885b not found: ID does not exist" containerID="31f65998b0ca5a56b7ce97c479cd10dc0c4e995d8cb6ddcb67d7297807b3885b" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.153929 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31f65998b0ca5a56b7ce97c479cd10dc0c4e995d8cb6ddcb67d7297807b3885b"} err="failed to get container status \"31f65998b0ca5a56b7ce97c479cd10dc0c4e995d8cb6ddcb67d7297807b3885b\": rpc 
error: code = NotFound desc = could not find container \"31f65998b0ca5a56b7ce97c479cd10dc0c4e995d8cb6ddcb67d7297807b3885b\": container with ID starting with 31f65998b0ca5a56b7ce97c479cd10dc0c4e995d8cb6ddcb67d7297807b3885b not found: ID does not exist" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.153984 4813 scope.go:117] "RemoveContainer" containerID="511fe444441c9025dc975559ed8447b43400e4c27812fc496dd0959b8eb5ef90" Feb 19 20:49:43 crc kubenswrapper[4813]: E0219 20:49:43.155912 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e4c15b47d9f9e1714cfaf983d3219a19b900fda24d017274573a61c209190e75" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 19 20:49:43 crc kubenswrapper[4813]: E0219 20:49:43.156008 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="20cd348c-2c8a-4a93-ba80-1b598b70b25f" containerName="nova-cell0-conductor-conductor" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.157573 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/035233b2-efbc-4c6a-a82d-44c4742eed8d-config-data" (OuterVolumeSpecName: "config-data") pod "035233b2-efbc-4c6a-a82d-44c4742eed8d" (UID: "035233b2-efbc-4c6a-a82d-44c4742eed8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.164264 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a04b863-542a-4de0-82c0-f8e12a63c47d-config-data" (OuterVolumeSpecName: "config-data") pod "4a04b863-542a-4de0-82c0-f8e12a63c47d" (UID: "4a04b863-542a-4de0-82c0-f8e12a63c47d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.164466 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/035233b2-efbc-4c6a-a82d-44c4742eed8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "035233b2-efbc-4c6a-a82d-44c4742eed8d" (UID: "035233b2-efbc-4c6a-a82d-44c4742eed8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.170162 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a04b863-542a-4de0-82c0-f8e12a63c47d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a04b863-542a-4de0-82c0-f8e12a63c47d" (UID: "4a04b863-542a-4de0-82c0-f8e12a63c47d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.196658 4813 scope.go:117] "RemoveContainer" containerID="632809a3ca53594c5c70fbfaad235d24131ec52b12853bc8465d47d5f5220d30" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.206518 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvr5x\" (UniqueName: \"kubernetes.io/projected/4a04b863-542a-4de0-82c0-f8e12a63c47d-kube-api-access-bvr5x\") on node \"crc\" DevicePath \"\"" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.206709 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a04b863-542a-4de0-82c0-f8e12a63c47d-logs\") on node \"crc\" DevicePath \"\"" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.206984 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/035233b2-efbc-4c6a-a82d-44c4742eed8d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 
20:49:43.207077 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035233b2-efbc-4c6a-a82d-44c4742eed8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.207137 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a04b863-542a-4de0-82c0-f8e12a63c47d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.207189 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knjvd\" (UniqueName: \"kubernetes.io/projected/035233b2-efbc-4c6a-a82d-44c4742eed8d-kube-api-access-knjvd\") on node \"crc\" DevicePath \"\"" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.207252 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/035233b2-efbc-4c6a-a82d-44c4742eed8d-logs\") on node \"crc\" DevicePath \"\"" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.207312 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a04b863-542a-4de0-82c0-f8e12a63c47d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.240197 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.245173 4813 scope.go:117] "RemoveContainer" containerID="511fe444441c9025dc975559ed8447b43400e4c27812fc496dd0959b8eb5ef90" Feb 19 20:49:43 crc kubenswrapper[4813]: E0219 20:49:43.246548 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"511fe444441c9025dc975559ed8447b43400e4c27812fc496dd0959b8eb5ef90\": container with ID starting with 511fe444441c9025dc975559ed8447b43400e4c27812fc496dd0959b8eb5ef90 not found: ID does not 
exist" containerID="511fe444441c9025dc975559ed8447b43400e4c27812fc496dd0959b8eb5ef90" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.246590 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"511fe444441c9025dc975559ed8447b43400e4c27812fc496dd0959b8eb5ef90"} err="failed to get container status \"511fe444441c9025dc975559ed8447b43400e4c27812fc496dd0959b8eb5ef90\": rpc error: code = NotFound desc = could not find container \"511fe444441c9025dc975559ed8447b43400e4c27812fc496dd0959b8eb5ef90\": container with ID starting with 511fe444441c9025dc975559ed8447b43400e4c27812fc496dd0959b8eb5ef90 not found: ID does not exist" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.246623 4813 scope.go:117] "RemoveContainer" containerID="632809a3ca53594c5c70fbfaad235d24131ec52b12853bc8465d47d5f5220d30" Feb 19 20:49:43 crc kubenswrapper[4813]: E0219 20:49:43.246927 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"632809a3ca53594c5c70fbfaad235d24131ec52b12853bc8465d47d5f5220d30\": container with ID starting with 632809a3ca53594c5c70fbfaad235d24131ec52b12853bc8465d47d5f5220d30 not found: ID does not exist" containerID="632809a3ca53594c5c70fbfaad235d24131ec52b12853bc8465d47d5f5220d30" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.246972 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"632809a3ca53594c5c70fbfaad235d24131ec52b12853bc8465d47d5f5220d30"} err="failed to get container status \"632809a3ca53594c5c70fbfaad235d24131ec52b12853bc8465d47d5f5220d30\": rpc error: code = NotFound desc = could not find container \"632809a3ca53594c5c70fbfaad235d24131ec52b12853bc8465d47d5f5220d30\": container with ID starting with 632809a3ca53594c5c70fbfaad235d24131ec52b12853bc8465d47d5f5220d30 not found: ID does not exist" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.483001 4813 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c17d2d-dd38-4554-aefc-6a8132743f0d" path="/var/lib/kubelet/pods/47c17d2d-dd38-4554-aefc-6a8132743f0d/volumes" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.532611 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.542062 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.553882 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.564104 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.573315 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:49:43 crc kubenswrapper[4813]: E0219 20:49:43.573927 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035233b2-efbc-4c6a-a82d-44c4742eed8d" containerName="nova-api-log" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.573967 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="035233b2-efbc-4c6a-a82d-44c4742eed8d" containerName="nova-api-log" Feb 19 20:49:43 crc kubenswrapper[4813]: E0219 20:49:43.574000 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a04b863-542a-4de0-82c0-f8e12a63c47d" containerName="nova-metadata-metadata" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.574009 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a04b863-542a-4de0-82c0-f8e12a63c47d" containerName="nova-metadata-metadata" Feb 19 20:49:43 crc kubenswrapper[4813]: E0219 20:49:43.574029 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035233b2-efbc-4c6a-a82d-44c4742eed8d" containerName="nova-api-api" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.574038 4813 
state_mem.go:107] "Deleted CPUSet assignment" podUID="035233b2-efbc-4c6a-a82d-44c4742eed8d" containerName="nova-api-api" Feb 19 20:49:43 crc kubenswrapper[4813]: E0219 20:49:43.574065 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a04b863-542a-4de0-82c0-f8e12a63c47d" containerName="nova-metadata-log" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.574073 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a04b863-542a-4de0-82c0-f8e12a63c47d" containerName="nova-metadata-log" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.574324 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a04b863-542a-4de0-82c0-f8e12a63c47d" containerName="nova-metadata-log" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.574355 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="035233b2-efbc-4c6a-a82d-44c4742eed8d" containerName="nova-api-log" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.574373 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="035233b2-efbc-4c6a-a82d-44c4742eed8d" containerName="nova-api-api" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.574388 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a04b863-542a-4de0-82c0-f8e12a63c47d" containerName="nova-metadata-metadata" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.575814 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.580643 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.582895 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.584773 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.589997 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.595512 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.613013 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.716596 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbd5f90-9900-44d3-bb78-efdb6e73d324-logs\") pod \"nova-metadata-0\" (UID: \"7bbd5f90-9900-44d3-bb78-efdb6e73d324\") " pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.716931 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678728e5-b457-4f2e-8bc6-3599a2879262-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"678728e5-b457-4f2e-8bc6-3599a2879262\") " pod="openstack/nova-api-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.717105 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frz8f\" (UniqueName: \"kubernetes.io/projected/678728e5-b457-4f2e-8bc6-3599a2879262-kube-api-access-frz8f\") pod \"nova-api-0\" (UID: \"678728e5-b457-4f2e-8bc6-3599a2879262\") " pod="openstack/nova-api-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.717153 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q2lb\" (UniqueName: \"kubernetes.io/projected/7bbd5f90-9900-44d3-bb78-efdb6e73d324-kube-api-access-2q2lb\") pod \"nova-metadata-0\" (UID: 
\"7bbd5f90-9900-44d3-bb78-efdb6e73d324\") " pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.717178 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678728e5-b457-4f2e-8bc6-3599a2879262-config-data\") pod \"nova-api-0\" (UID: \"678728e5-b457-4f2e-8bc6-3599a2879262\") " pod="openstack/nova-api-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.717238 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbd5f90-9900-44d3-bb78-efdb6e73d324-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7bbd5f90-9900-44d3-bb78-efdb6e73d324\") " pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.717291 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbd5f90-9900-44d3-bb78-efdb6e73d324-config-data\") pod \"nova-metadata-0\" (UID: \"7bbd5f90-9900-44d3-bb78-efdb6e73d324\") " pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.717366 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/678728e5-b457-4f2e-8bc6-3599a2879262-logs\") pod \"nova-api-0\" (UID: \"678728e5-b457-4f2e-8bc6-3599a2879262\") " pod="openstack/nova-api-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.818758 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/678728e5-b457-4f2e-8bc6-3599a2879262-logs\") pod \"nova-api-0\" (UID: \"678728e5-b457-4f2e-8bc6-3599a2879262\") " pod="openstack/nova-api-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.818828 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbd5f90-9900-44d3-bb78-efdb6e73d324-logs\") pod \"nova-metadata-0\" (UID: \"7bbd5f90-9900-44d3-bb78-efdb6e73d324\") " pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.818857 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678728e5-b457-4f2e-8bc6-3599a2879262-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"678728e5-b457-4f2e-8bc6-3599a2879262\") " pod="openstack/nova-api-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.818939 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frz8f\" (UniqueName: \"kubernetes.io/projected/678728e5-b457-4f2e-8bc6-3599a2879262-kube-api-access-frz8f\") pod \"nova-api-0\" (UID: \"678728e5-b457-4f2e-8bc6-3599a2879262\") " pod="openstack/nova-api-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.818986 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q2lb\" (UniqueName: \"kubernetes.io/projected/7bbd5f90-9900-44d3-bb78-efdb6e73d324-kube-api-access-2q2lb\") pod \"nova-metadata-0\" (UID: \"7bbd5f90-9900-44d3-bb78-efdb6e73d324\") " pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.819004 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678728e5-b457-4f2e-8bc6-3599a2879262-config-data\") pod \"nova-api-0\" (UID: \"678728e5-b457-4f2e-8bc6-3599a2879262\") " pod="openstack/nova-api-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.819046 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbd5f90-9900-44d3-bb78-efdb6e73d324-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"7bbd5f90-9900-44d3-bb78-efdb6e73d324\") " pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.819075 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbd5f90-9900-44d3-bb78-efdb6e73d324-config-data\") pod \"nova-metadata-0\" (UID: \"7bbd5f90-9900-44d3-bb78-efdb6e73d324\") " pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.819272 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/678728e5-b457-4f2e-8bc6-3599a2879262-logs\") pod \"nova-api-0\" (UID: \"678728e5-b457-4f2e-8bc6-3599a2879262\") " pod="openstack/nova-api-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.819412 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbd5f90-9900-44d3-bb78-efdb6e73d324-logs\") pod \"nova-metadata-0\" (UID: \"7bbd5f90-9900-44d3-bb78-efdb6e73d324\") " pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.823576 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bbd5f90-9900-44d3-bb78-efdb6e73d324-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7bbd5f90-9900-44d3-bb78-efdb6e73d324\") " pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.824178 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbd5f90-9900-44d3-bb78-efdb6e73d324-config-data\") pod \"nova-metadata-0\" (UID: \"7bbd5f90-9900-44d3-bb78-efdb6e73d324\") " pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.827318 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/678728e5-b457-4f2e-8bc6-3599a2879262-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"678728e5-b457-4f2e-8bc6-3599a2879262\") " pod="openstack/nova-api-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.830037 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678728e5-b457-4f2e-8bc6-3599a2879262-config-data\") pod \"nova-api-0\" (UID: \"678728e5-b457-4f2e-8bc6-3599a2879262\") " pod="openstack/nova-api-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.838348 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q2lb\" (UniqueName: \"kubernetes.io/projected/7bbd5f90-9900-44d3-bb78-efdb6e73d324-kube-api-access-2q2lb\") pod \"nova-metadata-0\" (UID: \"7bbd5f90-9900-44d3-bb78-efdb6e73d324\") " pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.851927 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frz8f\" (UniqueName: \"kubernetes.io/projected/678728e5-b457-4f2e-8bc6-3599a2879262-kube-api-access-frz8f\") pod \"nova-api-0\" (UID: \"678728e5-b457-4f2e-8bc6-3599a2879262\") " pod="openstack/nova-api-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.896171 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 20:49:43 crc kubenswrapper[4813]: I0219 20:49:43.912343 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 20:49:44 crc kubenswrapper[4813]: I0219 20:49:44.089683 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9287374-ca07-402f-9748-9d55df1d0d3c","Type":"ContainerStarted","Data":"aeb9c1ca85e4b877afa16368f9055077aa02f39fcd55730a21985e0af310c397"} Feb 19 20:49:44 crc kubenswrapper[4813]: I0219 20:49:44.089966 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9287374-ca07-402f-9748-9d55df1d0d3c","Type":"ContainerStarted","Data":"6b65163d188284fd0c4a65beae4ffb9ad31935f050d81187eda48727817ded28"} Feb 19 20:49:44 crc kubenswrapper[4813]: I0219 20:49:44.115595 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.115576196 podStartE2EDuration="2.115576196s" podCreationTimestamp="2026-02-19 20:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:49:44.112139138 +0000 UTC m=+8403.337579679" watchObservedRunningTime="2026-02-19 20:49:44.115576196 +0000 UTC m=+8403.341016727" Feb 19 20:49:44 crc kubenswrapper[4813]: I0219 20:49:44.413241 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 20:49:44 crc kubenswrapper[4813]: W0219 20:49:44.423863 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bbd5f90_9900_44d3_bb78_efdb6e73d324.slice/crio-8345d26584c9ecf77f3b5b504eecfa62b5c8feb2443bcc7a9754610d431b9589 WatchSource:0}: Error finding container 8345d26584c9ecf77f3b5b504eecfa62b5c8feb2443bcc7a9754610d431b9589: Status 404 returned error can't find the container with id 8345d26584c9ecf77f3b5b504eecfa62b5c8feb2443bcc7a9754610d431b9589 Feb 19 20:49:44 crc kubenswrapper[4813]: W0219 20:49:44.521910 4813 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod678728e5_b457_4f2e_8bc6_3599a2879262.slice/crio-2303ef42686586e3fbd851ae00a41197418cb19b617a22f81263b06172aa1f18 WatchSource:0}: Error finding container 2303ef42686586e3fbd851ae00a41197418cb19b617a22f81263b06172aa1f18: Status 404 returned error can't find the container with id 2303ef42686586e3fbd851ae00a41197418cb19b617a22f81263b06172aa1f18 Feb 19 20:49:44 crc kubenswrapper[4813]: I0219 20:49:44.525025 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 20:49:45 crc kubenswrapper[4813]: I0219 20:49:45.102324 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"678728e5-b457-4f2e-8bc6-3599a2879262","Type":"ContainerStarted","Data":"4d4b67e415f19ec3fcf6ff61269dfad67f2b366d5296ff9d08334af794789b5f"} Feb 19 20:49:45 crc kubenswrapper[4813]: I0219 20:49:45.103044 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"678728e5-b457-4f2e-8bc6-3599a2879262","Type":"ContainerStarted","Data":"c8ef5da2c95aa58b0874eff9661e588afc79a2ac7134bd567603a4eedaed56f8"} Feb 19 20:49:45 crc kubenswrapper[4813]: I0219 20:49:45.103068 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"678728e5-b457-4f2e-8bc6-3599a2879262","Type":"ContainerStarted","Data":"2303ef42686586e3fbd851ae00a41197418cb19b617a22f81263b06172aa1f18"} Feb 19 20:49:45 crc kubenswrapper[4813]: I0219 20:49:45.107239 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bbd5f90-9900-44d3-bb78-efdb6e73d324","Type":"ContainerStarted","Data":"6f9fc2de2b6579726f04873926b6e19a2765b58a19ce77bba0402f86c23a7cf7"} Feb 19 20:49:45 crc kubenswrapper[4813]: I0219 20:49:45.107283 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"7bbd5f90-9900-44d3-bb78-efdb6e73d324","Type":"ContainerStarted","Data":"9dfd04e2c610d66387a72643c7505ea868329126ec66a12d0da1c4c295c7d299"} Feb 19 20:49:45 crc kubenswrapper[4813]: I0219 20:49:45.107303 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7bbd5f90-9900-44d3-bb78-efdb6e73d324","Type":"ContainerStarted","Data":"8345d26584c9ecf77f3b5b504eecfa62b5c8feb2443bcc7a9754610d431b9589"} Feb 19 20:49:45 crc kubenswrapper[4813]: I0219 20:49:45.139905 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.139880672 podStartE2EDuration="2.139880672s" podCreationTimestamp="2026-02-19 20:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:49:45.121807766 +0000 UTC m=+8404.347248347" watchObservedRunningTime="2026-02-19 20:49:45.139880672 +0000 UTC m=+8404.365321213" Feb 19 20:49:45 crc kubenswrapper[4813]: I0219 20:49:45.162897 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.162871984 podStartE2EDuration="2.162871984s" podCreationTimestamp="2026-02-19 20:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:49:45.157169595 +0000 UTC m=+8404.382610136" watchObservedRunningTime="2026-02-19 20:49:45.162871984 +0000 UTC m=+8404.388312525" Feb 19 20:49:45 crc kubenswrapper[4813]: I0219 20:49:45.493331 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="035233b2-efbc-4c6a-a82d-44c4742eed8d" path="/var/lib/kubelet/pods/035233b2-efbc-4c6a-a82d-44c4742eed8d/volumes" Feb 19 20:49:45 crc kubenswrapper[4813]: I0219 20:49:45.495753 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a04b863-542a-4de0-82c0-f8e12a63c47d" 
path="/var/lib/kubelet/pods/4a04b863-542a-4de0-82c0-f8e12a63c47d/volumes" Feb 19 20:49:47 crc kubenswrapper[4813]: I0219 20:49:47.132117 4813 generic.go:334] "Generic (PLEG): container finished" podID="20cd348c-2c8a-4a93-ba80-1b598b70b25f" containerID="e4c15b47d9f9e1714cfaf983d3219a19b900fda24d017274573a61c209190e75" exitCode=0 Feb 19 20:49:47 crc kubenswrapper[4813]: I0219 20:49:47.132506 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"20cd348c-2c8a-4a93-ba80-1b598b70b25f","Type":"ContainerDied","Data":"e4c15b47d9f9e1714cfaf983d3219a19b900fda24d017274573a61c209190e75"} Feb 19 20:49:47 crc kubenswrapper[4813]: I0219 20:49:47.436162 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 20:49:47 crc kubenswrapper[4813]: I0219 20:49:47.523651 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 20:49:47 crc kubenswrapper[4813]: I0219 20:49:47.613984 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwskt\" (UniqueName: \"kubernetes.io/projected/20cd348c-2c8a-4a93-ba80-1b598b70b25f-kube-api-access-rwskt\") pod \"20cd348c-2c8a-4a93-ba80-1b598b70b25f\" (UID: \"20cd348c-2c8a-4a93-ba80-1b598b70b25f\") " Feb 19 20:49:47 crc kubenswrapper[4813]: I0219 20:49:47.614120 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20cd348c-2c8a-4a93-ba80-1b598b70b25f-config-data\") pod \"20cd348c-2c8a-4a93-ba80-1b598b70b25f\" (UID: \"20cd348c-2c8a-4a93-ba80-1b598b70b25f\") " Feb 19 20:49:47 crc kubenswrapper[4813]: I0219 20:49:47.614249 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cd348c-2c8a-4a93-ba80-1b598b70b25f-combined-ca-bundle\") pod 
\"20cd348c-2c8a-4a93-ba80-1b598b70b25f\" (UID: \"20cd348c-2c8a-4a93-ba80-1b598b70b25f\") " Feb 19 20:49:47 crc kubenswrapper[4813]: I0219 20:49:47.620287 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20cd348c-2c8a-4a93-ba80-1b598b70b25f-kube-api-access-rwskt" (OuterVolumeSpecName: "kube-api-access-rwskt") pod "20cd348c-2c8a-4a93-ba80-1b598b70b25f" (UID: "20cd348c-2c8a-4a93-ba80-1b598b70b25f"). InnerVolumeSpecName "kube-api-access-rwskt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:49:47 crc kubenswrapper[4813]: I0219 20:49:47.647441 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20cd348c-2c8a-4a93-ba80-1b598b70b25f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20cd348c-2c8a-4a93-ba80-1b598b70b25f" (UID: "20cd348c-2c8a-4a93-ba80-1b598b70b25f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:49:47 crc kubenswrapper[4813]: I0219 20:49:47.660116 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20cd348c-2c8a-4a93-ba80-1b598b70b25f-config-data" (OuterVolumeSpecName: "config-data") pod "20cd348c-2c8a-4a93-ba80-1b598b70b25f" (UID: "20cd348c-2c8a-4a93-ba80-1b598b70b25f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:49:47 crc kubenswrapper[4813]: I0219 20:49:47.716854 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20cd348c-2c8a-4a93-ba80-1b598b70b25f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:49:47 crc kubenswrapper[4813]: I0219 20:49:47.716883 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cd348c-2c8a-4a93-ba80-1b598b70b25f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:49:47 crc kubenswrapper[4813]: I0219 20:49:47.716894 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwskt\" (UniqueName: \"kubernetes.io/projected/20cd348c-2c8a-4a93-ba80-1b598b70b25f-kube-api-access-rwskt\") on node \"crc\" DevicePath \"\"" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.157692 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"20cd348c-2c8a-4a93-ba80-1b598b70b25f","Type":"ContainerDied","Data":"fb4205248bd32b1f82dd31b3db7ffe89122943010bc5bc10297756279593d3f8"} Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.157750 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.157756 4813 scope.go:117] "RemoveContainer" containerID="e4c15b47d9f9e1714cfaf983d3219a19b900fda24d017274573a61c209190e75" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.208453 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.254933 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.266079 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 20:49:48 crc kubenswrapper[4813]: E0219 20:49:48.266779 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20cd348c-2c8a-4a93-ba80-1b598b70b25f" containerName="nova-cell0-conductor-conductor" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.266801 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cd348c-2c8a-4a93-ba80-1b598b70b25f" containerName="nova-cell0-conductor-conductor" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.267056 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="20cd348c-2c8a-4a93-ba80-1b598b70b25f" containerName="nova-cell0-conductor-conductor" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.267822 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.270623 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.278030 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.437837 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38051563-e7d4-4d23-a5ee-ef6608fc7004-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"38051563-e7d4-4d23-a5ee-ef6608fc7004\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.438064 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38051563-e7d4-4d23-a5ee-ef6608fc7004-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"38051563-e7d4-4d23-a5ee-ef6608fc7004\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.438087 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmvjf\" (UniqueName: \"kubernetes.io/projected/38051563-e7d4-4d23-a5ee-ef6608fc7004-kube-api-access-dmvjf\") pod \"nova-cell0-conductor-0\" (UID: \"38051563-e7d4-4d23-a5ee-ef6608fc7004\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.540148 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38051563-e7d4-4d23-a5ee-ef6608fc7004-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"38051563-e7d4-4d23-a5ee-ef6608fc7004\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:49:48 crc 
kubenswrapper[4813]: I0219 20:49:48.540543 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmvjf\" (UniqueName: \"kubernetes.io/projected/38051563-e7d4-4d23-a5ee-ef6608fc7004-kube-api-access-dmvjf\") pod \"nova-cell0-conductor-0\" (UID: \"38051563-e7d4-4d23-a5ee-ef6608fc7004\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.540689 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38051563-e7d4-4d23-a5ee-ef6608fc7004-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"38051563-e7d4-4d23-a5ee-ef6608fc7004\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.549046 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38051563-e7d4-4d23-a5ee-ef6608fc7004-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"38051563-e7d4-4d23-a5ee-ef6608fc7004\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.554652 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38051563-e7d4-4d23-a5ee-ef6608fc7004-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"38051563-e7d4-4d23-a5ee-ef6608fc7004\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.558732 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmvjf\" (UniqueName: \"kubernetes.io/projected/38051563-e7d4-4d23-a5ee-ef6608fc7004-kube-api-access-dmvjf\") pod \"nova-cell0-conductor-0\" (UID: \"38051563-e7d4-4d23-a5ee-ef6608fc7004\") " pod="openstack/nova-cell0-conductor-0" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.587236 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.896306 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 20:49:48 crc kubenswrapper[4813]: I0219 20:49:48.896400 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 20:49:49 crc kubenswrapper[4813]: I0219 20:49:49.094542 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 20:49:49 crc kubenswrapper[4813]: I0219 20:49:49.168884 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"38051563-e7d4-4d23-a5ee-ef6608fc7004","Type":"ContainerStarted","Data":"86a3e6f28f18de7030c036cd3450d667f480450bfe31671a3cda95f269be7e05"} Feb 19 20:49:49 crc kubenswrapper[4813]: I0219 20:49:49.484367 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20cd348c-2c8a-4a93-ba80-1b598b70b25f" path="/var/lib/kubelet/pods/20cd348c-2c8a-4a93-ba80-1b598b70b25f/volumes" Feb 19 20:49:50 crc kubenswrapper[4813]: I0219 20:49:50.184937 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"38051563-e7d4-4d23-a5ee-ef6608fc7004","Type":"ContainerStarted","Data":"84e74c26c7df3b63e23b43ffb8d177276f951ef7843e16291230cbf3e6a73119"} Feb 19 20:49:50 crc kubenswrapper[4813]: I0219 20:49:50.185572 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 20:49:50 crc kubenswrapper[4813]: I0219 20:49:50.217644 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.217615258 podStartE2EDuration="2.217615258s" podCreationTimestamp="2026-02-19 20:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 20:49:50.201456021 +0000 UTC m=+8409.426896602" watchObservedRunningTime="2026-02-19 20:49:50.217615258 +0000 UTC m=+8409.443055809" Feb 19 20:49:50 crc kubenswrapper[4813]: I0219 20:49:50.749533 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 20:49:52 crc kubenswrapper[4813]: I0219 20:49:52.436677 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 20:49:52 crc kubenswrapper[4813]: I0219 20:49:52.489080 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 20:49:53 crc kubenswrapper[4813]: I0219 20:49:53.240730 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 20:49:53 crc kubenswrapper[4813]: I0219 20:49:53.897259 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 20:49:53 crc kubenswrapper[4813]: I0219 20:49:53.897303 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 20:49:53 crc kubenswrapper[4813]: I0219 20:49:53.913054 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 20:49:53 crc kubenswrapper[4813]: I0219 20:49:53.913099 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 20:49:55 crc kubenswrapper[4813]: I0219 20:49:55.062122 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7bbd5f90-9900-44d3-bb78-efdb6e73d324" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.186:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:49:55 crc kubenswrapper[4813]: I0219 20:49:55.062295 4813 prober.go:107] "Probe 
failed" probeType="Startup" pod="openstack/nova-api-0" podUID="678728e5-b457-4f2e-8bc6-3599a2879262" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:49:55 crc kubenswrapper[4813]: I0219 20:49:55.062894 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7bbd5f90-9900-44d3-bb78-efdb6e73d324" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.186:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:49:55 crc kubenswrapper[4813]: I0219 20:49:55.062936 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="678728e5-b457-4f2e-8bc6-3599a2879262" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:49:58 crc kubenswrapper[4813]: I0219 20:49:58.629015 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 20:50:00 crc kubenswrapper[4813]: I0219 20:50:00.329790 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:50:00 crc kubenswrapper[4813]: I0219 20:50:00.330168 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:50:03 crc kubenswrapper[4813]: I0219 20:50:03.899979 4813 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 20:50:03 crc kubenswrapper[4813]: I0219 20:50:03.900638 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 20:50:03 crc kubenswrapper[4813]: I0219 20:50:03.903207 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 20:50:03 crc kubenswrapper[4813]: I0219 20:50:03.903667 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 20:50:03 crc kubenswrapper[4813]: I0219 20:50:03.917809 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 20:50:03 crc kubenswrapper[4813]: I0219 20:50:03.918493 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 20:50:03 crc kubenswrapper[4813]: I0219 20:50:03.918632 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 20:50:03 crc kubenswrapper[4813]: I0219 20:50:03.923469 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 20:50:04 crc kubenswrapper[4813]: I0219 20:50:04.320771 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 20:50:04 crc kubenswrapper[4813]: I0219 20:50:04.324673 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.562672 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c"] Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.564188 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.570318 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.570329 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.571890 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.571929 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.573288 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.574537 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.574546 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2ttn4" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.583690 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c"] Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.726415 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.726786 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.726858 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.726923 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.726996 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.727066 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.727169 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.727206 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28lj4\" (UniqueName: \"kubernetes.io/projected/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-kube-api-access-28lj4\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.727235 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.727300 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.727433 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.727478 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.727528 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.829103 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.829468 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.829595 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.829786 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 
19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.829886 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.830011 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.830136 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.830269 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.830400 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.830579 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.830699 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28lj4\" (UniqueName: \"kubernetes.io/projected/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-kube-api-access-28lj4\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.830832 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.830929 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.835705 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.836632 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.837739 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-2\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.838191 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cells-global-config-1\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.838392 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.839009 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.840350 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.841267 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.841832 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-3\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.842280 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.843182 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.844066 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc 
kubenswrapper[4813]: I0219 20:50:05.854451 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28lj4\" (UniqueName: \"kubernetes.io/projected/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-kube-api-access-28lj4\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:05 crc kubenswrapper[4813]: I0219 20:50:05.883302 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:50:06 crc kubenswrapper[4813]: I0219 20:50:06.439519 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c"] Feb 19 20:50:07 crc kubenswrapper[4813]: I0219 20:50:07.354824 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" event={"ID":"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6","Type":"ContainerStarted","Data":"67fd85ec276018c89a6f118893722acae6406a9291826dcd951650f8c1713a74"} Feb 19 20:50:07 crc kubenswrapper[4813]: I0219 20:50:07.355545 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" event={"ID":"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6","Type":"ContainerStarted","Data":"20929b9d46beed745e049e248646f9c5db33a60d98da16d30da3181c60801823"} Feb 19 20:50:07 crc kubenswrapper[4813]: I0219 20:50:07.377882 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" podStartSLOduration=1.8802330029999998 podStartE2EDuration="2.377862976s" podCreationTimestamp="2026-02-19 20:50:05 +0000 UTC" firstStartedPulling="2026-02-19 20:50:06.443869303 +0000 UTC m=+8425.669309844" 
lastFinishedPulling="2026-02-19 20:50:06.941499276 +0000 UTC m=+8426.166939817" observedRunningTime="2026-02-19 20:50:07.377078712 +0000 UTC m=+8426.602519303" watchObservedRunningTime="2026-02-19 20:50:07.377862976 +0000 UTC m=+8426.603303517" Feb 19 20:50:30 crc kubenswrapper[4813]: I0219 20:50:30.329931 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:50:30 crc kubenswrapper[4813]: I0219 20:50:30.330681 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:51:00 crc kubenswrapper[4813]: I0219 20:51:00.329913 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:51:00 crc kubenswrapper[4813]: I0219 20:51:00.330572 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:51:00 crc kubenswrapper[4813]: I0219 20:51:00.330616 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 20:51:00 crc kubenswrapper[4813]: I0219 
20:51:00.331544 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:51:00 crc kubenswrapper[4813]: I0219 20:51:00.331617 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" gracePeriod=600 Feb 19 20:51:00 crc kubenswrapper[4813]: E0219 20:51:00.488888 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:51:01 crc kubenswrapper[4813]: I0219 20:51:01.045633 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" exitCode=0 Feb 19 20:51:01 crc kubenswrapper[4813]: I0219 20:51:01.045695 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63"} Feb 19 20:51:01 crc kubenswrapper[4813]: I0219 20:51:01.045747 4813 scope.go:117] "RemoveContainer" 
containerID="9a24a471de296b18d9e542deb93ef67e933265bda54aedaf108fe07429fd6a27" Feb 19 20:51:01 crc kubenswrapper[4813]: I0219 20:51:01.046622 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:51:01 crc kubenswrapper[4813]: E0219 20:51:01.047008 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:51:14 crc kubenswrapper[4813]: I0219 20:51:14.471880 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:51:14 crc kubenswrapper[4813]: E0219 20:51:14.472767 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:51:28 crc kubenswrapper[4813]: I0219 20:51:28.471919 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:51:28 crc kubenswrapper[4813]: E0219 20:51:28.472650 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:51:42 crc kubenswrapper[4813]: I0219 20:51:42.473047 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:51:42 crc kubenswrapper[4813]: E0219 20:51:42.474220 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:51:55 crc kubenswrapper[4813]: I0219 20:51:55.471785 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:51:55 crc kubenswrapper[4813]: E0219 20:51:55.473133 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:52:07 crc kubenswrapper[4813]: I0219 20:52:07.471603 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:52:07 crc kubenswrapper[4813]: E0219 20:52:07.472326 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:52:19 crc kubenswrapper[4813]: I0219 20:52:19.471650 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:52:19 crc kubenswrapper[4813]: E0219 20:52:19.472732 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:52:32 crc kubenswrapper[4813]: I0219 20:52:32.472133 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:52:32 crc kubenswrapper[4813]: E0219 20:52:32.473197 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:52:43 crc kubenswrapper[4813]: I0219 20:52:43.472348 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:52:43 crc kubenswrapper[4813]: E0219 20:52:43.473871 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:52:57 crc kubenswrapper[4813]: I0219 20:52:57.471938 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:52:57 crc kubenswrapper[4813]: E0219 20:52:57.473362 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:53:09 crc kubenswrapper[4813]: I0219 20:53:09.472666 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:53:09 crc kubenswrapper[4813]: E0219 20:53:09.473734 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:53:22 crc kubenswrapper[4813]: I0219 20:53:22.472108 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:53:22 crc kubenswrapper[4813]: E0219 20:53:22.472846 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:53:33 crc kubenswrapper[4813]: I0219 20:53:33.472219 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:53:33 crc kubenswrapper[4813]: E0219 20:53:33.472944 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:53:37 crc kubenswrapper[4813]: I0219 20:53:37.308060 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qw5n2"] Feb 19 20:53:37 crc kubenswrapper[4813]: I0219 20:53:37.313137 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:37 crc kubenswrapper[4813]: I0219 20:53:37.325023 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qw5n2"] Feb 19 20:53:37 crc kubenswrapper[4813]: I0219 20:53:37.406996 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e39069-71c3-4962-bd7e-312097de1646-utilities\") pod \"certified-operators-qw5n2\" (UID: \"83e39069-71c3-4962-bd7e-312097de1646\") " pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:37 crc kubenswrapper[4813]: I0219 20:53:37.407193 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7vsz\" (UniqueName: \"kubernetes.io/projected/83e39069-71c3-4962-bd7e-312097de1646-kube-api-access-g7vsz\") pod \"certified-operators-qw5n2\" (UID: \"83e39069-71c3-4962-bd7e-312097de1646\") " pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:37 crc kubenswrapper[4813]: I0219 20:53:37.407217 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e39069-71c3-4962-bd7e-312097de1646-catalog-content\") pod \"certified-operators-qw5n2\" (UID: \"83e39069-71c3-4962-bd7e-312097de1646\") " pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:37 crc kubenswrapper[4813]: I0219 20:53:37.508979 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e39069-71c3-4962-bd7e-312097de1646-utilities\") pod \"certified-operators-qw5n2\" (UID: \"83e39069-71c3-4962-bd7e-312097de1646\") " pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:37 crc kubenswrapper[4813]: I0219 20:53:37.509265 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g7vsz\" (UniqueName: \"kubernetes.io/projected/83e39069-71c3-4962-bd7e-312097de1646-kube-api-access-g7vsz\") pod \"certified-operators-qw5n2\" (UID: \"83e39069-71c3-4962-bd7e-312097de1646\") " pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:37 crc kubenswrapper[4813]: I0219 20:53:37.509286 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e39069-71c3-4962-bd7e-312097de1646-catalog-content\") pod \"certified-operators-qw5n2\" (UID: \"83e39069-71c3-4962-bd7e-312097de1646\") " pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:37 crc kubenswrapper[4813]: I0219 20:53:37.509403 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e39069-71c3-4962-bd7e-312097de1646-utilities\") pod \"certified-operators-qw5n2\" (UID: \"83e39069-71c3-4962-bd7e-312097de1646\") " pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:37 crc kubenswrapper[4813]: I0219 20:53:37.509648 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e39069-71c3-4962-bd7e-312097de1646-catalog-content\") pod \"certified-operators-qw5n2\" (UID: \"83e39069-71c3-4962-bd7e-312097de1646\") " pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:37 crc kubenswrapper[4813]: I0219 20:53:37.533335 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7vsz\" (UniqueName: \"kubernetes.io/projected/83e39069-71c3-4962-bd7e-312097de1646-kube-api-access-g7vsz\") pod \"certified-operators-qw5n2\" (UID: \"83e39069-71c3-4962-bd7e-312097de1646\") " pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:37 crc kubenswrapper[4813]: I0219 20:53:37.637937 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:38 crc kubenswrapper[4813]: I0219 20:53:38.216452 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qw5n2"] Feb 19 20:53:38 crc kubenswrapper[4813]: I0219 20:53:38.968709 4813 generic.go:334] "Generic (PLEG): container finished" podID="83e39069-71c3-4962-bd7e-312097de1646" containerID="abeae3d49bc6cec0ac3f761c6b4abe8697b0ab6801360ff778d2771c03502b9a" exitCode=0 Feb 19 20:53:38 crc kubenswrapper[4813]: I0219 20:53:38.968798 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qw5n2" event={"ID":"83e39069-71c3-4962-bd7e-312097de1646","Type":"ContainerDied","Data":"abeae3d49bc6cec0ac3f761c6b4abe8697b0ab6801360ff778d2771c03502b9a"} Feb 19 20:53:38 crc kubenswrapper[4813]: I0219 20:53:38.969009 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qw5n2" event={"ID":"83e39069-71c3-4962-bd7e-312097de1646","Type":"ContainerStarted","Data":"b4277125e07b60c542512cc08089e3659837215d128466e8eabceba4b1c61c4c"} Feb 19 20:53:38 crc kubenswrapper[4813]: I0219 20:53:38.970422 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:53:40 crc kubenswrapper[4813]: I0219 20:53:40.995418 4813 generic.go:334] "Generic (PLEG): container finished" podID="83e39069-71c3-4962-bd7e-312097de1646" containerID="f7e9722130cc9b17be64533287932d711021ed91d4c3b0becae9c1090c4445fe" exitCode=0 Feb 19 20:53:40 crc kubenswrapper[4813]: I0219 20:53:40.995478 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qw5n2" event={"ID":"83e39069-71c3-4962-bd7e-312097de1646","Type":"ContainerDied","Data":"f7e9722130cc9b17be64533287932d711021ed91d4c3b0becae9c1090c4445fe"} Feb 19 20:53:42 crc kubenswrapper[4813]: I0219 20:53:42.007536 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-qw5n2" event={"ID":"83e39069-71c3-4962-bd7e-312097de1646","Type":"ContainerStarted","Data":"913a585032b147c3c542756aa59d15ed120b20c8e387c5404feb6b17b4800e99"} Feb 19 20:53:42 crc kubenswrapper[4813]: I0219 20:53:42.038711 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qw5n2" podStartSLOduration=2.597587168 podStartE2EDuration="5.038692755s" podCreationTimestamp="2026-02-19 20:53:37 +0000 UTC" firstStartedPulling="2026-02-19 20:53:38.970208976 +0000 UTC m=+8638.195649517" lastFinishedPulling="2026-02-19 20:53:41.411314523 +0000 UTC m=+8640.636755104" observedRunningTime="2026-02-19 20:53:42.02958823 +0000 UTC m=+8641.255028771" watchObservedRunningTime="2026-02-19 20:53:42.038692755 +0000 UTC m=+8641.264133296" Feb 19 20:53:46 crc kubenswrapper[4813]: I0219 20:53:46.472609 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:53:46 crc kubenswrapper[4813]: E0219 20:53:46.473876 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:53:47 crc kubenswrapper[4813]: I0219 20:53:47.638842 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:47 crc kubenswrapper[4813]: I0219 20:53:47.641474 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:47 crc kubenswrapper[4813]: I0219 20:53:47.705196 4813 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:48 crc kubenswrapper[4813]: I0219 20:53:48.087704 4813 generic.go:334] "Generic (PLEG): container finished" podID="f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" containerID="67fd85ec276018c89a6f118893722acae6406a9291826dcd951650f8c1713a74" exitCode=0 Feb 19 20:53:48 crc kubenswrapper[4813]: I0219 20:53:48.087791 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" event={"ID":"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6","Type":"ContainerDied","Data":"67fd85ec276018c89a6f118893722acae6406a9291826dcd951650f8c1713a74"} Feb 19 20:53:48 crc kubenswrapper[4813]: I0219 20:53:48.179542 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:48 crc kubenswrapper[4813]: I0219 20:53:48.238594 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qw5n2"] Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.652712 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.734928 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-0\") pod \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.735002 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-migration-ssh-key-1\") pod \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.735048 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-combined-ca-bundle\") pod \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.735090 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-migration-ssh-key-0\") pod \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.735121 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28lj4\" (UniqueName: \"kubernetes.io/projected/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-kube-api-access-28lj4\") pod \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " Feb 19 20:53:49 crc 
kubenswrapper[4813]: I0219 20:53:49.735171 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cells-global-config-1\") pod \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.735227 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-inventory\") pod \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.735286 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-3\") pod \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.735304 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-1\") pod \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.735360 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-ssh-key-openstack-cell1\") pod \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.735413 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" 
(UniqueName: \"kubernetes.io/configmap/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cells-global-config-0\") pod \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.735475 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-2\") pod \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.735495 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-ceph\") pod \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\" (UID: \"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6\") " Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.740829 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-kube-api-access-28lj4" (OuterVolumeSpecName: "kube-api-access-28lj4") pod "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" (UID: "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6"). InnerVolumeSpecName "kube-api-access-28lj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.755539 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-ceph" (OuterVolumeSpecName: "ceph") pod "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" (UID: "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.757901 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" (UID: "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.762607 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" (UID: "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.763492 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" (UID: "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.769824 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" (UID: "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.776936 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" (UID: "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.786077 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" (UID: "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.789306 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" (UID: "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.790392 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-inventory" (OuterVolumeSpecName: "inventory") pod "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" (UID: "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.796710 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" (UID: "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.798846 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" (UID: "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.801324 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" (UID: "f853aaa8-39d2-4a98-b8d1-9fb7712d89a6"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.838773 4813 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.838804 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28lj4\" (UniqueName: \"kubernetes.io/projected/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-kube-api-access-28lj4\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.838814 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.838826 4813 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.838834 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.838843 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.838851 4813 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-ssh-key-openstack-cell1\") 
on node \"crc\" DevicePath \"\"" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.838861 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.838870 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.838878 4813 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-ceph\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.838900 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.838908 4813 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:49 crc kubenswrapper[4813]: I0219 20:53:49.838919 4813 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f853aaa8-39d2-4a98-b8d1-9fb7712d89a6-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:50 crc kubenswrapper[4813]: I0219 20:53:50.109263 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" Feb 19 20:53:50 crc kubenswrapper[4813]: I0219 20:53:50.109323 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c" event={"ID":"f853aaa8-39d2-4a98-b8d1-9fb7712d89a6","Type":"ContainerDied","Data":"20929b9d46beed745e049e248646f9c5db33a60d98da16d30da3181c60801823"} Feb 19 20:53:50 crc kubenswrapper[4813]: I0219 20:53:50.109348 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20929b9d46beed745e049e248646f9c5db33a60d98da16d30da3181c60801823" Feb 19 20:53:50 crc kubenswrapper[4813]: I0219 20:53:50.109387 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qw5n2" podUID="83e39069-71c3-4962-bd7e-312097de1646" containerName="registry-server" containerID="cri-o://913a585032b147c3c542756aa59d15ed120b20c8e387c5404feb6b17b4800e99" gracePeriod=2 Feb 19 20:53:50 crc kubenswrapper[4813]: I0219 20:53:50.503322 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:50 crc kubenswrapper[4813]: I0219 20:53:50.550844 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e39069-71c3-4962-bd7e-312097de1646-utilities\") pod \"83e39069-71c3-4962-bd7e-312097de1646\" (UID: \"83e39069-71c3-4962-bd7e-312097de1646\") " Feb 19 20:53:50 crc kubenswrapper[4813]: I0219 20:53:50.551393 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7vsz\" (UniqueName: \"kubernetes.io/projected/83e39069-71c3-4962-bd7e-312097de1646-kube-api-access-g7vsz\") pod \"83e39069-71c3-4962-bd7e-312097de1646\" (UID: \"83e39069-71c3-4962-bd7e-312097de1646\") " Feb 19 20:53:50 crc kubenswrapper[4813]: I0219 20:53:50.551474 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e39069-71c3-4962-bd7e-312097de1646-catalog-content\") pod \"83e39069-71c3-4962-bd7e-312097de1646\" (UID: \"83e39069-71c3-4962-bd7e-312097de1646\") " Feb 19 20:53:50 crc kubenswrapper[4813]: I0219 20:53:50.553168 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83e39069-71c3-4962-bd7e-312097de1646-utilities" (OuterVolumeSpecName: "utilities") pod "83e39069-71c3-4962-bd7e-312097de1646" (UID: "83e39069-71c3-4962-bd7e-312097de1646"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:53:50 crc kubenswrapper[4813]: I0219 20:53:50.556858 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e39069-71c3-4962-bd7e-312097de1646-kube-api-access-g7vsz" (OuterVolumeSpecName: "kube-api-access-g7vsz") pod "83e39069-71c3-4962-bd7e-312097de1646" (UID: "83e39069-71c3-4962-bd7e-312097de1646"). InnerVolumeSpecName "kube-api-access-g7vsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:53:50 crc kubenswrapper[4813]: I0219 20:53:50.625258 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83e39069-71c3-4962-bd7e-312097de1646-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83e39069-71c3-4962-bd7e-312097de1646" (UID: "83e39069-71c3-4962-bd7e-312097de1646"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:53:50 crc kubenswrapper[4813]: I0219 20:53:50.654685 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e39069-71c3-4962-bd7e-312097de1646-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:50 crc kubenswrapper[4813]: I0219 20:53:50.654720 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7vsz\" (UniqueName: \"kubernetes.io/projected/83e39069-71c3-4962-bd7e-312097de1646-kube-api-access-g7vsz\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:50 crc kubenswrapper[4813]: I0219 20:53:50.654732 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e39069-71c3-4962-bd7e-312097de1646-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.130822 4813 generic.go:334] "Generic (PLEG): container finished" podID="83e39069-71c3-4962-bd7e-312097de1646" containerID="913a585032b147c3c542756aa59d15ed120b20c8e387c5404feb6b17b4800e99" exitCode=0 Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.130936 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qw5n2" Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.130927 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qw5n2" event={"ID":"83e39069-71c3-4962-bd7e-312097de1646","Type":"ContainerDied","Data":"913a585032b147c3c542756aa59d15ed120b20c8e387c5404feb6b17b4800e99"} Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.131592 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qw5n2" event={"ID":"83e39069-71c3-4962-bd7e-312097de1646","Type":"ContainerDied","Data":"b4277125e07b60c542512cc08089e3659837215d128466e8eabceba4b1c61c4c"} Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.131650 4813 scope.go:117] "RemoveContainer" containerID="913a585032b147c3c542756aa59d15ed120b20c8e387c5404feb6b17b4800e99" Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.159752 4813 scope.go:117] "RemoveContainer" containerID="f7e9722130cc9b17be64533287932d711021ed91d4c3b0becae9c1090c4445fe" Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.192572 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qw5n2"] Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.207674 4813 scope.go:117] "RemoveContainer" containerID="abeae3d49bc6cec0ac3f761c6b4abe8697b0ab6801360ff778d2771c03502b9a" Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.208936 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qw5n2"] Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.244053 4813 scope.go:117] "RemoveContainer" containerID="913a585032b147c3c542756aa59d15ed120b20c8e387c5404feb6b17b4800e99" Feb 19 20:53:51 crc kubenswrapper[4813]: E0219 20:53:51.244661 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"913a585032b147c3c542756aa59d15ed120b20c8e387c5404feb6b17b4800e99\": container with ID starting with 913a585032b147c3c542756aa59d15ed120b20c8e387c5404feb6b17b4800e99 not found: ID does not exist" containerID="913a585032b147c3c542756aa59d15ed120b20c8e387c5404feb6b17b4800e99" Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.244708 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"913a585032b147c3c542756aa59d15ed120b20c8e387c5404feb6b17b4800e99"} err="failed to get container status \"913a585032b147c3c542756aa59d15ed120b20c8e387c5404feb6b17b4800e99\": rpc error: code = NotFound desc = could not find container \"913a585032b147c3c542756aa59d15ed120b20c8e387c5404feb6b17b4800e99\": container with ID starting with 913a585032b147c3c542756aa59d15ed120b20c8e387c5404feb6b17b4800e99 not found: ID does not exist" Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.244740 4813 scope.go:117] "RemoveContainer" containerID="f7e9722130cc9b17be64533287932d711021ed91d4c3b0becae9c1090c4445fe" Feb 19 20:53:51 crc kubenswrapper[4813]: E0219 20:53:51.245224 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e9722130cc9b17be64533287932d711021ed91d4c3b0becae9c1090c4445fe\": container with ID starting with f7e9722130cc9b17be64533287932d711021ed91d4c3b0becae9c1090c4445fe not found: ID does not exist" containerID="f7e9722130cc9b17be64533287932d711021ed91d4c3b0becae9c1090c4445fe" Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.245259 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e9722130cc9b17be64533287932d711021ed91d4c3b0becae9c1090c4445fe"} err="failed to get container status \"f7e9722130cc9b17be64533287932d711021ed91d4c3b0becae9c1090c4445fe\": rpc error: code = NotFound desc = could not find container \"f7e9722130cc9b17be64533287932d711021ed91d4c3b0becae9c1090c4445fe\": container with ID 
starting with f7e9722130cc9b17be64533287932d711021ed91d4c3b0becae9c1090c4445fe not found: ID does not exist" Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.245276 4813 scope.go:117] "RemoveContainer" containerID="abeae3d49bc6cec0ac3f761c6b4abe8697b0ab6801360ff778d2771c03502b9a" Feb 19 20:53:51 crc kubenswrapper[4813]: E0219 20:53:51.245654 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abeae3d49bc6cec0ac3f761c6b4abe8697b0ab6801360ff778d2771c03502b9a\": container with ID starting with abeae3d49bc6cec0ac3f761c6b4abe8697b0ab6801360ff778d2771c03502b9a not found: ID does not exist" containerID="abeae3d49bc6cec0ac3f761c6b4abe8697b0ab6801360ff778d2771c03502b9a" Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.245685 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abeae3d49bc6cec0ac3f761c6b4abe8697b0ab6801360ff778d2771c03502b9a"} err="failed to get container status \"abeae3d49bc6cec0ac3f761c6b4abe8697b0ab6801360ff778d2771c03502b9a\": rpc error: code = NotFound desc = could not find container \"abeae3d49bc6cec0ac3f761c6b4abe8697b0ab6801360ff778d2771c03502b9a\": container with ID starting with abeae3d49bc6cec0ac3f761c6b4abe8697b0ab6801360ff778d2771c03502b9a not found: ID does not exist" Feb 19 20:53:51 crc kubenswrapper[4813]: I0219 20:53:51.566897 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e39069-71c3-4962-bd7e-312097de1646" path="/var/lib/kubelet/pods/83e39069-71c3-4962-bd7e-312097de1646/volumes" Feb 19 20:54:01 crc kubenswrapper[4813]: I0219 20:54:01.471773 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:54:01 crc kubenswrapper[4813]: E0219 20:54:01.472990 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:54:14 crc kubenswrapper[4813]: I0219 20:54:14.471568 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:54:14 crc kubenswrapper[4813]: E0219 20:54:14.472380 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:54:27 crc kubenswrapper[4813]: I0219 20:54:27.471928 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:54:27 crc kubenswrapper[4813]: E0219 20:54:27.472636 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:54:38 crc kubenswrapper[4813]: I0219 20:54:38.472075 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:54:38 crc kubenswrapper[4813]: E0219 20:54:38.474441 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:54:50 crc kubenswrapper[4813]: I0219 20:54:50.471924 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:54:50 crc kubenswrapper[4813]: E0219 20:54:50.472677 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:55:04 crc kubenswrapper[4813]: I0219 20:55:04.472455 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:55:04 crc kubenswrapper[4813]: E0219 20:55:04.473563 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:55:19 crc kubenswrapper[4813]: I0219 20:55:19.472928 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:55:19 crc kubenswrapper[4813]: E0219 20:55:19.474067 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:55:30 crc kubenswrapper[4813]: I0219 20:55:30.471247 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:55:30 crc kubenswrapper[4813]: E0219 20:55:30.472101 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:55:45 crc kubenswrapper[4813]: I0219 20:55:45.476813 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:55:45 crc kubenswrapper[4813]: E0219 20:55:45.477591 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:55:57 crc kubenswrapper[4813]: I0219 20:55:57.808472 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7lwgm"] Feb 19 20:55:57 crc kubenswrapper[4813]: E0219 20:55:57.809577 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" 
containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Feb 19 20:55:57 crc kubenswrapper[4813]: I0219 20:55:57.809591 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Feb 19 20:55:57 crc kubenswrapper[4813]: E0219 20:55:57.809610 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e39069-71c3-4962-bd7e-312097de1646" containerName="extract-utilities" Feb 19 20:55:57 crc kubenswrapper[4813]: I0219 20:55:57.809617 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e39069-71c3-4962-bd7e-312097de1646" containerName="extract-utilities" Feb 19 20:55:57 crc kubenswrapper[4813]: E0219 20:55:57.809640 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e39069-71c3-4962-bd7e-312097de1646" containerName="extract-content" Feb 19 20:55:57 crc kubenswrapper[4813]: I0219 20:55:57.809645 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e39069-71c3-4962-bd7e-312097de1646" containerName="extract-content" Feb 19 20:55:57 crc kubenswrapper[4813]: E0219 20:55:57.809657 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e39069-71c3-4962-bd7e-312097de1646" containerName="registry-server" Feb 19 20:55:57 crc kubenswrapper[4813]: I0219 20:55:57.809662 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e39069-71c3-4962-bd7e-312097de1646" containerName="registry-server" Feb 19 20:55:57 crc kubenswrapper[4813]: I0219 20:55:57.809856 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f853aaa8-39d2-4a98-b8d1-9fb7712d89a6" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Feb 19 20:55:57 crc kubenswrapper[4813]: I0219 20:55:57.809867 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e39069-71c3-4962-bd7e-312097de1646" containerName="registry-server" Feb 19 20:55:57 crc kubenswrapper[4813]: 
I0219 20:55:57.813401 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:55:57 crc kubenswrapper[4813]: I0219 20:55:57.824976 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7lwgm"] Feb 19 20:55:57 crc kubenswrapper[4813]: I0219 20:55:57.920791 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e34742-e747-4890-abb7-92186f6447fb-catalog-content\") pod \"redhat-operators-7lwgm\" (UID: \"b1e34742-e747-4890-abb7-92186f6447fb\") " pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:55:57 crc kubenswrapper[4813]: I0219 20:55:57.920943 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvdx5\" (UniqueName: \"kubernetes.io/projected/b1e34742-e747-4890-abb7-92186f6447fb-kube-api-access-nvdx5\") pod \"redhat-operators-7lwgm\" (UID: \"b1e34742-e747-4890-abb7-92186f6447fb\") " pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:55:57 crc kubenswrapper[4813]: I0219 20:55:57.920990 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e34742-e747-4890-abb7-92186f6447fb-utilities\") pod \"redhat-operators-7lwgm\" (UID: \"b1e34742-e747-4890-abb7-92186f6447fb\") " pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:55:58 crc kubenswrapper[4813]: I0219 20:55:58.023271 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e34742-e747-4890-abb7-92186f6447fb-catalog-content\") pod \"redhat-operators-7lwgm\" (UID: \"b1e34742-e747-4890-abb7-92186f6447fb\") " pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:55:58 crc kubenswrapper[4813]: I0219 
20:55:58.023751 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvdx5\" (UniqueName: \"kubernetes.io/projected/b1e34742-e747-4890-abb7-92186f6447fb-kube-api-access-nvdx5\") pod \"redhat-operators-7lwgm\" (UID: \"b1e34742-e747-4890-abb7-92186f6447fb\") " pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:55:58 crc kubenswrapper[4813]: I0219 20:55:58.023766 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e34742-e747-4890-abb7-92186f6447fb-catalog-content\") pod \"redhat-operators-7lwgm\" (UID: \"b1e34742-e747-4890-abb7-92186f6447fb\") " pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:55:58 crc kubenswrapper[4813]: I0219 20:55:58.023780 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e34742-e747-4890-abb7-92186f6447fb-utilities\") pod \"redhat-operators-7lwgm\" (UID: \"b1e34742-e747-4890-abb7-92186f6447fb\") " pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:55:58 crc kubenswrapper[4813]: I0219 20:55:58.024178 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e34742-e747-4890-abb7-92186f6447fb-utilities\") pod \"redhat-operators-7lwgm\" (UID: \"b1e34742-e747-4890-abb7-92186f6447fb\") " pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:55:58 crc kubenswrapper[4813]: I0219 20:55:58.044292 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvdx5\" (UniqueName: \"kubernetes.io/projected/b1e34742-e747-4890-abb7-92186f6447fb-kube-api-access-nvdx5\") pod \"redhat-operators-7lwgm\" (UID: \"b1e34742-e747-4890-abb7-92186f6447fb\") " pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:55:58 crc kubenswrapper[4813]: I0219 20:55:58.129303 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:55:58 crc kubenswrapper[4813]: I0219 20:55:58.471486 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:55:58 crc kubenswrapper[4813]: E0219 20:55:58.472181 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 20:55:58 crc kubenswrapper[4813]: I0219 20:55:58.580676 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7lwgm"] Feb 19 20:55:58 crc kubenswrapper[4813]: I0219 20:55:58.667286 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lwgm" event={"ID":"b1e34742-e747-4890-abb7-92186f6447fb","Type":"ContainerStarted","Data":"0c418a6651a37750dd12a1eeaf634b0811b7a03d4d0b8dc83baa51540ca61adf"} Feb 19 20:55:59 crc kubenswrapper[4813]: I0219 20:55:59.680746 4813 generic.go:334] "Generic (PLEG): container finished" podID="b1e34742-e747-4890-abb7-92186f6447fb" containerID="71a20ebd6874f1bdc9daf6147a02941aacea94101be1cfd4b4decfecfa300eb1" exitCode=0 Feb 19 20:55:59 crc kubenswrapper[4813]: I0219 20:55:59.681026 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lwgm" event={"ID":"b1e34742-e747-4890-abb7-92186f6447fb","Type":"ContainerDied","Data":"71a20ebd6874f1bdc9daf6147a02941aacea94101be1cfd4b4decfecfa300eb1"} Feb 19 20:56:01 crc kubenswrapper[4813]: I0219 20:56:01.721897 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lwgm" 
event={"ID":"b1e34742-e747-4890-abb7-92186f6447fb","Type":"ContainerStarted","Data":"4e252aa14ef5d8733305e2f58ddf2130a258cfc8dc0fb5b1426def8002d13552"} Feb 19 20:56:02 crc kubenswrapper[4813]: I0219 20:56:02.748026 4813 generic.go:334] "Generic (PLEG): container finished" podID="b1e34742-e747-4890-abb7-92186f6447fb" containerID="4e252aa14ef5d8733305e2f58ddf2130a258cfc8dc0fb5b1426def8002d13552" exitCode=0 Feb 19 20:56:02 crc kubenswrapper[4813]: I0219 20:56:02.748169 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lwgm" event={"ID":"b1e34742-e747-4890-abb7-92186f6447fb","Type":"ContainerDied","Data":"4e252aa14ef5d8733305e2f58ddf2130a258cfc8dc0fb5b1426def8002d13552"} Feb 19 20:56:03 crc kubenswrapper[4813]: I0219 20:56:03.768056 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lwgm" event={"ID":"b1e34742-e747-4890-abb7-92186f6447fb","Type":"ContainerStarted","Data":"b87968e42b00736afa6ede83002a132842b118908efe55849ec519e0bd3b3bb2"} Feb 19 20:56:03 crc kubenswrapper[4813]: I0219 20:56:03.809867 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7lwgm" podStartSLOduration=3.987237649 podStartE2EDuration="6.809840235s" podCreationTimestamp="2026-02-19 20:55:57 +0000 UTC" firstStartedPulling="2026-02-19 20:55:59.683488358 +0000 UTC m=+8778.908928899" lastFinishedPulling="2026-02-19 20:56:02.506090904 +0000 UTC m=+8781.731531485" observedRunningTime="2026-02-19 20:56:03.794598724 +0000 UTC m=+8783.020039295" watchObservedRunningTime="2026-02-19 20:56:03.809840235 +0000 UTC m=+8783.035280806" Feb 19 20:56:08 crc kubenswrapper[4813]: I0219 20:56:08.129896 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:56:08 crc kubenswrapper[4813]: I0219 20:56:08.130478 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:56:09 crc kubenswrapper[4813]: I0219 20:56:09.209217 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7lwgm" podUID="b1e34742-e747-4890-abb7-92186f6447fb" containerName="registry-server" probeResult="failure" output=< Feb 19 20:56:09 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Feb 19 20:56:09 crc kubenswrapper[4813]: > Feb 19 20:56:12 crc kubenswrapper[4813]: I0219 20:56:12.472064 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:56:12 crc kubenswrapper[4813]: I0219 20:56:12.981330 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"643ad5a27899764fe60b8450a47154aec9c0cdecfadc2e3fdb35dc34121d971d"} Feb 19 20:56:18 crc kubenswrapper[4813]: I0219 20:56:18.206845 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:56:18 crc kubenswrapper[4813]: I0219 20:56:18.258936 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:56:18 crc kubenswrapper[4813]: I0219 20:56:18.446158 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7lwgm"] Feb 19 20:56:20 crc kubenswrapper[4813]: I0219 20:56:20.071549 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7lwgm" podUID="b1e34742-e747-4890-abb7-92186f6447fb" containerName="registry-server" containerID="cri-o://b87968e42b00736afa6ede83002a132842b118908efe55849ec519e0bd3b3bb2" gracePeriod=2 Feb 19 20:56:20 crc kubenswrapper[4813]: I0219 20:56:20.686643 4813 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:56:20 crc kubenswrapper[4813]: I0219 20:56:20.769865 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvdx5\" (UniqueName: \"kubernetes.io/projected/b1e34742-e747-4890-abb7-92186f6447fb-kube-api-access-nvdx5\") pod \"b1e34742-e747-4890-abb7-92186f6447fb\" (UID: \"b1e34742-e747-4890-abb7-92186f6447fb\") " Feb 19 20:56:20 crc kubenswrapper[4813]: I0219 20:56:20.770029 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e34742-e747-4890-abb7-92186f6447fb-utilities\") pod \"b1e34742-e747-4890-abb7-92186f6447fb\" (UID: \"b1e34742-e747-4890-abb7-92186f6447fb\") " Feb 19 20:56:20 crc kubenswrapper[4813]: I0219 20:56:20.770067 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e34742-e747-4890-abb7-92186f6447fb-catalog-content\") pod \"b1e34742-e747-4890-abb7-92186f6447fb\" (UID: \"b1e34742-e747-4890-abb7-92186f6447fb\") " Feb 19 20:56:20 crc kubenswrapper[4813]: I0219 20:56:20.771611 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1e34742-e747-4890-abb7-92186f6447fb-utilities" (OuterVolumeSpecName: "utilities") pod "b1e34742-e747-4890-abb7-92186f6447fb" (UID: "b1e34742-e747-4890-abb7-92186f6447fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:56:20 crc kubenswrapper[4813]: I0219 20:56:20.782108 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1e34742-e747-4890-abb7-92186f6447fb-kube-api-access-nvdx5" (OuterVolumeSpecName: "kube-api-access-nvdx5") pod "b1e34742-e747-4890-abb7-92186f6447fb" (UID: "b1e34742-e747-4890-abb7-92186f6447fb"). 
InnerVolumeSpecName "kube-api-access-nvdx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:56:20 crc kubenswrapper[4813]: I0219 20:56:20.878408 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvdx5\" (UniqueName: \"kubernetes.io/projected/b1e34742-e747-4890-abb7-92186f6447fb-kube-api-access-nvdx5\") on node \"crc\" DevicePath \"\"" Feb 19 20:56:20 crc kubenswrapper[4813]: I0219 20:56:20.878453 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e34742-e747-4890-abb7-92186f6447fb-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:56:20 crc kubenswrapper[4813]: I0219 20:56:20.929241 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1e34742-e747-4890-abb7-92186f6447fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1e34742-e747-4890-abb7-92186f6447fb" (UID: "b1e34742-e747-4890-abb7-92186f6447fb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:56:20 crc kubenswrapper[4813]: I0219 20:56:20.981085 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e34742-e747-4890-abb7-92186f6447fb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.088890 4813 generic.go:334] "Generic (PLEG): container finished" podID="b1e34742-e747-4890-abb7-92186f6447fb" containerID="b87968e42b00736afa6ede83002a132842b118908efe55849ec519e0bd3b3bb2" exitCode=0 Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.089028 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7lwgm" Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.089024 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lwgm" event={"ID":"b1e34742-e747-4890-abb7-92186f6447fb","Type":"ContainerDied","Data":"b87968e42b00736afa6ede83002a132842b118908efe55849ec519e0bd3b3bb2"} Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.089597 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lwgm" event={"ID":"b1e34742-e747-4890-abb7-92186f6447fb","Type":"ContainerDied","Data":"0c418a6651a37750dd12a1eeaf634b0811b7a03d4d0b8dc83baa51540ca61adf"} Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.089630 4813 scope.go:117] "RemoveContainer" containerID="b87968e42b00736afa6ede83002a132842b118908efe55849ec519e0bd3b3bb2" Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.130217 4813 scope.go:117] "RemoveContainer" containerID="4e252aa14ef5d8733305e2f58ddf2130a258cfc8dc0fb5b1426def8002d13552" Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.136730 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7lwgm"] Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.149006 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7lwgm"] Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.167070 4813 scope.go:117] "RemoveContainer" containerID="71a20ebd6874f1bdc9daf6147a02941aacea94101be1cfd4b4decfecfa300eb1" Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.216776 4813 scope.go:117] "RemoveContainer" containerID="b87968e42b00736afa6ede83002a132842b118908efe55849ec519e0bd3b3bb2" Feb 19 20:56:21 crc kubenswrapper[4813]: E0219 20:56:21.217312 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b87968e42b00736afa6ede83002a132842b118908efe55849ec519e0bd3b3bb2\": container with ID starting with b87968e42b00736afa6ede83002a132842b118908efe55849ec519e0bd3b3bb2 not found: ID does not exist" containerID="b87968e42b00736afa6ede83002a132842b118908efe55849ec519e0bd3b3bb2" Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.217374 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b87968e42b00736afa6ede83002a132842b118908efe55849ec519e0bd3b3bb2"} err="failed to get container status \"b87968e42b00736afa6ede83002a132842b118908efe55849ec519e0bd3b3bb2\": rpc error: code = NotFound desc = could not find container \"b87968e42b00736afa6ede83002a132842b118908efe55849ec519e0bd3b3bb2\": container with ID starting with b87968e42b00736afa6ede83002a132842b118908efe55849ec519e0bd3b3bb2 not found: ID does not exist" Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.217407 4813 scope.go:117] "RemoveContainer" containerID="4e252aa14ef5d8733305e2f58ddf2130a258cfc8dc0fb5b1426def8002d13552" Feb 19 20:56:21 crc kubenswrapper[4813]: E0219 20:56:21.217763 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e252aa14ef5d8733305e2f58ddf2130a258cfc8dc0fb5b1426def8002d13552\": container with ID starting with 4e252aa14ef5d8733305e2f58ddf2130a258cfc8dc0fb5b1426def8002d13552 not found: ID does not exist" containerID="4e252aa14ef5d8733305e2f58ddf2130a258cfc8dc0fb5b1426def8002d13552" Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.217802 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e252aa14ef5d8733305e2f58ddf2130a258cfc8dc0fb5b1426def8002d13552"} err="failed to get container status \"4e252aa14ef5d8733305e2f58ddf2130a258cfc8dc0fb5b1426def8002d13552\": rpc error: code = NotFound desc = could not find container \"4e252aa14ef5d8733305e2f58ddf2130a258cfc8dc0fb5b1426def8002d13552\": container with ID 
starting with 4e252aa14ef5d8733305e2f58ddf2130a258cfc8dc0fb5b1426def8002d13552 not found: ID does not exist" Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.217825 4813 scope.go:117] "RemoveContainer" containerID="71a20ebd6874f1bdc9daf6147a02941aacea94101be1cfd4b4decfecfa300eb1" Feb 19 20:56:21 crc kubenswrapper[4813]: E0219 20:56:21.218135 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71a20ebd6874f1bdc9daf6147a02941aacea94101be1cfd4b4decfecfa300eb1\": container with ID starting with 71a20ebd6874f1bdc9daf6147a02941aacea94101be1cfd4b4decfecfa300eb1 not found: ID does not exist" containerID="71a20ebd6874f1bdc9daf6147a02941aacea94101be1cfd4b4decfecfa300eb1" Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.218197 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71a20ebd6874f1bdc9daf6147a02941aacea94101be1cfd4b4decfecfa300eb1"} err="failed to get container status \"71a20ebd6874f1bdc9daf6147a02941aacea94101be1cfd4b4decfecfa300eb1\": rpc error: code = NotFound desc = could not find container \"71a20ebd6874f1bdc9daf6147a02941aacea94101be1cfd4b4decfecfa300eb1\": container with ID starting with 71a20ebd6874f1bdc9daf6147a02941aacea94101be1cfd4b4decfecfa300eb1 not found: ID does not exist" Feb 19 20:56:21 crc kubenswrapper[4813]: I0219 20:56:21.490924 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1e34742-e747-4890-abb7-92186f6447fb" path="/var/lib/kubelet/pods/b1e34742-e747-4890-abb7-92186f6447fb/volumes" Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.235253 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vbc5h"] Feb 19 20:56:26 crc kubenswrapper[4813]: E0219 20:56:26.236929 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e34742-e747-4890-abb7-92186f6447fb" containerName="registry-server" Feb 19 20:56:26 crc 
kubenswrapper[4813]: I0219 20:56:26.236996 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e34742-e747-4890-abb7-92186f6447fb" containerName="registry-server" Feb 19 20:56:26 crc kubenswrapper[4813]: E0219 20:56:26.237048 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e34742-e747-4890-abb7-92186f6447fb" containerName="extract-content" Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.237070 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e34742-e747-4890-abb7-92186f6447fb" containerName="extract-content" Feb 19 20:56:26 crc kubenswrapper[4813]: E0219 20:56:26.237158 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e34742-e747-4890-abb7-92186f6447fb" containerName="extract-utilities" Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.237178 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e34742-e747-4890-abb7-92186f6447fb" containerName="extract-utilities" Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.237650 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1e34742-e747-4890-abb7-92186f6447fb" containerName="registry-server" Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.241680 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.251503 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbc5h"] Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.412377 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-utilities\") pod \"community-operators-vbc5h\" (UID: \"81b8745e-b4f8-4ca7-b67f-0993a12b6a07\") " pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.412702 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-catalog-content\") pod \"community-operators-vbc5h\" (UID: \"81b8745e-b4f8-4ca7-b67f-0993a12b6a07\") " pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.412922 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv24d\" (UniqueName: \"kubernetes.io/projected/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-kube-api-access-bv24d\") pod \"community-operators-vbc5h\" (UID: \"81b8745e-b4f8-4ca7-b67f-0993a12b6a07\") " pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.515322 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-utilities\") pod \"community-operators-vbc5h\" (UID: \"81b8745e-b4f8-4ca7-b67f-0993a12b6a07\") " pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.515459 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-catalog-content\") pod \"community-operators-vbc5h\" (UID: \"81b8745e-b4f8-4ca7-b67f-0993a12b6a07\") " pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.515521 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv24d\" (UniqueName: \"kubernetes.io/projected/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-kube-api-access-bv24d\") pod \"community-operators-vbc5h\" (UID: \"81b8745e-b4f8-4ca7-b67f-0993a12b6a07\") " pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.516024 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-catalog-content\") pod \"community-operators-vbc5h\" (UID: \"81b8745e-b4f8-4ca7-b67f-0993a12b6a07\") " pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.516238 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-utilities\") pod \"community-operators-vbc5h\" (UID: \"81b8745e-b4f8-4ca7-b67f-0993a12b6a07\") " pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.533595 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv24d\" (UniqueName: \"kubernetes.io/projected/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-kube-api-access-bv24d\") pod \"community-operators-vbc5h\" (UID: \"81b8745e-b4f8-4ca7-b67f-0993a12b6a07\") " pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:26 crc kubenswrapper[4813]: I0219 20:56:26.567983 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:27 crc kubenswrapper[4813]: W0219 20:56:27.062046 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81b8745e_b4f8_4ca7_b67f_0993a12b6a07.slice/crio-c001384665716d1bceda1d8f47e6dd535f5bfe5a4708ffe3ee5704872e9876d8 WatchSource:0}: Error finding container c001384665716d1bceda1d8f47e6dd535f5bfe5a4708ffe3ee5704872e9876d8: Status 404 returned error can't find the container with id c001384665716d1bceda1d8f47e6dd535f5bfe5a4708ffe3ee5704872e9876d8 Feb 19 20:56:27 crc kubenswrapper[4813]: I0219 20:56:27.063978 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbc5h"] Feb 19 20:56:27 crc kubenswrapper[4813]: I0219 20:56:27.162148 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbc5h" event={"ID":"81b8745e-b4f8-4ca7-b67f-0993a12b6a07","Type":"ContainerStarted","Data":"c001384665716d1bceda1d8f47e6dd535f5bfe5a4708ffe3ee5704872e9876d8"} Feb 19 20:56:28 crc kubenswrapper[4813]: I0219 20:56:28.177421 4813 generic.go:334] "Generic (PLEG): container finished" podID="81b8745e-b4f8-4ca7-b67f-0993a12b6a07" containerID="3587254eab1a9b3c9b9c6904e7c4c08cb5d6ad982ab6b09b5a67c079c5e3a2a4" exitCode=0 Feb 19 20:56:28 crc kubenswrapper[4813]: I0219 20:56:28.177689 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbc5h" event={"ID":"81b8745e-b4f8-4ca7-b67f-0993a12b6a07","Type":"ContainerDied","Data":"3587254eab1a9b3c9b9c6904e7c4c08cb5d6ad982ab6b09b5a67c079c5e3a2a4"} Feb 19 20:56:29 crc kubenswrapper[4813]: I0219 20:56:29.192801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbc5h" 
event={"ID":"81b8745e-b4f8-4ca7-b67f-0993a12b6a07","Type":"ContainerStarted","Data":"9333a1e2e58049edc23f99973944c1882c5b700d8131cb0c0c3850f17eca3613"} Feb 19 20:56:31 crc kubenswrapper[4813]: I0219 20:56:31.218500 4813 generic.go:334] "Generic (PLEG): container finished" podID="81b8745e-b4f8-4ca7-b67f-0993a12b6a07" containerID="9333a1e2e58049edc23f99973944c1882c5b700d8131cb0c0c3850f17eca3613" exitCode=0 Feb 19 20:56:31 crc kubenswrapper[4813]: I0219 20:56:31.218587 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbc5h" event={"ID":"81b8745e-b4f8-4ca7-b67f-0993a12b6a07","Type":"ContainerDied","Data":"9333a1e2e58049edc23f99973944c1882c5b700d8131cb0c0c3850f17eca3613"} Feb 19 20:56:32 crc kubenswrapper[4813]: I0219 20:56:32.232597 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbc5h" event={"ID":"81b8745e-b4f8-4ca7-b67f-0993a12b6a07","Type":"ContainerStarted","Data":"1b5d4d87e3f0b9d53435efd3891788a750080c724f73aa4d4267fece65d7ce6d"} Feb 19 20:56:32 crc kubenswrapper[4813]: I0219 20:56:32.274074 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vbc5h" podStartSLOduration=2.753764447 podStartE2EDuration="6.27405223s" podCreationTimestamp="2026-02-19 20:56:26 +0000 UTC" firstStartedPulling="2026-02-19 20:56:28.180379144 +0000 UTC m=+8807.405819695" lastFinishedPulling="2026-02-19 20:56:31.700666937 +0000 UTC m=+8810.926107478" observedRunningTime="2026-02-19 20:56:32.257735076 +0000 UTC m=+8811.483175627" watchObservedRunningTime="2026-02-19 20:56:32.27405223 +0000 UTC m=+8811.499492781" Feb 19 20:56:35 crc kubenswrapper[4813]: I0219 20:56:35.951311 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 20:56:35 crc kubenswrapper[4813]: I0219 20:56:35.952221 4813 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/mariadb-copy-data" podUID="fa05940d-c476-4e31-adce-a834c33da6de" containerName="adoption" containerID="cri-o://77f43f65aacf0bda469e9fd3b3b4433ac810fd81b47e9dbe6274e2a56f94842e" gracePeriod=30 Feb 19 20:56:36 crc kubenswrapper[4813]: I0219 20:56:36.569744 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:36 crc kubenswrapper[4813]: I0219 20:56:36.569812 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:36 crc kubenswrapper[4813]: I0219 20:56:36.648333 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:37 crc kubenswrapper[4813]: I0219 20:56:37.382079 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:37 crc kubenswrapper[4813]: I0219 20:56:37.454869 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbc5h"] Feb 19 20:56:39 crc kubenswrapper[4813]: I0219 20:56:39.313094 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vbc5h" podUID="81b8745e-b4f8-4ca7-b67f-0993a12b6a07" containerName="registry-server" containerID="cri-o://1b5d4d87e3f0b9d53435efd3891788a750080c724f73aa4d4267fece65d7ce6d" gracePeriod=2 Feb 19 20:56:39 crc kubenswrapper[4813]: I0219 20:56:39.826885 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:39 crc kubenswrapper[4813]: I0219 20:56:39.932641 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv24d\" (UniqueName: \"kubernetes.io/projected/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-kube-api-access-bv24d\") pod \"81b8745e-b4f8-4ca7-b67f-0993a12b6a07\" (UID: \"81b8745e-b4f8-4ca7-b67f-0993a12b6a07\") " Feb 19 20:56:39 crc kubenswrapper[4813]: I0219 20:56:39.932701 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-catalog-content\") pod \"81b8745e-b4f8-4ca7-b67f-0993a12b6a07\" (UID: \"81b8745e-b4f8-4ca7-b67f-0993a12b6a07\") " Feb 19 20:56:39 crc kubenswrapper[4813]: I0219 20:56:39.932861 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-utilities\") pod \"81b8745e-b4f8-4ca7-b67f-0993a12b6a07\" (UID: \"81b8745e-b4f8-4ca7-b67f-0993a12b6a07\") " Feb 19 20:56:39 crc kubenswrapper[4813]: I0219 20:56:39.933573 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-utilities" (OuterVolumeSpecName: "utilities") pod "81b8745e-b4f8-4ca7-b67f-0993a12b6a07" (UID: "81b8745e-b4f8-4ca7-b67f-0993a12b6a07"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:56:39 crc kubenswrapper[4813]: I0219 20:56:39.934419 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:56:39 crc kubenswrapper[4813]: I0219 20:56:39.940711 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-kube-api-access-bv24d" (OuterVolumeSpecName: "kube-api-access-bv24d") pod "81b8745e-b4f8-4ca7-b67f-0993a12b6a07" (UID: "81b8745e-b4f8-4ca7-b67f-0993a12b6a07"). InnerVolumeSpecName "kube-api-access-bv24d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:56:39 crc kubenswrapper[4813]: I0219 20:56:39.978818 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81b8745e-b4f8-4ca7-b67f-0993a12b6a07" (UID: "81b8745e-b4f8-4ca7-b67f-0993a12b6a07"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.036585 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv24d\" (UniqueName: \"kubernetes.io/projected/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-kube-api-access-bv24d\") on node \"crc\" DevicePath \"\"" Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.036617 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81b8745e-b4f8-4ca7-b67f-0993a12b6a07-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.328540 4813 generic.go:334] "Generic (PLEG): container finished" podID="81b8745e-b4f8-4ca7-b67f-0993a12b6a07" containerID="1b5d4d87e3f0b9d53435efd3891788a750080c724f73aa4d4267fece65d7ce6d" exitCode=0 Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.328575 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbc5h" event={"ID":"81b8745e-b4f8-4ca7-b67f-0993a12b6a07","Type":"ContainerDied","Data":"1b5d4d87e3f0b9d53435efd3891788a750080c724f73aa4d4267fece65d7ce6d"} Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.328622 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbc5h" event={"ID":"81b8745e-b4f8-4ca7-b67f-0993a12b6a07","Type":"ContainerDied","Data":"c001384665716d1bceda1d8f47e6dd535f5bfe5a4708ffe3ee5704872e9876d8"} Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.328645 4813 scope.go:117] "RemoveContainer" containerID="1b5d4d87e3f0b9d53435efd3891788a750080c724f73aa4d4267fece65d7ce6d" Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.328653 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vbc5h" Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.373299 4813 scope.go:117] "RemoveContainer" containerID="9333a1e2e58049edc23f99973944c1882c5b700d8131cb0c0c3850f17eca3613" Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.384925 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbc5h"] Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.399384 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vbc5h"] Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.399915 4813 scope.go:117] "RemoveContainer" containerID="3587254eab1a9b3c9b9c6904e7c4c08cb5d6ad982ab6b09b5a67c079c5e3a2a4" Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.469458 4813 scope.go:117] "RemoveContainer" containerID="1b5d4d87e3f0b9d53435efd3891788a750080c724f73aa4d4267fece65d7ce6d" Feb 19 20:56:40 crc kubenswrapper[4813]: E0219 20:56:40.470378 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b5d4d87e3f0b9d53435efd3891788a750080c724f73aa4d4267fece65d7ce6d\": container with ID starting with 1b5d4d87e3f0b9d53435efd3891788a750080c724f73aa4d4267fece65d7ce6d not found: ID does not exist" containerID="1b5d4d87e3f0b9d53435efd3891788a750080c724f73aa4d4267fece65d7ce6d" Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.470455 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5d4d87e3f0b9d53435efd3891788a750080c724f73aa4d4267fece65d7ce6d"} err="failed to get container status \"1b5d4d87e3f0b9d53435efd3891788a750080c724f73aa4d4267fece65d7ce6d\": rpc error: code = NotFound desc = could not find container \"1b5d4d87e3f0b9d53435efd3891788a750080c724f73aa4d4267fece65d7ce6d\": container with ID starting with 1b5d4d87e3f0b9d53435efd3891788a750080c724f73aa4d4267fece65d7ce6d not 
found: ID does not exist" Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.470504 4813 scope.go:117] "RemoveContainer" containerID="9333a1e2e58049edc23f99973944c1882c5b700d8131cb0c0c3850f17eca3613" Feb 19 20:56:40 crc kubenswrapper[4813]: E0219 20:56:40.471015 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9333a1e2e58049edc23f99973944c1882c5b700d8131cb0c0c3850f17eca3613\": container with ID starting with 9333a1e2e58049edc23f99973944c1882c5b700d8131cb0c0c3850f17eca3613 not found: ID does not exist" containerID="9333a1e2e58049edc23f99973944c1882c5b700d8131cb0c0c3850f17eca3613" Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.471065 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9333a1e2e58049edc23f99973944c1882c5b700d8131cb0c0c3850f17eca3613"} err="failed to get container status \"9333a1e2e58049edc23f99973944c1882c5b700d8131cb0c0c3850f17eca3613\": rpc error: code = NotFound desc = could not find container \"9333a1e2e58049edc23f99973944c1882c5b700d8131cb0c0c3850f17eca3613\": container with ID starting with 9333a1e2e58049edc23f99973944c1882c5b700d8131cb0c0c3850f17eca3613 not found: ID does not exist" Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.471102 4813 scope.go:117] "RemoveContainer" containerID="3587254eab1a9b3c9b9c6904e7c4c08cb5d6ad982ab6b09b5a67c079c5e3a2a4" Feb 19 20:56:40 crc kubenswrapper[4813]: E0219 20:56:40.472561 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3587254eab1a9b3c9b9c6904e7c4c08cb5d6ad982ab6b09b5a67c079c5e3a2a4\": container with ID starting with 3587254eab1a9b3c9b9c6904e7c4c08cb5d6ad982ab6b09b5a67c079c5e3a2a4 not found: ID does not exist" containerID="3587254eab1a9b3c9b9c6904e7c4c08cb5d6ad982ab6b09b5a67c079c5e3a2a4" Feb 19 20:56:40 crc kubenswrapper[4813]: I0219 20:56:40.472611 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3587254eab1a9b3c9b9c6904e7c4c08cb5d6ad982ab6b09b5a67c079c5e3a2a4"} err="failed to get container status \"3587254eab1a9b3c9b9c6904e7c4c08cb5d6ad982ab6b09b5a67c079c5e3a2a4\": rpc error: code = NotFound desc = could not find container \"3587254eab1a9b3c9b9c6904e7c4c08cb5d6ad982ab6b09b5a67c079c5e3a2a4\": container with ID starting with 3587254eab1a9b3c9b9c6904e7c4c08cb5d6ad982ab6b09b5a67c079c5e3a2a4 not found: ID does not exist" Feb 19 20:56:41 crc kubenswrapper[4813]: I0219 20:56:41.489070 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81b8745e-b4f8-4ca7-b67f-0993a12b6a07" path="/var/lib/kubelet/pods/81b8745e-b4f8-4ca7-b67f-0993a12b6a07/volumes" Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.498256 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.559405 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc3120ee-a320-435e-a3db-082532da4fe5\") pod \"fa05940d-c476-4e31-adce-a834c33da6de\" (UID: \"fa05940d-c476-4e31-adce-a834c33da6de\") " Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.559756 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvxcc\" (UniqueName: \"kubernetes.io/projected/fa05940d-c476-4e31-adce-a834c33da6de-kube-api-access-qvxcc\") pod \"fa05940d-c476-4e31-adce-a834c33da6de\" (UID: \"fa05940d-c476-4e31-adce-a834c33da6de\") " Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.568745 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa05940d-c476-4e31-adce-a834c33da6de-kube-api-access-qvxcc" (OuterVolumeSpecName: "kube-api-access-qvxcc") pod 
"fa05940d-c476-4e31-adce-a834c33da6de" (UID: "fa05940d-c476-4e31-adce-a834c33da6de"). InnerVolumeSpecName "kube-api-access-qvxcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.594886 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc3120ee-a320-435e-a3db-082532da4fe5" (OuterVolumeSpecName: "mariadb-data") pod "fa05940d-c476-4e31-adce-a834c33da6de" (UID: "fa05940d-c476-4e31-adce-a834c33da6de"). InnerVolumeSpecName "pvc-dc3120ee-a320-435e-a3db-082532da4fe5". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.644810 4813 generic.go:334] "Generic (PLEG): container finished" podID="fa05940d-c476-4e31-adce-a834c33da6de" containerID="77f43f65aacf0bda469e9fd3b3b4433ac810fd81b47e9dbe6274e2a56f94842e" exitCode=137 Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.644858 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"fa05940d-c476-4e31-adce-a834c33da6de","Type":"ContainerDied","Data":"77f43f65aacf0bda469e9fd3b3b4433ac810fd81b47e9dbe6274e2a56f94842e"} Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.644888 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"fa05940d-c476-4e31-adce-a834c33da6de","Type":"ContainerDied","Data":"7406040b5b44dbef32df4c5027a41fa12f00617c5a55428d5f6a903c915855c7"} Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.644909 4813 scope.go:117] "RemoveContainer" containerID="77f43f65aacf0bda469e9fd3b3b4433ac810fd81b47e9dbe6274e2a56f94842e" Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.645070 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.663462 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvxcc\" (UniqueName: \"kubernetes.io/projected/fa05940d-c476-4e31-adce-a834c33da6de-kube-api-access-qvxcc\") on node \"crc\" DevicePath \"\"" Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.663528 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dc3120ee-a320-435e-a3db-082532da4fe5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc3120ee-a320-435e-a3db-082532da4fe5\") on node \"crc\" " Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.692431 4813 scope.go:117] "RemoveContainer" containerID="77f43f65aacf0bda469e9fd3b3b4433ac810fd81b47e9dbe6274e2a56f94842e" Feb 19 20:57:06 crc kubenswrapper[4813]: E0219 20:57:06.693481 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77f43f65aacf0bda469e9fd3b3b4433ac810fd81b47e9dbe6274e2a56f94842e\": container with ID starting with 77f43f65aacf0bda469e9fd3b3b4433ac810fd81b47e9dbe6274e2a56f94842e not found: ID does not exist" containerID="77f43f65aacf0bda469e9fd3b3b4433ac810fd81b47e9dbe6274e2a56f94842e" Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.693565 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f43f65aacf0bda469e9fd3b3b4433ac810fd81b47e9dbe6274e2a56f94842e"} err="failed to get container status \"77f43f65aacf0bda469e9fd3b3b4433ac810fd81b47e9dbe6274e2a56f94842e\": rpc error: code = NotFound desc = could not find container \"77f43f65aacf0bda469e9fd3b3b4433ac810fd81b47e9dbe6274e2a56f94842e\": container with ID starting with 77f43f65aacf0bda469e9fd3b3b4433ac810fd81b47e9dbe6274e2a56f94842e not found: ID does not exist" Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.701494 4813 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/mariadb-copy-data"] Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.703888 4813 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.704152 4813 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dc3120ee-a320-435e-a3db-082532da4fe5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc3120ee-a320-435e-a3db-082532da4fe5") on node "crc" Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.712578 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Feb 19 20:57:06 crc kubenswrapper[4813]: I0219 20:57:06.765775 4813 reconciler_common.go:293] "Volume detached for volume \"pvc-dc3120ee-a320-435e-a3db-082532da4fe5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc3120ee-a320-435e-a3db-082532da4fe5\") on node \"crc\" DevicePath \"\"" Feb 19 20:57:07 crc kubenswrapper[4813]: I0219 20:57:07.291414 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Feb 19 20:57:07 crc kubenswrapper[4813]: I0219 20:57:07.291804 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="fe0e72fd-4a3a-45e2-84fb-27f878d6abed" containerName="adoption" containerID="cri-o://dd0bdf42f1cf69ed721181e0595d5ee8635bad4e970d40bcf6a76058e01dd77d" gracePeriod=30 Feb 19 20:57:07 crc kubenswrapper[4813]: I0219 20:57:07.493662 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa05940d-c476-4e31-adce-a834c33da6de" path="/var/lib/kubelet/pods/fa05940d-c476-4e31-adce-a834c33da6de/volumes" Feb 19 20:57:37 crc kubenswrapper[4813]: I0219 20:57:37.886025 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Feb 19 20:57:37 crc kubenswrapper[4813]: I0219 20:57:37.953205 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/fe0e72fd-4a3a-45e2-84fb-27f878d6abed-ovn-data-cert\") pod \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\" (UID: \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\") " Feb 19 20:57:37 crc kubenswrapper[4813]: I0219 20:57:37.954126 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec\") pod \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\" (UID: \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\") " Feb 19 20:57:37 crc kubenswrapper[4813]: I0219 20:57:37.954283 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbkzv\" (UniqueName: \"kubernetes.io/projected/fe0e72fd-4a3a-45e2-84fb-27f878d6abed-kube-api-access-mbkzv\") pod \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\" (UID: \"fe0e72fd-4a3a-45e2-84fb-27f878d6abed\") " Feb 19 20:57:37 crc kubenswrapper[4813]: I0219 20:57:37.962853 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0e72fd-4a3a-45e2-84fb-27f878d6abed-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "fe0e72fd-4a3a-45e2-84fb-27f878d6abed" (UID: "fe0e72fd-4a3a-45e2-84fb-27f878d6abed"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:57:37 crc kubenswrapper[4813]: I0219 20:57:37.963591 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe0e72fd-4a3a-45e2-84fb-27f878d6abed-kube-api-access-mbkzv" (OuterVolumeSpecName: "kube-api-access-mbkzv") pod "fe0e72fd-4a3a-45e2-84fb-27f878d6abed" (UID: "fe0e72fd-4a3a-45e2-84fb-27f878d6abed"). InnerVolumeSpecName "kube-api-access-mbkzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:57:37 crc kubenswrapper[4813]: I0219 20:57:37.974654 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec" (OuterVolumeSpecName: "ovn-data") pod "fe0e72fd-4a3a-45e2-84fb-27f878d6abed" (UID: "fe0e72fd-4a3a-45e2-84fb-27f878d6abed"). InnerVolumeSpecName "pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 20:57:38 crc kubenswrapper[4813]: I0219 20:57:38.033666 4813 generic.go:334] "Generic (PLEG): container finished" podID="fe0e72fd-4a3a-45e2-84fb-27f878d6abed" containerID="dd0bdf42f1cf69ed721181e0595d5ee8635bad4e970d40bcf6a76058e01dd77d" exitCode=137 Feb 19 20:57:38 crc kubenswrapper[4813]: I0219 20:57:38.033711 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"fe0e72fd-4a3a-45e2-84fb-27f878d6abed","Type":"ContainerDied","Data":"dd0bdf42f1cf69ed721181e0595d5ee8635bad4e970d40bcf6a76058e01dd77d"} Feb 19 20:57:38 crc kubenswrapper[4813]: I0219 20:57:38.033737 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"fe0e72fd-4a3a-45e2-84fb-27f878d6abed","Type":"ContainerDied","Data":"87b7988136874be2257e26257e75812aa348d480b35f1ed7ce1a86c17af51ebb"} Feb 19 20:57:38 crc kubenswrapper[4813]: I0219 20:57:38.033757 4813 scope.go:117] "RemoveContainer" containerID="dd0bdf42f1cf69ed721181e0595d5ee8635bad4e970d40bcf6a76058e01dd77d" Feb 19 20:57:38 crc kubenswrapper[4813]: I0219 20:57:38.033759 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Feb 19 20:57:38 crc kubenswrapper[4813]: I0219 20:57:38.056461 4813 scope.go:117] "RemoveContainer" containerID="dd0bdf42f1cf69ed721181e0595d5ee8635bad4e970d40bcf6a76058e01dd77d" Feb 19 20:57:38 crc kubenswrapper[4813]: I0219 20:57:38.056980 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbkzv\" (UniqueName: \"kubernetes.io/projected/fe0e72fd-4a3a-45e2-84fb-27f878d6abed-kube-api-access-mbkzv\") on node \"crc\" DevicePath \"\"" Feb 19 20:57:38 crc kubenswrapper[4813]: I0219 20:57:38.057013 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/fe0e72fd-4a3a-45e2-84fb-27f878d6abed-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Feb 19 20:57:38 crc kubenswrapper[4813]: I0219 20:57:38.057060 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec\") on node \"crc\" " Feb 19 20:57:38 crc kubenswrapper[4813]: E0219 20:57:38.057065 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd0bdf42f1cf69ed721181e0595d5ee8635bad4e970d40bcf6a76058e01dd77d\": container with ID starting with dd0bdf42f1cf69ed721181e0595d5ee8635bad4e970d40bcf6a76058e01dd77d not found: ID does not exist" containerID="dd0bdf42f1cf69ed721181e0595d5ee8635bad4e970d40bcf6a76058e01dd77d" Feb 19 20:57:38 crc kubenswrapper[4813]: I0219 20:57:38.057088 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd0bdf42f1cf69ed721181e0595d5ee8635bad4e970d40bcf6a76058e01dd77d"} err="failed to get container status \"dd0bdf42f1cf69ed721181e0595d5ee8635bad4e970d40bcf6a76058e01dd77d\": rpc error: code = NotFound desc = could not find container 
\"dd0bdf42f1cf69ed721181e0595d5ee8635bad4e970d40bcf6a76058e01dd77d\": container with ID starting with dd0bdf42f1cf69ed721181e0595d5ee8635bad4e970d40bcf6a76058e01dd77d not found: ID does not exist" Feb 19 20:57:38 crc kubenswrapper[4813]: I0219 20:57:38.077755 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Feb 19 20:57:38 crc kubenswrapper[4813]: I0219 20:57:38.089605 4813 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 20:57:38 crc kubenswrapper[4813]: I0219 20:57:38.089768 4813 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec") on node "crc" Feb 19 20:57:38 crc kubenswrapper[4813]: I0219 20:57:38.090489 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Feb 19 20:57:38 crc kubenswrapper[4813]: I0219 20:57:38.158520 4813 reconciler_common.go:293] "Volume detached for volume \"pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e9d2a81-d3ea-4684-b3b4-403e3a517eec\") on node \"crc\" DevicePath \"\"" Feb 19 20:57:39 crc kubenswrapper[4813]: I0219 20:57:39.490873 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe0e72fd-4a3a-45e2-84fb-27f878d6abed" path="/var/lib/kubelet/pods/fe0e72fd-4a3a-45e2-84fb-27f878d6abed/volumes" Feb 19 20:58:30 crc kubenswrapper[4813]: I0219 20:58:30.330101 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:58:30 crc kubenswrapper[4813]: I0219 20:58:30.330761 4813 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.747039 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2565l/must-gather-4q446"] Feb 19 20:58:46 crc kubenswrapper[4813]: E0219 20:58:46.748045 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b8745e-b4f8-4ca7-b67f-0993a12b6a07" containerName="registry-server" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.748059 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b8745e-b4f8-4ca7-b67f-0993a12b6a07" containerName="registry-server" Feb 19 20:58:46 crc kubenswrapper[4813]: E0219 20:58:46.748078 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa05940d-c476-4e31-adce-a834c33da6de" containerName="adoption" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.748084 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa05940d-c476-4e31-adce-a834c33da6de" containerName="adoption" Feb 19 20:58:46 crc kubenswrapper[4813]: E0219 20:58:46.748104 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b8745e-b4f8-4ca7-b67f-0993a12b6a07" containerName="extract-utilities" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.748110 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b8745e-b4f8-4ca7-b67f-0993a12b6a07" containerName="extract-utilities" Feb 19 20:58:46 crc kubenswrapper[4813]: E0219 20:58:46.748132 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0e72fd-4a3a-45e2-84fb-27f878d6abed" containerName="adoption" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.748138 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fe0e72fd-4a3a-45e2-84fb-27f878d6abed" containerName="adoption" Feb 19 20:58:46 crc kubenswrapper[4813]: E0219 20:58:46.748169 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b8745e-b4f8-4ca7-b67f-0993a12b6a07" containerName="extract-content" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.748175 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b8745e-b4f8-4ca7-b67f-0993a12b6a07" containerName="extract-content" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.748370 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b8745e-b4f8-4ca7-b67f-0993a12b6a07" containerName="registry-server" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.748378 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa05940d-c476-4e31-adce-a834c33da6de" containerName="adoption" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.748404 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe0e72fd-4a3a-45e2-84fb-27f878d6abed" containerName="adoption" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.749511 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2565l/must-gather-4q446" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.752743 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2565l"/"kube-root-ca.crt" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.753008 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2565l"/"openshift-service-ca.crt" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.753545 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2565l"/"default-dockercfg-84nsf" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.761352 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2565l/must-gather-4q446"] Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.811829 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbch8\" (UniqueName: \"kubernetes.io/projected/e3d21058-7dbc-453a-aae0-d76fe8da5b16-kube-api-access-hbch8\") pod \"must-gather-4q446\" (UID: \"e3d21058-7dbc-453a-aae0-d76fe8da5b16\") " pod="openshift-must-gather-2565l/must-gather-4q446" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.812015 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e3d21058-7dbc-453a-aae0-d76fe8da5b16-must-gather-output\") pod \"must-gather-4q446\" (UID: \"e3d21058-7dbc-453a-aae0-d76fe8da5b16\") " pod="openshift-must-gather-2565l/must-gather-4q446" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.913535 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbch8\" (UniqueName: \"kubernetes.io/projected/e3d21058-7dbc-453a-aae0-d76fe8da5b16-kube-api-access-hbch8\") pod \"must-gather-4q446\" (UID: \"e3d21058-7dbc-453a-aae0-d76fe8da5b16\") " 
pod="openshift-must-gather-2565l/must-gather-4q446" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.913668 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e3d21058-7dbc-453a-aae0-d76fe8da5b16-must-gather-output\") pod \"must-gather-4q446\" (UID: \"e3d21058-7dbc-453a-aae0-d76fe8da5b16\") " pod="openshift-must-gather-2565l/must-gather-4q446" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.914136 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e3d21058-7dbc-453a-aae0-d76fe8da5b16-must-gather-output\") pod \"must-gather-4q446\" (UID: \"e3d21058-7dbc-453a-aae0-d76fe8da5b16\") " pod="openshift-must-gather-2565l/must-gather-4q446" Feb 19 20:58:46 crc kubenswrapper[4813]: I0219 20:58:46.930626 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbch8\" (UniqueName: \"kubernetes.io/projected/e3d21058-7dbc-453a-aae0-d76fe8da5b16-kube-api-access-hbch8\") pod \"must-gather-4q446\" (UID: \"e3d21058-7dbc-453a-aae0-d76fe8da5b16\") " pod="openshift-must-gather-2565l/must-gather-4q446" Feb 19 20:58:47 crc kubenswrapper[4813]: I0219 20:58:47.071092 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2565l/must-gather-4q446" Feb 19 20:58:47 crc kubenswrapper[4813]: I0219 20:58:47.584190 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2565l/must-gather-4q446"] Feb 19 20:58:47 crc kubenswrapper[4813]: I0219 20:58:47.607191 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:58:47 crc kubenswrapper[4813]: I0219 20:58:47.922489 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2565l/must-gather-4q446" event={"ID":"e3d21058-7dbc-453a-aae0-d76fe8da5b16","Type":"ContainerStarted","Data":"8ebc87c3505e881b2f5c18127f4d937127da6ff1666b63dc7a6467f2ba5cf322"} Feb 19 20:58:54 crc kubenswrapper[4813]: I0219 20:58:54.994017 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2565l/must-gather-4q446" event={"ID":"e3d21058-7dbc-453a-aae0-d76fe8da5b16","Type":"ContainerStarted","Data":"40b3f467dae7f3d00e38871b770565c8fc7388e2337b86490eed5442f04755f5"} Feb 19 20:58:54 crc kubenswrapper[4813]: I0219 20:58:54.994461 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2565l/must-gather-4q446" event={"ID":"e3d21058-7dbc-453a-aae0-d76fe8da5b16","Type":"ContainerStarted","Data":"8037638bf0310664ddb957747ef2ca41c338a6cd4d2b3347a67e1cbc0e72d1dd"} Feb 19 20:58:58 crc kubenswrapper[4813]: I0219 20:58:58.484603 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2565l/must-gather-4q446" podStartSLOduration=5.8970130659999995 podStartE2EDuration="12.484582461s" podCreationTimestamp="2026-02-19 20:58:46 +0000 UTC" firstStartedPulling="2026-02-19 20:58:47.606782482 +0000 UTC m=+8946.832223053" lastFinishedPulling="2026-02-19 20:58:54.194351907 +0000 UTC m=+8953.419792448" observedRunningTime="2026-02-19 20:58:55.021109236 +0000 UTC m=+8954.246549777" watchObservedRunningTime="2026-02-19 20:58:58.484582461 +0000 UTC 
m=+8957.710023002" Feb 19 20:58:58 crc kubenswrapper[4813]: I0219 20:58:58.486745 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2565l/crc-debug-6w9ll"] Feb 19 20:58:58 crc kubenswrapper[4813]: I0219 20:58:58.488064 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2565l/crc-debug-6w9ll" Feb 19 20:58:58 crc kubenswrapper[4813]: I0219 20:58:58.676937 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d5c057f-d53c-4783-9f26-67c5dd09fb64-host\") pod \"crc-debug-6w9ll\" (UID: \"1d5c057f-d53c-4783-9f26-67c5dd09fb64\") " pod="openshift-must-gather-2565l/crc-debug-6w9ll" Feb 19 20:58:58 crc kubenswrapper[4813]: I0219 20:58:58.677369 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sswx7\" (UniqueName: \"kubernetes.io/projected/1d5c057f-d53c-4783-9f26-67c5dd09fb64-kube-api-access-sswx7\") pod \"crc-debug-6w9ll\" (UID: \"1d5c057f-d53c-4783-9f26-67c5dd09fb64\") " pod="openshift-must-gather-2565l/crc-debug-6w9ll" Feb 19 20:58:58 crc kubenswrapper[4813]: I0219 20:58:58.779437 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d5c057f-d53c-4783-9f26-67c5dd09fb64-host\") pod \"crc-debug-6w9ll\" (UID: \"1d5c057f-d53c-4783-9f26-67c5dd09fb64\") " pod="openshift-must-gather-2565l/crc-debug-6w9ll" Feb 19 20:58:58 crc kubenswrapper[4813]: I0219 20:58:58.779555 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sswx7\" (UniqueName: \"kubernetes.io/projected/1d5c057f-d53c-4783-9f26-67c5dd09fb64-kube-api-access-sswx7\") pod \"crc-debug-6w9ll\" (UID: \"1d5c057f-d53c-4783-9f26-67c5dd09fb64\") " pod="openshift-must-gather-2565l/crc-debug-6w9ll" Feb 19 20:58:58 crc kubenswrapper[4813]: I0219 20:58:58.779875 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d5c057f-d53c-4783-9f26-67c5dd09fb64-host\") pod \"crc-debug-6w9ll\" (UID: \"1d5c057f-d53c-4783-9f26-67c5dd09fb64\") " pod="openshift-must-gather-2565l/crc-debug-6w9ll" Feb 19 20:58:58 crc kubenswrapper[4813]: I0219 20:58:58.801032 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sswx7\" (UniqueName: \"kubernetes.io/projected/1d5c057f-d53c-4783-9f26-67c5dd09fb64-kube-api-access-sswx7\") pod \"crc-debug-6w9ll\" (UID: \"1d5c057f-d53c-4783-9f26-67c5dd09fb64\") " pod="openshift-must-gather-2565l/crc-debug-6w9ll" Feb 19 20:58:58 crc kubenswrapper[4813]: I0219 20:58:58.810816 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2565l/crc-debug-6w9ll" Feb 19 20:58:58 crc kubenswrapper[4813]: W0219 20:58:58.853677 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d5c057f_d53c_4783_9f26_67c5dd09fb64.slice/crio-e451bc5ac4dc3b37fcd6fe4053a363297043cbbdbd274871591d6da727f36b63 WatchSource:0}: Error finding container e451bc5ac4dc3b37fcd6fe4053a363297043cbbdbd274871591d6da727f36b63: Status 404 returned error can't find the container with id e451bc5ac4dc3b37fcd6fe4053a363297043cbbdbd274871591d6da727f36b63 Feb 19 20:58:59 crc kubenswrapper[4813]: I0219 20:58:59.055843 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2565l/crc-debug-6w9ll" event={"ID":"1d5c057f-d53c-4783-9f26-67c5dd09fb64","Type":"ContainerStarted","Data":"e451bc5ac4dc3b37fcd6fe4053a363297043cbbdbd274871591d6da727f36b63"} Feb 19 20:59:00 crc kubenswrapper[4813]: I0219 20:59:00.330082 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:59:00 crc kubenswrapper[4813]: I0219 20:59:00.330375 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:59:10 crc kubenswrapper[4813]: I0219 20:59:10.162085 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2565l/crc-debug-6w9ll" event={"ID":"1d5c057f-d53c-4783-9f26-67c5dd09fb64","Type":"ContainerStarted","Data":"6c65bf7e6c35be86b165f76c99b0d460a5281d2e69afbfa6009a99488db55bbc"} Feb 19 20:59:10 crc kubenswrapper[4813]: I0219 20:59:10.176579 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2565l/crc-debug-6w9ll" podStartSLOduration=1.317557262 podStartE2EDuration="12.176562011s" podCreationTimestamp="2026-02-19 20:58:58 +0000 UTC" firstStartedPulling="2026-02-19 20:58:58.856618777 +0000 UTC m=+8958.082059358" lastFinishedPulling="2026-02-19 20:59:09.715623576 +0000 UTC m=+8968.941064107" observedRunningTime="2026-02-19 20:59:10.174823127 +0000 UTC m=+8969.400263658" watchObservedRunningTime="2026-02-19 20:59:10.176562011 +0000 UTC m=+8969.402002562" Feb 19 20:59:30 crc kubenswrapper[4813]: I0219 20:59:30.329603 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:59:30 crc kubenswrapper[4813]: I0219 20:59:30.330353 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" 
podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:59:30 crc kubenswrapper[4813]: I0219 20:59:30.330672 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 20:59:30 crc kubenswrapper[4813]: I0219 20:59:30.331697 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"643ad5a27899764fe60b8450a47154aec9c0cdecfadc2e3fdb35dc34121d971d"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:59:30 crc kubenswrapper[4813]: I0219 20:59:30.331797 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://643ad5a27899764fe60b8450a47154aec9c0cdecfadc2e3fdb35dc34121d971d" gracePeriod=600 Feb 19 20:59:31 crc kubenswrapper[4813]: I0219 20:59:31.351783 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="643ad5a27899764fe60b8450a47154aec9c0cdecfadc2e3fdb35dc34121d971d" exitCode=0 Feb 19 20:59:31 crc kubenswrapper[4813]: I0219 20:59:31.352090 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"643ad5a27899764fe60b8450a47154aec9c0cdecfadc2e3fdb35dc34121d971d"} Feb 19 20:59:31 crc kubenswrapper[4813]: I0219 20:59:31.352116 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb"} Feb 19 20:59:31 crc kubenswrapper[4813]: I0219 20:59:31.352131 4813 scope.go:117] "RemoveContainer" containerID="b2b72de14d5df72f4b9cd6c4c268dd9d8a7c24958501081d46a76c40b1cb8b63" Feb 19 20:59:37 crc kubenswrapper[4813]: I0219 20:59:37.420301 4813 generic.go:334] "Generic (PLEG): container finished" podID="1d5c057f-d53c-4783-9f26-67c5dd09fb64" containerID="6c65bf7e6c35be86b165f76c99b0d460a5281d2e69afbfa6009a99488db55bbc" exitCode=0 Feb 19 20:59:37 crc kubenswrapper[4813]: I0219 20:59:37.420418 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2565l/crc-debug-6w9ll" event={"ID":"1d5c057f-d53c-4783-9f26-67c5dd09fb64","Type":"ContainerDied","Data":"6c65bf7e6c35be86b165f76c99b0d460a5281d2e69afbfa6009a99488db55bbc"} Feb 19 20:59:38 crc kubenswrapper[4813]: I0219 20:59:38.523719 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2565l/crc-debug-6w9ll" Feb 19 20:59:38 crc kubenswrapper[4813]: I0219 20:59:38.553256 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2565l/crc-debug-6w9ll"] Feb 19 20:59:38 crc kubenswrapper[4813]: I0219 20:59:38.560299 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2565l/crc-debug-6w9ll"] Feb 19 20:59:38 crc kubenswrapper[4813]: I0219 20:59:38.614086 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d5c057f-d53c-4783-9f26-67c5dd09fb64-host\") pod \"1d5c057f-d53c-4783-9f26-67c5dd09fb64\" (UID: \"1d5c057f-d53c-4783-9f26-67c5dd09fb64\") " Feb 19 20:59:38 crc kubenswrapper[4813]: I0219 20:59:38.614205 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sswx7\" (UniqueName: \"kubernetes.io/projected/1d5c057f-d53c-4783-9f26-67c5dd09fb64-kube-api-access-sswx7\") pod \"1d5c057f-d53c-4783-9f26-67c5dd09fb64\" (UID: \"1d5c057f-d53c-4783-9f26-67c5dd09fb64\") " Feb 19 20:59:38 crc kubenswrapper[4813]: I0219 20:59:38.614227 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d5c057f-d53c-4783-9f26-67c5dd09fb64-host" (OuterVolumeSpecName: "host") pod "1d5c057f-d53c-4783-9f26-67c5dd09fb64" (UID: "1d5c057f-d53c-4783-9f26-67c5dd09fb64"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 20:59:38 crc kubenswrapper[4813]: I0219 20:59:38.614713 4813 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d5c057f-d53c-4783-9f26-67c5dd09fb64-host\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:38 crc kubenswrapper[4813]: I0219 20:59:38.621105 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d5c057f-d53c-4783-9f26-67c5dd09fb64-kube-api-access-sswx7" (OuterVolumeSpecName: "kube-api-access-sswx7") pod "1d5c057f-d53c-4783-9f26-67c5dd09fb64" (UID: "1d5c057f-d53c-4783-9f26-67c5dd09fb64"). InnerVolumeSpecName "kube-api-access-sswx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:38 crc kubenswrapper[4813]: I0219 20:59:38.717285 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sswx7\" (UniqueName: \"kubernetes.io/projected/1d5c057f-d53c-4783-9f26-67c5dd09fb64-kube-api-access-sswx7\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:39 crc kubenswrapper[4813]: I0219 20:59:39.439257 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e451bc5ac4dc3b37fcd6fe4053a363297043cbbdbd274871591d6da727f36b63" Feb 19 20:59:39 crc kubenswrapper[4813]: I0219 20:59:39.439334 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2565l/crc-debug-6w9ll" Feb 19 20:59:39 crc kubenswrapper[4813]: I0219 20:59:39.481645 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d5c057f-d53c-4783-9f26-67c5dd09fb64" path="/var/lib/kubelet/pods/1d5c057f-d53c-4783-9f26-67c5dd09fb64/volumes" Feb 19 20:59:39 crc kubenswrapper[4813]: I0219 20:59:39.729673 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2565l/crc-debug-7m4s4"] Feb 19 20:59:39 crc kubenswrapper[4813]: E0219 20:59:39.730199 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d5c057f-d53c-4783-9f26-67c5dd09fb64" containerName="container-00" Feb 19 20:59:39 crc kubenswrapper[4813]: I0219 20:59:39.730214 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d5c057f-d53c-4783-9f26-67c5dd09fb64" containerName="container-00" Feb 19 20:59:39 crc kubenswrapper[4813]: I0219 20:59:39.730455 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d5c057f-d53c-4783-9f26-67c5dd09fb64" containerName="container-00" Feb 19 20:59:39 crc kubenswrapper[4813]: I0219 20:59:39.731292 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2565l/crc-debug-7m4s4" Feb 19 20:59:39 crc kubenswrapper[4813]: I0219 20:59:39.837291 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t84jq\" (UniqueName: \"kubernetes.io/projected/9544a86d-41f0-4eff-8c3e-f0af8199a793-kube-api-access-t84jq\") pod \"crc-debug-7m4s4\" (UID: \"9544a86d-41f0-4eff-8c3e-f0af8199a793\") " pod="openshift-must-gather-2565l/crc-debug-7m4s4" Feb 19 20:59:39 crc kubenswrapper[4813]: I0219 20:59:39.837615 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9544a86d-41f0-4eff-8c3e-f0af8199a793-host\") pod \"crc-debug-7m4s4\" (UID: \"9544a86d-41f0-4eff-8c3e-f0af8199a793\") " pod="openshift-must-gather-2565l/crc-debug-7m4s4" Feb 19 20:59:39 crc kubenswrapper[4813]: I0219 20:59:39.940280 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t84jq\" (UniqueName: \"kubernetes.io/projected/9544a86d-41f0-4eff-8c3e-f0af8199a793-kube-api-access-t84jq\") pod \"crc-debug-7m4s4\" (UID: \"9544a86d-41f0-4eff-8c3e-f0af8199a793\") " pod="openshift-must-gather-2565l/crc-debug-7m4s4" Feb 19 20:59:39 crc kubenswrapper[4813]: I0219 20:59:39.940654 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9544a86d-41f0-4eff-8c3e-f0af8199a793-host\") pod \"crc-debug-7m4s4\" (UID: \"9544a86d-41f0-4eff-8c3e-f0af8199a793\") " pod="openshift-must-gather-2565l/crc-debug-7m4s4" Feb 19 20:59:39 crc kubenswrapper[4813]: I0219 20:59:39.940804 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9544a86d-41f0-4eff-8c3e-f0af8199a793-host\") pod \"crc-debug-7m4s4\" (UID: \"9544a86d-41f0-4eff-8c3e-f0af8199a793\") " pod="openshift-must-gather-2565l/crc-debug-7m4s4" Feb 19 20:59:39 crc 
kubenswrapper[4813]: I0219 20:59:39.958313 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t84jq\" (UniqueName: \"kubernetes.io/projected/9544a86d-41f0-4eff-8c3e-f0af8199a793-kube-api-access-t84jq\") pod \"crc-debug-7m4s4\" (UID: \"9544a86d-41f0-4eff-8c3e-f0af8199a793\") " pod="openshift-must-gather-2565l/crc-debug-7m4s4" Feb 19 20:59:40 crc kubenswrapper[4813]: I0219 20:59:40.049298 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2565l/crc-debug-7m4s4" Feb 19 20:59:40 crc kubenswrapper[4813]: I0219 20:59:40.453275 4813 generic.go:334] "Generic (PLEG): container finished" podID="9544a86d-41f0-4eff-8c3e-f0af8199a793" containerID="af0eda8c6c88bfc4ca4f468fa1c9e01af32eb8e1b811791866310b97e5f41928" exitCode=0 Feb 19 20:59:40 crc kubenswrapper[4813]: I0219 20:59:40.453395 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2565l/crc-debug-7m4s4" event={"ID":"9544a86d-41f0-4eff-8c3e-f0af8199a793","Type":"ContainerDied","Data":"af0eda8c6c88bfc4ca4f468fa1c9e01af32eb8e1b811791866310b97e5f41928"} Feb 19 20:59:40 crc kubenswrapper[4813]: I0219 20:59:40.453546 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2565l/crc-debug-7m4s4" event={"ID":"9544a86d-41f0-4eff-8c3e-f0af8199a793","Type":"ContainerStarted","Data":"a9eba9d21a5a9bab922d3e44903c669fe74131162dcca04409e8a41e232abab7"} Feb 19 20:59:40 crc kubenswrapper[4813]: I0219 20:59:40.628055 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2565l/crc-debug-7m4s4"] Feb 19 20:59:40 crc kubenswrapper[4813]: I0219 20:59:40.637225 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2565l/crc-debug-7m4s4"] Feb 19 20:59:41 crc kubenswrapper[4813]: I0219 20:59:41.588789 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2565l/crc-debug-7m4s4" Feb 19 20:59:41 crc kubenswrapper[4813]: I0219 20:59:41.674538 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9544a86d-41f0-4eff-8c3e-f0af8199a793-host\") pod \"9544a86d-41f0-4eff-8c3e-f0af8199a793\" (UID: \"9544a86d-41f0-4eff-8c3e-f0af8199a793\") " Feb 19 20:59:41 crc kubenswrapper[4813]: I0219 20:59:41.674746 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t84jq\" (UniqueName: \"kubernetes.io/projected/9544a86d-41f0-4eff-8c3e-f0af8199a793-kube-api-access-t84jq\") pod \"9544a86d-41f0-4eff-8c3e-f0af8199a793\" (UID: \"9544a86d-41f0-4eff-8c3e-f0af8199a793\") " Feb 19 20:59:41 crc kubenswrapper[4813]: I0219 20:59:41.675735 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9544a86d-41f0-4eff-8c3e-f0af8199a793-host" (OuterVolumeSpecName: "host") pod "9544a86d-41f0-4eff-8c3e-f0af8199a793" (UID: "9544a86d-41f0-4eff-8c3e-f0af8199a793"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 20:59:41 crc kubenswrapper[4813]: I0219 20:59:41.681235 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9544a86d-41f0-4eff-8c3e-f0af8199a793-kube-api-access-t84jq" (OuterVolumeSpecName: "kube-api-access-t84jq") pod "9544a86d-41f0-4eff-8c3e-f0af8199a793" (UID: "9544a86d-41f0-4eff-8c3e-f0af8199a793"). InnerVolumeSpecName "kube-api-access-t84jq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:41 crc kubenswrapper[4813]: I0219 20:59:41.777048 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t84jq\" (UniqueName: \"kubernetes.io/projected/9544a86d-41f0-4eff-8c3e-f0af8199a793-kube-api-access-t84jq\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:41 crc kubenswrapper[4813]: I0219 20:59:41.777075 4813 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9544a86d-41f0-4eff-8c3e-f0af8199a793-host\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:41 crc kubenswrapper[4813]: I0219 20:59:41.865033 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2565l/crc-debug-226gv"] Feb 19 20:59:41 crc kubenswrapper[4813]: E0219 20:59:41.865428 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9544a86d-41f0-4eff-8c3e-f0af8199a793" containerName="container-00" Feb 19 20:59:41 crc kubenswrapper[4813]: I0219 20:59:41.865446 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9544a86d-41f0-4eff-8c3e-f0af8199a793" containerName="container-00" Feb 19 20:59:41 crc kubenswrapper[4813]: I0219 20:59:41.865658 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9544a86d-41f0-4eff-8c3e-f0af8199a793" containerName="container-00" Feb 19 20:59:41 crc kubenswrapper[4813]: I0219 20:59:41.866321 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2565l/crc-debug-226gv" Feb 19 20:59:41 crc kubenswrapper[4813]: I0219 20:59:41.980552 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89e801d8-d728-4367-996f-5b7183fbcdca-host\") pod \"crc-debug-226gv\" (UID: \"89e801d8-d728-4367-996f-5b7183fbcdca\") " pod="openshift-must-gather-2565l/crc-debug-226gv" Feb 19 20:59:41 crc kubenswrapper[4813]: I0219 20:59:41.980634 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7bp5\" (UniqueName: \"kubernetes.io/projected/89e801d8-d728-4367-996f-5b7183fbcdca-kube-api-access-w7bp5\") pod \"crc-debug-226gv\" (UID: \"89e801d8-d728-4367-996f-5b7183fbcdca\") " pod="openshift-must-gather-2565l/crc-debug-226gv" Feb 19 20:59:42 crc kubenswrapper[4813]: I0219 20:59:42.082335 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89e801d8-d728-4367-996f-5b7183fbcdca-host\") pod \"crc-debug-226gv\" (UID: \"89e801d8-d728-4367-996f-5b7183fbcdca\") " pod="openshift-must-gather-2565l/crc-debug-226gv" Feb 19 20:59:42 crc kubenswrapper[4813]: I0219 20:59:42.082428 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7bp5\" (UniqueName: \"kubernetes.io/projected/89e801d8-d728-4367-996f-5b7183fbcdca-kube-api-access-w7bp5\") pod \"crc-debug-226gv\" (UID: \"89e801d8-d728-4367-996f-5b7183fbcdca\") " pod="openshift-must-gather-2565l/crc-debug-226gv" Feb 19 20:59:42 crc kubenswrapper[4813]: I0219 20:59:42.082445 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89e801d8-d728-4367-996f-5b7183fbcdca-host\") pod \"crc-debug-226gv\" (UID: \"89e801d8-d728-4367-996f-5b7183fbcdca\") " pod="openshift-must-gather-2565l/crc-debug-226gv" Feb 19 20:59:42 crc 
kubenswrapper[4813]: I0219 20:59:42.097128 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7bp5\" (UniqueName: \"kubernetes.io/projected/89e801d8-d728-4367-996f-5b7183fbcdca-kube-api-access-w7bp5\") pod \"crc-debug-226gv\" (UID: \"89e801d8-d728-4367-996f-5b7183fbcdca\") " pod="openshift-must-gather-2565l/crc-debug-226gv" Feb 19 20:59:42 crc kubenswrapper[4813]: I0219 20:59:42.181677 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2565l/crc-debug-226gv" Feb 19 20:59:42 crc kubenswrapper[4813]: I0219 20:59:42.477754 4813 scope.go:117] "RemoveContainer" containerID="af0eda8c6c88bfc4ca4f468fa1c9e01af32eb8e1b811791866310b97e5f41928" Feb 19 20:59:42 crc kubenswrapper[4813]: I0219 20:59:42.477770 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2565l/crc-debug-7m4s4" Feb 19 20:59:42 crc kubenswrapper[4813]: I0219 20:59:42.481454 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2565l/crc-debug-226gv" event={"ID":"89e801d8-d728-4367-996f-5b7183fbcdca","Type":"ContainerStarted","Data":"f4c360423fbac0244b6fc0c36e5062c36058735bca7ad5e44288a2c5bf3abc0d"} Feb 19 20:59:43 crc kubenswrapper[4813]: I0219 20:59:43.488923 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9544a86d-41f0-4eff-8c3e-f0af8199a793" path="/var/lib/kubelet/pods/9544a86d-41f0-4eff-8c3e-f0af8199a793/volumes" Feb 19 20:59:43 crc kubenswrapper[4813]: I0219 20:59:43.495815 4813 generic.go:334] "Generic (PLEG): container finished" podID="89e801d8-d728-4367-996f-5b7183fbcdca" containerID="69f3d74b58a36735095f9e6a36533d5841c007bdf2793ebbbe9b7454e8f22add" exitCode=0 Feb 19 20:59:43 crc kubenswrapper[4813]: I0219 20:59:43.495865 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2565l/crc-debug-226gv" 
event={"ID":"89e801d8-d728-4367-996f-5b7183fbcdca","Type":"ContainerDied","Data":"69f3d74b58a36735095f9e6a36533d5841c007bdf2793ebbbe9b7454e8f22add"} Feb 19 20:59:43 crc kubenswrapper[4813]: I0219 20:59:43.543967 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2565l/crc-debug-226gv"] Feb 19 20:59:43 crc kubenswrapper[4813]: I0219 20:59:43.556482 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2565l/crc-debug-226gv"] Feb 19 20:59:44 crc kubenswrapper[4813]: I0219 20:59:44.605795 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2565l/crc-debug-226gv" Feb 19 20:59:44 crc kubenswrapper[4813]: I0219 20:59:44.734519 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89e801d8-d728-4367-996f-5b7183fbcdca-host\") pod \"89e801d8-d728-4367-996f-5b7183fbcdca\" (UID: \"89e801d8-d728-4367-996f-5b7183fbcdca\") " Feb 19 20:59:44 crc kubenswrapper[4813]: I0219 20:59:44.734626 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89e801d8-d728-4367-996f-5b7183fbcdca-host" (OuterVolumeSpecName: "host") pod "89e801d8-d728-4367-996f-5b7183fbcdca" (UID: "89e801d8-d728-4367-996f-5b7183fbcdca"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 20:59:44 crc kubenswrapper[4813]: I0219 20:59:44.734722 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7bp5\" (UniqueName: \"kubernetes.io/projected/89e801d8-d728-4367-996f-5b7183fbcdca-kube-api-access-w7bp5\") pod \"89e801d8-d728-4367-996f-5b7183fbcdca\" (UID: \"89e801d8-d728-4367-996f-5b7183fbcdca\") " Feb 19 20:59:44 crc kubenswrapper[4813]: I0219 20:59:44.735157 4813 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89e801d8-d728-4367-996f-5b7183fbcdca-host\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:44 crc kubenswrapper[4813]: I0219 20:59:44.740004 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e801d8-d728-4367-996f-5b7183fbcdca-kube-api-access-w7bp5" (OuterVolumeSpecName: "kube-api-access-w7bp5") pod "89e801d8-d728-4367-996f-5b7183fbcdca" (UID: "89e801d8-d728-4367-996f-5b7183fbcdca"). InnerVolumeSpecName "kube-api-access-w7bp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:44 crc kubenswrapper[4813]: I0219 20:59:44.836502 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7bp5\" (UniqueName: \"kubernetes.io/projected/89e801d8-d728-4367-996f-5b7183fbcdca-kube-api-access-w7bp5\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:45 crc kubenswrapper[4813]: I0219 20:59:45.491915 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89e801d8-d728-4367-996f-5b7183fbcdca" path="/var/lib/kubelet/pods/89e801d8-d728-4367-996f-5b7183fbcdca/volumes" Feb 19 20:59:45 crc kubenswrapper[4813]: I0219 20:59:45.515430 4813 scope.go:117] "RemoveContainer" containerID="69f3d74b58a36735095f9e6a36533d5841c007bdf2793ebbbe9b7454e8f22add" Feb 19 20:59:45 crc kubenswrapper[4813]: I0219 20:59:45.515493 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2565l/crc-debug-226gv" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.190868 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc"] Feb 19 21:00:00 crc kubenswrapper[4813]: E0219 21:00:00.192039 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e801d8-d728-4367-996f-5b7183fbcdca" containerName="container-00" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.192056 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e801d8-d728-4367-996f-5b7183fbcdca" containerName="container-00" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.192379 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e801d8-d728-4367-996f-5b7183fbcdca" containerName="container-00" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.193274 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.198259 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.198539 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.206481 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc"] Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.267265 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92pfx\" (UniqueName: \"kubernetes.io/projected/29716aeb-e3c3-469a-8840-b775132e6b56-kube-api-access-92pfx\") pod \"collect-profiles-29525580-lqghc\" 
(UID: \"29716aeb-e3c3-469a-8840-b775132e6b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.267335 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/29716aeb-e3c3-469a-8840-b775132e6b56-secret-volume\") pod \"collect-profiles-29525580-lqghc\" (UID: \"29716aeb-e3c3-469a-8840-b775132e6b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.267428 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29716aeb-e3c3-469a-8840-b775132e6b56-config-volume\") pod \"collect-profiles-29525580-lqghc\" (UID: \"29716aeb-e3c3-469a-8840-b775132e6b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.368849 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92pfx\" (UniqueName: \"kubernetes.io/projected/29716aeb-e3c3-469a-8840-b775132e6b56-kube-api-access-92pfx\") pod \"collect-profiles-29525580-lqghc\" (UID: \"29716aeb-e3c3-469a-8840-b775132e6b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.368913 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/29716aeb-e3c3-469a-8840-b775132e6b56-secret-volume\") pod \"collect-profiles-29525580-lqghc\" (UID: \"29716aeb-e3c3-469a-8840-b775132e6b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.369025 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/29716aeb-e3c3-469a-8840-b775132e6b56-config-volume\") pod \"collect-profiles-29525580-lqghc\" (UID: \"29716aeb-e3c3-469a-8840-b775132e6b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.369906 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29716aeb-e3c3-469a-8840-b775132e6b56-config-volume\") pod \"collect-profiles-29525580-lqghc\" (UID: \"29716aeb-e3c3-469a-8840-b775132e6b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.377526 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/29716aeb-e3c3-469a-8840-b775132e6b56-secret-volume\") pod \"collect-profiles-29525580-lqghc\" (UID: \"29716aeb-e3c3-469a-8840-b775132e6b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.393242 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92pfx\" (UniqueName: \"kubernetes.io/projected/29716aeb-e3c3-469a-8840-b775132e6b56-kube-api-access-92pfx\") pod \"collect-profiles-29525580-lqghc\" (UID: \"29716aeb-e3c3-469a-8840-b775132e6b56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" Feb 19 21:00:00 crc kubenswrapper[4813]: I0219 21:00:00.533045 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" Feb 19 21:00:01 crc kubenswrapper[4813]: I0219 21:00:01.093495 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc"] Feb 19 21:00:01 crc kubenswrapper[4813]: W0219 21:00:01.098045 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29716aeb_e3c3_469a_8840_b775132e6b56.slice/crio-8cc3b4611022ed6c5c3d4a70a740965ce34be0919b7b107165ddf9c51c3053a7 WatchSource:0}: Error finding container 8cc3b4611022ed6c5c3d4a70a740965ce34be0919b7b107165ddf9c51c3053a7: Status 404 returned error can't find the container with id 8cc3b4611022ed6c5c3d4a70a740965ce34be0919b7b107165ddf9c51c3053a7 Feb 19 21:00:01 crc kubenswrapper[4813]: I0219 21:00:01.710471 4813 generic.go:334] "Generic (PLEG): container finished" podID="29716aeb-e3c3-469a-8840-b775132e6b56" containerID="733abeb8f1f0d4ee6b2077306ad838dabf90dc91eb7eb576b213d202aded02ee" exitCode=0 Feb 19 21:00:01 crc kubenswrapper[4813]: I0219 21:00:01.710597 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" event={"ID":"29716aeb-e3c3-469a-8840-b775132e6b56","Type":"ContainerDied","Data":"733abeb8f1f0d4ee6b2077306ad838dabf90dc91eb7eb576b213d202aded02ee"} Feb 19 21:00:01 crc kubenswrapper[4813]: I0219 21:00:01.710880 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" event={"ID":"29716aeb-e3c3-469a-8840-b775132e6b56","Type":"ContainerStarted","Data":"8cc3b4611022ed6c5c3d4a70a740965ce34be0919b7b107165ddf9c51c3053a7"} Feb 19 21:00:03 crc kubenswrapper[4813]: I0219 21:00:03.075422 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" Feb 19 21:00:03 crc kubenswrapper[4813]: I0219 21:00:03.125011 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/29716aeb-e3c3-469a-8840-b775132e6b56-secret-volume\") pod \"29716aeb-e3c3-469a-8840-b775132e6b56\" (UID: \"29716aeb-e3c3-469a-8840-b775132e6b56\") " Feb 19 21:00:03 crc kubenswrapper[4813]: I0219 21:00:03.125145 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92pfx\" (UniqueName: \"kubernetes.io/projected/29716aeb-e3c3-469a-8840-b775132e6b56-kube-api-access-92pfx\") pod \"29716aeb-e3c3-469a-8840-b775132e6b56\" (UID: \"29716aeb-e3c3-469a-8840-b775132e6b56\") " Feb 19 21:00:03 crc kubenswrapper[4813]: I0219 21:00:03.125171 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29716aeb-e3c3-469a-8840-b775132e6b56-config-volume\") pod \"29716aeb-e3c3-469a-8840-b775132e6b56\" (UID: \"29716aeb-e3c3-469a-8840-b775132e6b56\") " Feb 19 21:00:03 crc kubenswrapper[4813]: I0219 21:00:03.126203 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29716aeb-e3c3-469a-8840-b775132e6b56-config-volume" (OuterVolumeSpecName: "config-volume") pod "29716aeb-e3c3-469a-8840-b775132e6b56" (UID: "29716aeb-e3c3-469a-8840-b775132e6b56"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:00:03 crc kubenswrapper[4813]: I0219 21:00:03.130925 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29716aeb-e3c3-469a-8840-b775132e6b56-kube-api-access-92pfx" (OuterVolumeSpecName: "kube-api-access-92pfx") pod "29716aeb-e3c3-469a-8840-b775132e6b56" (UID: "29716aeb-e3c3-469a-8840-b775132e6b56"). 
InnerVolumeSpecName "kube-api-access-92pfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:00:03 crc kubenswrapper[4813]: I0219 21:00:03.132102 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29716aeb-e3c3-469a-8840-b775132e6b56-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "29716aeb-e3c3-469a-8840-b775132e6b56" (UID: "29716aeb-e3c3-469a-8840-b775132e6b56"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:00:03 crc kubenswrapper[4813]: I0219 21:00:03.227156 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/29716aeb-e3c3-469a-8840-b775132e6b56-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 21:00:03 crc kubenswrapper[4813]: I0219 21:00:03.227190 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92pfx\" (UniqueName: \"kubernetes.io/projected/29716aeb-e3c3-469a-8840-b775132e6b56-kube-api-access-92pfx\") on node \"crc\" DevicePath \"\"" Feb 19 21:00:03 crc kubenswrapper[4813]: I0219 21:00:03.227200 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29716aeb-e3c3-469a-8840-b775132e6b56-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 21:00:03 crc kubenswrapper[4813]: I0219 21:00:03.734337 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" event={"ID":"29716aeb-e3c3-469a-8840-b775132e6b56","Type":"ContainerDied","Data":"8cc3b4611022ed6c5c3d4a70a740965ce34be0919b7b107165ddf9c51c3053a7"} Feb 19 21:00:03 crc kubenswrapper[4813]: I0219 21:00:03.734622 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cc3b4611022ed6c5c3d4a70a740965ce34be0919b7b107165ddf9c51c3053a7" Feb 19 21:00:03 crc kubenswrapper[4813]: I0219 21:00:03.734681 4813 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-lqghc" Feb 19 21:00:04 crc kubenswrapper[4813]: I0219 21:00:04.148868 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h"] Feb 19 21:00:04 crc kubenswrapper[4813]: I0219 21:00:04.157636 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525535-5l55h"] Feb 19 21:00:05 crc kubenswrapper[4813]: I0219 21:00:05.485604 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02122ced-659c-46e8-84ad-a54a7db7347a" path="/var/lib/kubelet/pods/02122ced-659c-46e8-84ad-a54a7db7347a/volumes" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.190516 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29525581-2dq5v"] Feb 19 21:01:00 crc kubenswrapper[4813]: E0219 21:01:00.192079 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29716aeb-e3c3-469a-8840-b775132e6b56" containerName="collect-profiles" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.192104 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="29716aeb-e3c3-469a-8840-b775132e6b56" containerName="collect-profiles" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.192518 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="29716aeb-e3c3-469a-8840-b775132e6b56" containerName="collect-profiles" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.194326 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.204977 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525581-2dq5v"] Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.320644 4813 scope.go:117] "RemoveContainer" containerID="f3105cf6ae63ca9a510583ce6c46a05427dbc48f4614720f3cdb3d85a1ae1034" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.345516 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-combined-ca-bundle\") pod \"keystone-cron-29525581-2dq5v\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") " pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.345604 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-config-data\") pod \"keystone-cron-29525581-2dq5v\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") " pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.346026 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-fernet-keys\") pod \"keystone-cron-29525581-2dq5v\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") " pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.346236 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdkfd\" (UniqueName: \"kubernetes.io/projected/f8013965-431f-4d6b-8a69-d5f2025fbe9f-kube-api-access-fdkfd\") pod \"keystone-cron-29525581-2dq5v\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") 
" pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.449209 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-config-data\") pod \"keystone-cron-29525581-2dq5v\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") " pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.449413 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-fernet-keys\") pod \"keystone-cron-29525581-2dq5v\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") " pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.449527 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdkfd\" (UniqueName: \"kubernetes.io/projected/f8013965-431f-4d6b-8a69-d5f2025fbe9f-kube-api-access-fdkfd\") pod \"keystone-cron-29525581-2dq5v\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") " pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.449918 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-combined-ca-bundle\") pod \"keystone-cron-29525581-2dq5v\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") " pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.460400 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-combined-ca-bundle\") pod \"keystone-cron-29525581-2dq5v\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") " pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 
21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.460836 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-config-data\") pod \"keystone-cron-29525581-2dq5v\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") " pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.460919 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-fernet-keys\") pod \"keystone-cron-29525581-2dq5v\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") " pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.471752 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdkfd\" (UniqueName: \"kubernetes.io/projected/f8013965-431f-4d6b-8a69-d5f2025fbe9f-kube-api-access-fdkfd\") pod \"keystone-cron-29525581-2dq5v\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") " pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 21:01:00 crc kubenswrapper[4813]: I0219 21:01:00.520802 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 21:01:01 crc kubenswrapper[4813]: I0219 21:01:01.044694 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525581-2dq5v"] Feb 19 21:01:01 crc kubenswrapper[4813]: I0219 21:01:01.468383 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525581-2dq5v" event={"ID":"f8013965-431f-4d6b-8a69-d5f2025fbe9f","Type":"ContainerStarted","Data":"c1c47e53efa86d4819d7fed09cdda77a0b371ee63442fe24b115f2cac6c7eb14"} Feb 19 21:01:01 crc kubenswrapper[4813]: I0219 21:01:01.468450 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525581-2dq5v" event={"ID":"f8013965-431f-4d6b-8a69-d5f2025fbe9f","Type":"ContainerStarted","Data":"86ccddff440f31ba27fd092a0d1e8598dd6bccd07a66f07b139ff3b3220d9603"} Feb 19 21:01:01 crc kubenswrapper[4813]: I0219 21:01:01.505546 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29525581-2dq5v" podStartSLOduration=1.5054705940000002 podStartE2EDuration="1.505470594s" podCreationTimestamp="2026-02-19 21:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:01.497379075 +0000 UTC m=+9080.722819676" watchObservedRunningTime="2026-02-19 21:01:01.505470594 +0000 UTC m=+9080.730911175" Feb 19 21:01:05 crc kubenswrapper[4813]: I0219 21:01:05.527452 4813 generic.go:334] "Generic (PLEG): container finished" podID="f8013965-431f-4d6b-8a69-d5f2025fbe9f" containerID="c1c47e53efa86d4819d7fed09cdda77a0b371ee63442fe24b115f2cac6c7eb14" exitCode=0 Feb 19 21:01:05 crc kubenswrapper[4813]: I0219 21:01:05.527831 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525581-2dq5v" 
event={"ID":"f8013965-431f-4d6b-8a69-d5f2025fbe9f","Type":"ContainerDied","Data":"c1c47e53efa86d4819d7fed09cdda77a0b371ee63442fe24b115f2cac6c7eb14"} Feb 19 21:01:06 crc kubenswrapper[4813]: I0219 21:01:06.203108 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7fl4b"] Feb 19 21:01:06 crc kubenswrapper[4813]: I0219 21:01:06.206834 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:06 crc kubenswrapper[4813]: I0219 21:01:06.221297 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fl4b"] Feb 19 21:01:06 crc kubenswrapper[4813]: I0219 21:01:06.385582 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dcdg\" (UniqueName: \"kubernetes.io/projected/989a278c-e7b7-477c-91b3-565ac2c291ce-kube-api-access-6dcdg\") pod \"redhat-marketplace-7fl4b\" (UID: \"989a278c-e7b7-477c-91b3-565ac2c291ce\") " pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:06 crc kubenswrapper[4813]: I0219 21:01:06.386034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989a278c-e7b7-477c-91b3-565ac2c291ce-catalog-content\") pod \"redhat-marketplace-7fl4b\" (UID: \"989a278c-e7b7-477c-91b3-565ac2c291ce\") " pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:06 crc kubenswrapper[4813]: I0219 21:01:06.386115 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989a278c-e7b7-477c-91b3-565ac2c291ce-utilities\") pod \"redhat-marketplace-7fl4b\" (UID: \"989a278c-e7b7-477c-91b3-565ac2c291ce\") " pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:06 crc kubenswrapper[4813]: I0219 21:01:06.488549 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dcdg\" (UniqueName: \"kubernetes.io/projected/989a278c-e7b7-477c-91b3-565ac2c291ce-kube-api-access-6dcdg\") pod \"redhat-marketplace-7fl4b\" (UID: \"989a278c-e7b7-477c-91b3-565ac2c291ce\") " pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:06 crc kubenswrapper[4813]: I0219 21:01:06.488636 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989a278c-e7b7-477c-91b3-565ac2c291ce-catalog-content\") pod \"redhat-marketplace-7fl4b\" (UID: \"989a278c-e7b7-477c-91b3-565ac2c291ce\") " pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:06 crc kubenswrapper[4813]: I0219 21:01:06.488696 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989a278c-e7b7-477c-91b3-565ac2c291ce-utilities\") pod \"redhat-marketplace-7fl4b\" (UID: \"989a278c-e7b7-477c-91b3-565ac2c291ce\") " pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:06 crc kubenswrapper[4813]: I0219 21:01:06.489370 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989a278c-e7b7-477c-91b3-565ac2c291ce-catalog-content\") pod \"redhat-marketplace-7fl4b\" (UID: \"989a278c-e7b7-477c-91b3-565ac2c291ce\") " pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:06 crc kubenswrapper[4813]: I0219 21:01:06.489639 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989a278c-e7b7-477c-91b3-565ac2c291ce-utilities\") pod \"redhat-marketplace-7fl4b\" (UID: \"989a278c-e7b7-477c-91b3-565ac2c291ce\") " pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:06 crc kubenswrapper[4813]: I0219 21:01:06.744884 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6dcdg\" (UniqueName: \"kubernetes.io/projected/989a278c-e7b7-477c-91b3-565ac2c291ce-kube-api-access-6dcdg\") pod \"redhat-marketplace-7fl4b\" (UID: \"989a278c-e7b7-477c-91b3-565ac2c291ce\") " pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:06 crc kubenswrapper[4813]: I0219 21:01:06.834731 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:06 crc kubenswrapper[4813]: I0219 21:01:06.948063 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:06.999885 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-fernet-keys\") pod \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") " Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.000172 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-combined-ca-bundle\") pod \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") " Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.000242 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdkfd\" (UniqueName: \"kubernetes.io/projected/f8013965-431f-4d6b-8a69-d5f2025fbe9f-kube-api-access-fdkfd\") pod \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") " Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.000299 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-config-data\") pod 
\"f8013965-431f-4d6b-8a69-d5f2025fbe9f\" (UID: \"f8013965-431f-4d6b-8a69-d5f2025fbe9f\") " Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.008388 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f8013965-431f-4d6b-8a69-d5f2025fbe9f" (UID: "f8013965-431f-4d6b-8a69-d5f2025fbe9f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.008518 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8013965-431f-4d6b-8a69-d5f2025fbe9f-kube-api-access-fdkfd" (OuterVolumeSpecName: "kube-api-access-fdkfd") pod "f8013965-431f-4d6b-8a69-d5f2025fbe9f" (UID: "f8013965-431f-4d6b-8a69-d5f2025fbe9f"). InnerVolumeSpecName "kube-api-access-fdkfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.027075 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8013965-431f-4d6b-8a69-d5f2025fbe9f" (UID: "f8013965-431f-4d6b-8a69-d5f2025fbe9f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.102739 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.102776 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdkfd\" (UniqueName: \"kubernetes.io/projected/f8013965-431f-4d6b-8a69-d5f2025fbe9f-kube-api-access-fdkfd\") on node \"crc\" DevicePath \"\"" Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.102839 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.150554 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-config-data" (OuterVolumeSpecName: "config-data") pod "f8013965-431f-4d6b-8a69-d5f2025fbe9f" (UID: "f8013965-431f-4d6b-8a69-d5f2025fbe9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.205575 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8013965-431f-4d6b-8a69-d5f2025fbe9f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:01:07 crc kubenswrapper[4813]: W0219 21:01:07.278026 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod989a278c_e7b7_477c_91b3_565ac2c291ce.slice/crio-22a89529308b9482d2fa4daeb9112d239b4cd699b66da074484f6bbb8d0638c7 WatchSource:0}: Error finding container 22a89529308b9482d2fa4daeb9112d239b4cd699b66da074484f6bbb8d0638c7: Status 404 returned error can't find the container with id 22a89529308b9482d2fa4daeb9112d239b4cd699b66da074484f6bbb8d0638c7 Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.280224 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fl4b"] Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.552066 4813 generic.go:334] "Generic (PLEG): container finished" podID="989a278c-e7b7-477c-91b3-565ac2c291ce" containerID="ef4b15cb76ccd2b19ee9f1f5f026fd21928a8efc49169240e05af96dde21e7bf" exitCode=0 Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.552146 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fl4b" event={"ID":"989a278c-e7b7-477c-91b3-565ac2c291ce","Type":"ContainerDied","Data":"ef4b15cb76ccd2b19ee9f1f5f026fd21928a8efc49169240e05af96dde21e7bf"} Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.552188 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fl4b" event={"ID":"989a278c-e7b7-477c-91b3-565ac2c291ce","Type":"ContainerStarted","Data":"22a89529308b9482d2fa4daeb9112d239b4cd699b66da074484f6bbb8d0638c7"} Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.554556 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525581-2dq5v" event={"ID":"f8013965-431f-4d6b-8a69-d5f2025fbe9f","Type":"ContainerDied","Data":"86ccddff440f31ba27fd092a0d1e8598dd6bccd07a66f07b139ff3b3220d9603"} Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.554604 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86ccddff440f31ba27fd092a0d1e8598dd6bccd07a66f07b139ff3b3220d9603" Feb 19 21:01:07 crc kubenswrapper[4813]: I0219 21:01:07.554682 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525581-2dq5v" Feb 19 21:01:08 crc kubenswrapper[4813]: I0219 21:01:08.572187 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fl4b" event={"ID":"989a278c-e7b7-477c-91b3-565ac2c291ce","Type":"ContainerStarted","Data":"5f9a7c5e398209bee62762f2502b55715f3e38be9c992d71916a8e3ca2c9ec53"} Feb 19 21:01:09 crc kubenswrapper[4813]: I0219 21:01:09.622574 4813 generic.go:334] "Generic (PLEG): container finished" podID="989a278c-e7b7-477c-91b3-565ac2c291ce" containerID="5f9a7c5e398209bee62762f2502b55715f3e38be9c992d71916a8e3ca2c9ec53" exitCode=0 Feb 19 21:01:09 crc kubenswrapper[4813]: I0219 21:01:09.623056 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fl4b" event={"ID":"989a278c-e7b7-477c-91b3-565ac2c291ce","Type":"ContainerDied","Data":"5f9a7c5e398209bee62762f2502b55715f3e38be9c992d71916a8e3ca2c9ec53"} Feb 19 21:01:10 crc kubenswrapper[4813]: I0219 21:01:10.638262 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fl4b" event={"ID":"989a278c-e7b7-477c-91b3-565ac2c291ce","Type":"ContainerStarted","Data":"2ea80f89d0bc9f577efc347823e37899ee08ab79499eb3d79997202ff87f20c9"} Feb 19 21:01:10 crc kubenswrapper[4813]: I0219 21:01:10.658607 4813 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-marketplace-7fl4b" podStartSLOduration=2.212733887 podStartE2EDuration="4.658584304s" podCreationTimestamp="2026-02-19 21:01:06 +0000 UTC" firstStartedPulling="2026-02-19 21:01:07.555476936 +0000 UTC m=+9086.780917507" lastFinishedPulling="2026-02-19 21:01:10.001327373 +0000 UTC m=+9089.226767924" observedRunningTime="2026-02-19 21:01:10.657587214 +0000 UTC m=+9089.883027785" watchObservedRunningTime="2026-02-19 21:01:10.658584304 +0000 UTC m=+9089.884024855" Feb 19 21:01:16 crc kubenswrapper[4813]: I0219 21:01:16.835713 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:16 crc kubenswrapper[4813]: I0219 21:01:16.836476 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:16 crc kubenswrapper[4813]: I0219 21:01:16.926528 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:17 crc kubenswrapper[4813]: I0219 21:01:17.825165 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:17 crc kubenswrapper[4813]: I0219 21:01:17.884312 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fl4b"] Feb 19 21:01:19 crc kubenswrapper[4813]: I0219 21:01:19.771574 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7fl4b" podUID="989a278c-e7b7-477c-91b3-565ac2c291ce" containerName="registry-server" containerID="cri-o://2ea80f89d0bc9f577efc347823e37899ee08ab79499eb3d79997202ff87f20c9" gracePeriod=2 Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.322212 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.347982 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dcdg\" (UniqueName: \"kubernetes.io/projected/989a278c-e7b7-477c-91b3-565ac2c291ce-kube-api-access-6dcdg\") pod \"989a278c-e7b7-477c-91b3-565ac2c291ce\" (UID: \"989a278c-e7b7-477c-91b3-565ac2c291ce\") " Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.348111 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989a278c-e7b7-477c-91b3-565ac2c291ce-catalog-content\") pod \"989a278c-e7b7-477c-91b3-565ac2c291ce\" (UID: \"989a278c-e7b7-477c-91b3-565ac2c291ce\") " Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.348301 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989a278c-e7b7-477c-91b3-565ac2c291ce-utilities\") pod \"989a278c-e7b7-477c-91b3-565ac2c291ce\" (UID: \"989a278c-e7b7-477c-91b3-565ac2c291ce\") " Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.349300 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/989a278c-e7b7-477c-91b3-565ac2c291ce-utilities" (OuterVolumeSpecName: "utilities") pod "989a278c-e7b7-477c-91b3-565ac2c291ce" (UID: "989a278c-e7b7-477c-91b3-565ac2c291ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.356759 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/989a278c-e7b7-477c-91b3-565ac2c291ce-kube-api-access-6dcdg" (OuterVolumeSpecName: "kube-api-access-6dcdg") pod "989a278c-e7b7-477c-91b3-565ac2c291ce" (UID: "989a278c-e7b7-477c-91b3-565ac2c291ce"). InnerVolumeSpecName "kube-api-access-6dcdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.405327 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/989a278c-e7b7-477c-91b3-565ac2c291ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "989a278c-e7b7-477c-91b3-565ac2c291ce" (UID: "989a278c-e7b7-477c-91b3-565ac2c291ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.450680 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dcdg\" (UniqueName: \"kubernetes.io/projected/989a278c-e7b7-477c-91b3-565ac2c291ce-kube-api-access-6dcdg\") on node \"crc\" DevicePath \"\"" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.450872 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989a278c-e7b7-477c-91b3-565ac2c291ce-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.450983 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989a278c-e7b7-477c-91b3-565ac2c291ce-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.786746 4813 generic.go:334] "Generic (PLEG): container finished" podID="989a278c-e7b7-477c-91b3-565ac2c291ce" containerID="2ea80f89d0bc9f577efc347823e37899ee08ab79499eb3d79997202ff87f20c9" exitCode=0 Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.786801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fl4b" event={"ID":"989a278c-e7b7-477c-91b3-565ac2c291ce","Type":"ContainerDied","Data":"2ea80f89d0bc9f577efc347823e37899ee08ab79499eb3d79997202ff87f20c9"} Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.786924 4813 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fl4b" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.787252 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fl4b" event={"ID":"989a278c-e7b7-477c-91b3-565ac2c291ce","Type":"ContainerDied","Data":"22a89529308b9482d2fa4daeb9112d239b4cd699b66da074484f6bbb8d0638c7"} Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.787283 4813 scope.go:117] "RemoveContainer" containerID="2ea80f89d0bc9f577efc347823e37899ee08ab79499eb3d79997202ff87f20c9" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.830854 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fl4b"] Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.835864 4813 scope.go:117] "RemoveContainer" containerID="5f9a7c5e398209bee62762f2502b55715f3e38be9c992d71916a8e3ca2c9ec53" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.845143 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fl4b"] Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.873249 4813 scope.go:117] "RemoveContainer" containerID="ef4b15cb76ccd2b19ee9f1f5f026fd21928a8efc49169240e05af96dde21e7bf" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.923754 4813 scope.go:117] "RemoveContainer" containerID="2ea80f89d0bc9f577efc347823e37899ee08ab79499eb3d79997202ff87f20c9" Feb 19 21:01:20 crc kubenswrapper[4813]: E0219 21:01:20.924295 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ea80f89d0bc9f577efc347823e37899ee08ab79499eb3d79997202ff87f20c9\": container with ID starting with 2ea80f89d0bc9f577efc347823e37899ee08ab79499eb3d79997202ff87f20c9 not found: ID does not exist" containerID="2ea80f89d0bc9f577efc347823e37899ee08ab79499eb3d79997202ff87f20c9" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.924348 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ea80f89d0bc9f577efc347823e37899ee08ab79499eb3d79997202ff87f20c9"} err="failed to get container status \"2ea80f89d0bc9f577efc347823e37899ee08ab79499eb3d79997202ff87f20c9\": rpc error: code = NotFound desc = could not find container \"2ea80f89d0bc9f577efc347823e37899ee08ab79499eb3d79997202ff87f20c9\": container with ID starting with 2ea80f89d0bc9f577efc347823e37899ee08ab79499eb3d79997202ff87f20c9 not found: ID does not exist" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.924382 4813 scope.go:117] "RemoveContainer" containerID="5f9a7c5e398209bee62762f2502b55715f3e38be9c992d71916a8e3ca2c9ec53" Feb 19 21:01:20 crc kubenswrapper[4813]: E0219 21:01:20.924709 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f9a7c5e398209bee62762f2502b55715f3e38be9c992d71916a8e3ca2c9ec53\": container with ID starting with 5f9a7c5e398209bee62762f2502b55715f3e38be9c992d71916a8e3ca2c9ec53 not found: ID does not exist" containerID="5f9a7c5e398209bee62762f2502b55715f3e38be9c992d71916a8e3ca2c9ec53" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.924789 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f9a7c5e398209bee62762f2502b55715f3e38be9c992d71916a8e3ca2c9ec53"} err="failed to get container status \"5f9a7c5e398209bee62762f2502b55715f3e38be9c992d71916a8e3ca2c9ec53\": rpc error: code = NotFound desc = could not find container \"5f9a7c5e398209bee62762f2502b55715f3e38be9c992d71916a8e3ca2c9ec53\": container with ID starting with 5f9a7c5e398209bee62762f2502b55715f3e38be9c992d71916a8e3ca2c9ec53 not found: ID does not exist" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.924855 4813 scope.go:117] "RemoveContainer" containerID="ef4b15cb76ccd2b19ee9f1f5f026fd21928a8efc49169240e05af96dde21e7bf" Feb 19 21:01:20 crc kubenswrapper[4813]: E0219 
21:01:20.925265 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef4b15cb76ccd2b19ee9f1f5f026fd21928a8efc49169240e05af96dde21e7bf\": container with ID starting with ef4b15cb76ccd2b19ee9f1f5f026fd21928a8efc49169240e05af96dde21e7bf not found: ID does not exist" containerID="ef4b15cb76ccd2b19ee9f1f5f026fd21928a8efc49169240e05af96dde21e7bf" Feb 19 21:01:20 crc kubenswrapper[4813]: I0219 21:01:20.925311 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4b15cb76ccd2b19ee9f1f5f026fd21928a8efc49169240e05af96dde21e7bf"} err="failed to get container status \"ef4b15cb76ccd2b19ee9f1f5f026fd21928a8efc49169240e05af96dde21e7bf\": rpc error: code = NotFound desc = could not find container \"ef4b15cb76ccd2b19ee9f1f5f026fd21928a8efc49169240e05af96dde21e7bf\": container with ID starting with ef4b15cb76ccd2b19ee9f1f5f026fd21928a8efc49169240e05af96dde21e7bf not found: ID does not exist" Feb 19 21:01:21 crc kubenswrapper[4813]: I0219 21:01:21.495868 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="989a278c-e7b7-477c-91b3-565ac2c291ce" path="/var/lib/kubelet/pods/989a278c-e7b7-477c-91b3-565ac2c291ce/volumes" Feb 19 21:01:30 crc kubenswrapper[4813]: I0219 21:01:30.329469 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:01:30 crc kubenswrapper[4813]: I0219 21:01:30.330156 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 21:02:00 crc kubenswrapper[4813]: I0219 21:02:00.329767 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:02:00 crc kubenswrapper[4813]: I0219 21:02:00.330498 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:02:30 crc kubenswrapper[4813]: I0219 21:02:30.330279 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:02:30 crc kubenswrapper[4813]: I0219 21:02:30.330991 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:02:30 crc kubenswrapper[4813]: I0219 21:02:30.331057 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 21:02:30 crc kubenswrapper[4813]: I0219 21:02:30.332253 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb"} 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:02:30 crc kubenswrapper[4813]: I0219 21:02:30.332349 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" gracePeriod=600 Feb 19 21:02:30 crc kubenswrapper[4813]: E0219 21:02:30.676084 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:02:31 crc kubenswrapper[4813]: I0219 21:02:31.407372 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" exitCode=0 Feb 19 21:02:31 crc kubenswrapper[4813]: I0219 21:02:31.407463 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb"} Feb 19 21:02:31 crc kubenswrapper[4813]: I0219 21:02:31.407578 4813 scope.go:117] "RemoveContainer" containerID="643ad5a27899764fe60b8450a47154aec9c0cdecfadc2e3fdb35dc34121d971d" Feb 19 21:02:31 crc kubenswrapper[4813]: I0219 21:02:31.408572 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 
19 21:02:31 crc kubenswrapper[4813]: E0219 21:02:31.409226 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:02:46 crc kubenswrapper[4813]: I0219 21:02:46.471873 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:02:46 crc kubenswrapper[4813]: E0219 21:02:46.473181 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:03:00 crc kubenswrapper[4813]: I0219 21:03:00.471825 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:03:00 crc kubenswrapper[4813]: E0219 21:03:00.472891 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:03:13 crc kubenswrapper[4813]: I0219 21:03:13.472641 4813 scope.go:117] "RemoveContainer" 
containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:03:13 crc kubenswrapper[4813]: E0219 21:03:13.473807 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:03:26 crc kubenswrapper[4813]: I0219 21:03:26.471802 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:03:26 crc kubenswrapper[4813]: E0219 21:03:26.472696 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:03:38 crc kubenswrapper[4813]: I0219 21:03:38.472289 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:03:38 crc kubenswrapper[4813]: E0219 21:03:38.473327 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:03:52 crc kubenswrapper[4813]: I0219 21:03:52.473287 4813 scope.go:117] 
"RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:03:52 crc kubenswrapper[4813]: E0219 21:03:52.474385 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:04:05 crc kubenswrapper[4813]: I0219 21:04:05.471798 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:04:05 crc kubenswrapper[4813]: E0219 21:04:05.473061 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:04:16 crc kubenswrapper[4813]: I0219 21:04:16.472444 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:04:16 crc kubenswrapper[4813]: E0219 21:04:16.473677 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:04:29 crc kubenswrapper[4813]: I0219 21:04:29.471441 
4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:04:29 crc kubenswrapper[4813]: E0219 21:04:29.472377 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:04:44 crc kubenswrapper[4813]: I0219 21:04:44.472158 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:04:44 crc kubenswrapper[4813]: E0219 21:04:44.474213 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:04:59 crc kubenswrapper[4813]: I0219 21:04:59.471832 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:04:59 crc kubenswrapper[4813]: E0219 21:04:59.472747 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 
21:05:04.739630 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xwtng"] Feb 19 21:05:04 crc kubenswrapper[4813]: E0219 21:05:04.741138 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="989a278c-e7b7-477c-91b3-565ac2c291ce" containerName="registry-server" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.741161 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="989a278c-e7b7-477c-91b3-565ac2c291ce" containerName="registry-server" Feb 19 21:05:04 crc kubenswrapper[4813]: E0219 21:05:04.741212 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="989a278c-e7b7-477c-91b3-565ac2c291ce" containerName="extract-utilities" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.741225 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="989a278c-e7b7-477c-91b3-565ac2c291ce" containerName="extract-utilities" Feb 19 21:05:04 crc kubenswrapper[4813]: E0219 21:05:04.741266 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8013965-431f-4d6b-8a69-d5f2025fbe9f" containerName="keystone-cron" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.741279 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8013965-431f-4d6b-8a69-d5f2025fbe9f" containerName="keystone-cron" Feb 19 21:05:04 crc kubenswrapper[4813]: E0219 21:05:04.741319 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="989a278c-e7b7-477c-91b3-565ac2c291ce" containerName="extract-content" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.741330 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="989a278c-e7b7-477c-91b3-565ac2c291ce" containerName="extract-content" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.741669 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="989a278c-e7b7-477c-91b3-565ac2c291ce" containerName="registry-server" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.741713 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f8013965-431f-4d6b-8a69-d5f2025fbe9f" containerName="keystone-cron" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.744697 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.764725 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xwtng"] Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.857674 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9f913c2-dc79-4b55-b5ee-c271bdd31960-utilities\") pod \"certified-operators-xwtng\" (UID: \"b9f913c2-dc79-4b55-b5ee-c271bdd31960\") " pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.857749 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9f913c2-dc79-4b55-b5ee-c271bdd31960-catalog-content\") pod \"certified-operators-xwtng\" (UID: \"b9f913c2-dc79-4b55-b5ee-c271bdd31960\") " pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.858245 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmh67\" (UniqueName: \"kubernetes.io/projected/b9f913c2-dc79-4b55-b5ee-c271bdd31960-kube-api-access-bmh67\") pod \"certified-operators-xwtng\" (UID: \"b9f913c2-dc79-4b55-b5ee-c271bdd31960\") " pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.960567 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmh67\" (UniqueName: 
\"kubernetes.io/projected/b9f913c2-dc79-4b55-b5ee-c271bdd31960-kube-api-access-bmh67\") pod \"certified-operators-xwtng\" (UID: \"b9f913c2-dc79-4b55-b5ee-c271bdd31960\") " pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.960729 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9f913c2-dc79-4b55-b5ee-c271bdd31960-utilities\") pod \"certified-operators-xwtng\" (UID: \"b9f913c2-dc79-4b55-b5ee-c271bdd31960\") " pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.960778 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9f913c2-dc79-4b55-b5ee-c271bdd31960-catalog-content\") pod \"certified-operators-xwtng\" (UID: \"b9f913c2-dc79-4b55-b5ee-c271bdd31960\") " pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.961381 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9f913c2-dc79-4b55-b5ee-c271bdd31960-catalog-content\") pod \"certified-operators-xwtng\" (UID: \"b9f913c2-dc79-4b55-b5ee-c271bdd31960\") " pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:04 crc kubenswrapper[4813]: I0219 21:05:04.961451 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9f913c2-dc79-4b55-b5ee-c271bdd31960-utilities\") pod \"certified-operators-xwtng\" (UID: \"b9f913c2-dc79-4b55-b5ee-c271bdd31960\") " pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:05 crc kubenswrapper[4813]: I0219 21:05:05.047795 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmh67\" (UniqueName: 
\"kubernetes.io/projected/b9f913c2-dc79-4b55-b5ee-c271bdd31960-kube-api-access-bmh67\") pod \"certified-operators-xwtng\" (UID: \"b9f913c2-dc79-4b55-b5ee-c271bdd31960\") " pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:05 crc kubenswrapper[4813]: I0219 21:05:05.087197 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:05 crc kubenswrapper[4813]: I0219 21:05:05.639213 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xwtng"] Feb 19 21:05:06 crc kubenswrapper[4813]: I0219 21:05:06.400145 4813 generic.go:334] "Generic (PLEG): container finished" podID="b9f913c2-dc79-4b55-b5ee-c271bdd31960" containerID="ba2ddb921d1ab10034065f8e15460f69827ebffdd615b63b2c0b2baec35b06f1" exitCode=0 Feb 19 21:05:06 crc kubenswrapper[4813]: I0219 21:05:06.400334 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwtng" event={"ID":"b9f913c2-dc79-4b55-b5ee-c271bdd31960","Type":"ContainerDied","Data":"ba2ddb921d1ab10034065f8e15460f69827ebffdd615b63b2c0b2baec35b06f1"} Feb 19 21:05:06 crc kubenswrapper[4813]: I0219 21:05:06.400607 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwtng" event={"ID":"b9f913c2-dc79-4b55-b5ee-c271bdd31960","Type":"ContainerStarted","Data":"c8027620bc06bba92ef3cea5b55c483e8349c9ed97c5aafedca9def0d181e564"} Feb 19 21:05:06 crc kubenswrapper[4813]: I0219 21:05:06.404163 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 21:05:08 crc kubenswrapper[4813]: I0219 21:05:08.422360 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwtng" event={"ID":"b9f913c2-dc79-4b55-b5ee-c271bdd31960","Type":"ContainerStarted","Data":"a155454f16103bb855521cc28154485f291e3b312075a69e1293cc06d9a531b2"} Feb 19 21:05:10 
crc kubenswrapper[4813]: I0219 21:05:10.452843 4813 generic.go:334] "Generic (PLEG): container finished" podID="b9f913c2-dc79-4b55-b5ee-c271bdd31960" containerID="a155454f16103bb855521cc28154485f291e3b312075a69e1293cc06d9a531b2" exitCode=0 Feb 19 21:05:10 crc kubenswrapper[4813]: I0219 21:05:10.453333 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwtng" event={"ID":"b9f913c2-dc79-4b55-b5ee-c271bdd31960","Type":"ContainerDied","Data":"a155454f16103bb855521cc28154485f291e3b312075a69e1293cc06d9a531b2"} Feb 19 21:05:10 crc kubenswrapper[4813]: I0219 21:05:10.474119 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:05:10 crc kubenswrapper[4813]: E0219 21:05:10.474728 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:05:11 crc kubenswrapper[4813]: I0219 21:05:11.503073 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwtng" event={"ID":"b9f913c2-dc79-4b55-b5ee-c271bdd31960","Type":"ContainerStarted","Data":"e9e707aa59ac6b04a608e9faaa56d8c6bb42d5ac873184d3ca7d403efa7168a5"} Feb 19 21:05:11 crc kubenswrapper[4813]: I0219 21:05:11.528875 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xwtng" podStartSLOduration=3.040856153 podStartE2EDuration="7.528856523s" podCreationTimestamp="2026-02-19 21:05:04 +0000 UTC" firstStartedPulling="2026-02-19 21:05:06.403816606 +0000 UTC m=+9325.629257147" lastFinishedPulling="2026-02-19 21:05:10.891816966 
+0000 UTC m=+9330.117257517" observedRunningTime="2026-02-19 21:05:11.520397732 +0000 UTC m=+9330.745838293" watchObservedRunningTime="2026-02-19 21:05:11.528856523 +0000 UTC m=+9330.754297074" Feb 19 21:05:15 crc kubenswrapper[4813]: I0219 21:05:15.087560 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:15 crc kubenswrapper[4813]: I0219 21:05:15.090150 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:15 crc kubenswrapper[4813]: I0219 21:05:15.175357 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:16 crc kubenswrapper[4813]: I0219 21:05:16.621904 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:16 crc kubenswrapper[4813]: I0219 21:05:16.706352 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xwtng"] Feb 19 21:05:18 crc kubenswrapper[4813]: I0219 21:05:18.576752 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xwtng" podUID="b9f913c2-dc79-4b55-b5ee-c271bdd31960" containerName="registry-server" containerID="cri-o://e9e707aa59ac6b04a608e9faaa56d8c6bb42d5ac873184d3ca7d403efa7168a5" gracePeriod=2 Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.475947 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.593287 4813 generic.go:334] "Generic (PLEG): container finished" podID="b9f913c2-dc79-4b55-b5ee-c271bdd31960" containerID="e9e707aa59ac6b04a608e9faaa56d8c6bb42d5ac873184d3ca7d403efa7168a5" exitCode=0 Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.593331 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwtng" event={"ID":"b9f913c2-dc79-4b55-b5ee-c271bdd31960","Type":"ContainerDied","Data":"e9e707aa59ac6b04a608e9faaa56d8c6bb42d5ac873184d3ca7d403efa7168a5"} Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.593359 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwtng" event={"ID":"b9f913c2-dc79-4b55-b5ee-c271bdd31960","Type":"ContainerDied","Data":"c8027620bc06bba92ef3cea5b55c483e8349c9ed97c5aafedca9def0d181e564"} Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.593378 4813 scope.go:117] "RemoveContainer" containerID="e9e707aa59ac6b04a608e9faaa56d8c6bb42d5ac873184d3ca7d403efa7168a5" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.594093 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xwtng" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.606189 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9f913c2-dc79-4b55-b5ee-c271bdd31960-utilities\") pod \"b9f913c2-dc79-4b55-b5ee-c271bdd31960\" (UID: \"b9f913c2-dc79-4b55-b5ee-c271bdd31960\") " Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.606359 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9f913c2-dc79-4b55-b5ee-c271bdd31960-catalog-content\") pod \"b9f913c2-dc79-4b55-b5ee-c271bdd31960\" (UID: \"b9f913c2-dc79-4b55-b5ee-c271bdd31960\") " Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.606467 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmh67\" (UniqueName: \"kubernetes.io/projected/b9f913c2-dc79-4b55-b5ee-c271bdd31960-kube-api-access-bmh67\") pod \"b9f913c2-dc79-4b55-b5ee-c271bdd31960\" (UID: \"b9f913c2-dc79-4b55-b5ee-c271bdd31960\") " Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.606989 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9f913c2-dc79-4b55-b5ee-c271bdd31960-utilities" (OuterVolumeSpecName: "utilities") pod "b9f913c2-dc79-4b55-b5ee-c271bdd31960" (UID: "b9f913c2-dc79-4b55-b5ee-c271bdd31960"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.607211 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9f913c2-dc79-4b55-b5ee-c271bdd31960-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.614738 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f913c2-dc79-4b55-b5ee-c271bdd31960-kube-api-access-bmh67" (OuterVolumeSpecName: "kube-api-access-bmh67") pod "b9f913c2-dc79-4b55-b5ee-c271bdd31960" (UID: "b9f913c2-dc79-4b55-b5ee-c271bdd31960"). InnerVolumeSpecName "kube-api-access-bmh67". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.624204 4813 scope.go:117] "RemoveContainer" containerID="a155454f16103bb855521cc28154485f291e3b312075a69e1293cc06d9a531b2" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.665201 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9f913c2-dc79-4b55-b5ee-c271bdd31960-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9f913c2-dc79-4b55-b5ee-c271bdd31960" (UID: "b9f913c2-dc79-4b55-b5ee-c271bdd31960"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.686883 4813 scope.go:117] "RemoveContainer" containerID="ba2ddb921d1ab10034065f8e15460f69827ebffdd615b63b2c0b2baec35b06f1" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.710153 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9f913c2-dc79-4b55-b5ee-c271bdd31960-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.710202 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmh67\" (UniqueName: \"kubernetes.io/projected/b9f913c2-dc79-4b55-b5ee-c271bdd31960-kube-api-access-bmh67\") on node \"crc\" DevicePath \"\"" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.731523 4813 scope.go:117] "RemoveContainer" containerID="e9e707aa59ac6b04a608e9faaa56d8c6bb42d5ac873184d3ca7d403efa7168a5" Feb 19 21:05:19 crc kubenswrapper[4813]: E0219 21:05:19.732325 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9e707aa59ac6b04a608e9faaa56d8c6bb42d5ac873184d3ca7d403efa7168a5\": container with ID starting with e9e707aa59ac6b04a608e9faaa56d8c6bb42d5ac873184d3ca7d403efa7168a5 not found: ID does not exist" containerID="e9e707aa59ac6b04a608e9faaa56d8c6bb42d5ac873184d3ca7d403efa7168a5" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.732396 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9e707aa59ac6b04a608e9faaa56d8c6bb42d5ac873184d3ca7d403efa7168a5"} err="failed to get container status \"e9e707aa59ac6b04a608e9faaa56d8c6bb42d5ac873184d3ca7d403efa7168a5\": rpc error: code = NotFound desc = could not find container \"e9e707aa59ac6b04a608e9faaa56d8c6bb42d5ac873184d3ca7d403efa7168a5\": container with ID starting with e9e707aa59ac6b04a608e9faaa56d8c6bb42d5ac873184d3ca7d403efa7168a5 not 
found: ID does not exist" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.732439 4813 scope.go:117] "RemoveContainer" containerID="a155454f16103bb855521cc28154485f291e3b312075a69e1293cc06d9a531b2" Feb 19 21:05:19 crc kubenswrapper[4813]: E0219 21:05:19.733218 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a155454f16103bb855521cc28154485f291e3b312075a69e1293cc06d9a531b2\": container with ID starting with a155454f16103bb855521cc28154485f291e3b312075a69e1293cc06d9a531b2 not found: ID does not exist" containerID="a155454f16103bb855521cc28154485f291e3b312075a69e1293cc06d9a531b2" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.733273 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a155454f16103bb855521cc28154485f291e3b312075a69e1293cc06d9a531b2"} err="failed to get container status \"a155454f16103bb855521cc28154485f291e3b312075a69e1293cc06d9a531b2\": rpc error: code = NotFound desc = could not find container \"a155454f16103bb855521cc28154485f291e3b312075a69e1293cc06d9a531b2\": container with ID starting with a155454f16103bb855521cc28154485f291e3b312075a69e1293cc06d9a531b2 not found: ID does not exist" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.733302 4813 scope.go:117] "RemoveContainer" containerID="ba2ddb921d1ab10034065f8e15460f69827ebffdd615b63b2c0b2baec35b06f1" Feb 19 21:05:19 crc kubenswrapper[4813]: E0219 21:05:19.733819 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba2ddb921d1ab10034065f8e15460f69827ebffdd615b63b2c0b2baec35b06f1\": container with ID starting with ba2ddb921d1ab10034065f8e15460f69827ebffdd615b63b2c0b2baec35b06f1 not found: ID does not exist" containerID="ba2ddb921d1ab10034065f8e15460f69827ebffdd615b63b2c0b2baec35b06f1" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.733870 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba2ddb921d1ab10034065f8e15460f69827ebffdd615b63b2c0b2baec35b06f1"} err="failed to get container status \"ba2ddb921d1ab10034065f8e15460f69827ebffdd615b63b2c0b2baec35b06f1\": rpc error: code = NotFound desc = could not find container \"ba2ddb921d1ab10034065f8e15460f69827ebffdd615b63b2c0b2baec35b06f1\": container with ID starting with ba2ddb921d1ab10034065f8e15460f69827ebffdd615b63b2c0b2baec35b06f1 not found: ID does not exist" Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.937642 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xwtng"] Feb 19 21:05:19 crc kubenswrapper[4813]: I0219 21:05:19.954631 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xwtng"] Feb 19 21:05:21 crc kubenswrapper[4813]: I0219 21:05:21.490750 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9f913c2-dc79-4b55-b5ee-c271bdd31960" path="/var/lib/kubelet/pods/b9f913c2-dc79-4b55-b5ee-c271bdd31960/volumes" Feb 19 21:05:25 crc kubenswrapper[4813]: I0219 21:05:25.472062 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:05:25 crc kubenswrapper[4813]: E0219 21:05:25.473131 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:05:38 crc kubenswrapper[4813]: I0219 21:05:38.472915 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:05:38 crc 
kubenswrapper[4813]: E0219 21:05:38.474036 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:05:51 crc kubenswrapper[4813]: I0219 21:05:51.484546 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:05:51 crc kubenswrapper[4813]: E0219 21:05:51.485643 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:06:00 crc kubenswrapper[4813]: I0219 21:06:00.530031 4813 scope.go:117] "RemoveContainer" containerID="6c65bf7e6c35be86b165f76c99b0d460a5281d2e69afbfa6009a99488db55bbc" Feb 19 21:06:02 crc kubenswrapper[4813]: I0219 21:06:02.472651 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:06:02 crc kubenswrapper[4813]: E0219 21:06:02.473497 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 
19 21:06:13 crc kubenswrapper[4813]: I0219 21:06:13.870149 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v95vq"] Feb 19 21:06:13 crc kubenswrapper[4813]: E0219 21:06:13.875173 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9f913c2-dc79-4b55-b5ee-c271bdd31960" containerName="extract-content" Feb 19 21:06:13 crc kubenswrapper[4813]: I0219 21:06:13.875445 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9f913c2-dc79-4b55-b5ee-c271bdd31960" containerName="extract-content" Feb 19 21:06:13 crc kubenswrapper[4813]: E0219 21:06:13.875640 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9f913c2-dc79-4b55-b5ee-c271bdd31960" containerName="registry-server" Feb 19 21:06:13 crc kubenswrapper[4813]: I0219 21:06:13.875798 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9f913c2-dc79-4b55-b5ee-c271bdd31960" containerName="registry-server" Feb 19 21:06:13 crc kubenswrapper[4813]: E0219 21:06:13.876030 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9f913c2-dc79-4b55-b5ee-c271bdd31960" containerName="extract-utilities" Feb 19 21:06:13 crc kubenswrapper[4813]: I0219 21:06:13.876189 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9f913c2-dc79-4b55-b5ee-c271bdd31960" containerName="extract-utilities" Feb 19 21:06:13 crc kubenswrapper[4813]: I0219 21:06:13.876769 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9f913c2-dc79-4b55-b5ee-c271bdd31960" containerName="registry-server" Feb 19 21:06:13 crc kubenswrapper[4813]: I0219 21:06:13.880195 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:13 crc kubenswrapper[4813]: I0219 21:06:13.899695 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v95vq"] Feb 19 21:06:14 crc kubenswrapper[4813]: I0219 21:06:14.053751 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-catalog-content\") pod \"redhat-operators-v95vq\" (UID: \"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa\") " pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:14 crc kubenswrapper[4813]: I0219 21:06:14.053904 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xxtl\" (UniqueName: \"kubernetes.io/projected/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-kube-api-access-2xxtl\") pod \"redhat-operators-v95vq\" (UID: \"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa\") " pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:14 crc kubenswrapper[4813]: I0219 21:06:14.053975 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-utilities\") pod \"redhat-operators-v95vq\" (UID: \"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa\") " pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:14 crc kubenswrapper[4813]: I0219 21:06:14.155980 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xxtl\" (UniqueName: \"kubernetes.io/projected/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-kube-api-access-2xxtl\") pod \"redhat-operators-v95vq\" (UID: \"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa\") " pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:14 crc kubenswrapper[4813]: I0219 21:06:14.156092 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-utilities\") pod \"redhat-operators-v95vq\" (UID: \"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa\") " pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:14 crc kubenswrapper[4813]: I0219 21:06:14.156222 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-catalog-content\") pod \"redhat-operators-v95vq\" (UID: \"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa\") " pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:14 crc kubenswrapper[4813]: I0219 21:06:14.156951 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-catalog-content\") pod \"redhat-operators-v95vq\" (UID: \"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa\") " pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:14 crc kubenswrapper[4813]: I0219 21:06:14.156983 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-utilities\") pod \"redhat-operators-v95vq\" (UID: \"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa\") " pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:14 crc kubenswrapper[4813]: I0219 21:06:14.188906 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xxtl\" (UniqueName: \"kubernetes.io/projected/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-kube-api-access-2xxtl\") pod \"redhat-operators-v95vq\" (UID: \"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa\") " pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:14 crc kubenswrapper[4813]: I0219 21:06:14.207522 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:14 crc kubenswrapper[4813]: I0219 21:06:14.471588 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:06:14 crc kubenswrapper[4813]: E0219 21:06:14.472167 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:06:14 crc kubenswrapper[4813]: I0219 21:06:14.717260 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v95vq"] Feb 19 21:06:15 crc kubenswrapper[4813]: W0219 21:06:15.174246 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfc63f1d_c2ca_4fa9_9fd7_1218152e70fa.slice/crio-768693c969233312f4da3bf4f7e10d9df5a6fa7bf1fdd4f3277bf545ad2de311 WatchSource:0}: Error finding container 768693c969233312f4da3bf4f7e10d9df5a6fa7bf1fdd4f3277bf545ad2de311: Status 404 returned error can't find the container with id 768693c969233312f4da3bf4f7e10d9df5a6fa7bf1fdd4f3277bf545ad2de311 Feb 19 21:06:15 crc kubenswrapper[4813]: I0219 21:06:15.945570 4813 generic.go:334] "Generic (PLEG): container finished" podID="cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" containerID="f57a372b997f3c8175ae0fe0a423c1ae774eaadde7ef03245e8a5fa51872bc64" exitCode=0 Feb 19 21:06:15 crc kubenswrapper[4813]: I0219 21:06:15.945694 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v95vq" 
event={"ID":"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa","Type":"ContainerDied","Data":"f57a372b997f3c8175ae0fe0a423c1ae774eaadde7ef03245e8a5fa51872bc64"} Feb 19 21:06:15 crc kubenswrapper[4813]: I0219 21:06:15.946223 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v95vq" event={"ID":"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa","Type":"ContainerStarted","Data":"768693c969233312f4da3bf4f7e10d9df5a6fa7bf1fdd4f3277bf545ad2de311"} Feb 19 21:06:16 crc kubenswrapper[4813]: I0219 21:06:16.958846 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v95vq" event={"ID":"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa","Type":"ContainerStarted","Data":"56e1a71e2be86fb030977391718df56bca3f2ab75c05e4c039870d2cce9089b6"} Feb 19 21:06:18 crc kubenswrapper[4813]: I0219 21:06:18.987251 4813 generic.go:334] "Generic (PLEG): container finished" podID="cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" containerID="56e1a71e2be86fb030977391718df56bca3f2ab75c05e4c039870d2cce9089b6" exitCode=0 Feb 19 21:06:18 crc kubenswrapper[4813]: I0219 21:06:18.987880 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v95vq" event={"ID":"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa","Type":"ContainerDied","Data":"56e1a71e2be86fb030977391718df56bca3f2ab75c05e4c039870d2cce9089b6"} Feb 19 21:06:20 crc kubenswrapper[4813]: I0219 21:06:20.001875 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v95vq" event={"ID":"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa","Type":"ContainerStarted","Data":"fd717296ddc111342b5bca219fb37f9813e1aceda80553cf87452bd3e4722ae9"} Feb 19 21:06:20 crc kubenswrapper[4813]: I0219 21:06:20.019374 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v95vq" podStartSLOduration=3.535031497 podStartE2EDuration="7.019356387s" podCreationTimestamp="2026-02-19 21:06:13 +0000 UTC" 
firstStartedPulling="2026-02-19 21:06:15.947701132 +0000 UTC m=+9395.173141683" lastFinishedPulling="2026-02-19 21:06:19.432026032 +0000 UTC m=+9398.657466573" observedRunningTime="2026-02-19 21:06:20.018636476 +0000 UTC m=+9399.244077037" watchObservedRunningTime="2026-02-19 21:06:20.019356387 +0000 UTC m=+9399.244796928" Feb 19 21:06:24 crc kubenswrapper[4813]: I0219 21:06:24.208205 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:24 crc kubenswrapper[4813]: I0219 21:06:24.208726 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:25 crc kubenswrapper[4813]: I0219 21:06:25.252833 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v95vq" podUID="cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" containerName="registry-server" probeResult="failure" output=< Feb 19 21:06:25 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Feb 19 21:06:25 crc kubenswrapper[4813]: > Feb 19 21:06:25 crc kubenswrapper[4813]: I0219 21:06:25.471545 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:06:25 crc kubenswrapper[4813]: E0219 21:06:25.472164 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:06:28 crc kubenswrapper[4813]: I0219 21:06:28.691723 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d7mr8"] Feb 19 21:06:28 crc kubenswrapper[4813]: 
I0219 21:06:28.697467 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:28 crc kubenswrapper[4813]: I0219 21:06:28.714149 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d7mr8"] Feb 19 21:06:28 crc kubenswrapper[4813]: I0219 21:06:28.744176 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30365140-7d80-4179-aa44-a9f3623d4564-utilities\") pod \"community-operators-d7mr8\" (UID: \"30365140-7d80-4179-aa44-a9f3623d4564\") " pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:28 crc kubenswrapper[4813]: I0219 21:06:28.744264 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8gv6\" (UniqueName: \"kubernetes.io/projected/30365140-7d80-4179-aa44-a9f3623d4564-kube-api-access-w8gv6\") pod \"community-operators-d7mr8\" (UID: \"30365140-7d80-4179-aa44-a9f3623d4564\") " pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:28 crc kubenswrapper[4813]: I0219 21:06:28.744794 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30365140-7d80-4179-aa44-a9f3623d4564-catalog-content\") pod \"community-operators-d7mr8\" (UID: \"30365140-7d80-4179-aa44-a9f3623d4564\") " pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:28 crc kubenswrapper[4813]: I0219 21:06:28.846210 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30365140-7d80-4179-aa44-a9f3623d4564-utilities\") pod \"community-operators-d7mr8\" (UID: \"30365140-7d80-4179-aa44-a9f3623d4564\") " pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:28 crc kubenswrapper[4813]: 
I0219 21:06:28.846325 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8gv6\" (UniqueName: \"kubernetes.io/projected/30365140-7d80-4179-aa44-a9f3623d4564-kube-api-access-w8gv6\") pod \"community-operators-d7mr8\" (UID: \"30365140-7d80-4179-aa44-a9f3623d4564\") " pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:28 crc kubenswrapper[4813]: I0219 21:06:28.846511 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30365140-7d80-4179-aa44-a9f3623d4564-catalog-content\") pod \"community-operators-d7mr8\" (UID: \"30365140-7d80-4179-aa44-a9f3623d4564\") " pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:28 crc kubenswrapper[4813]: I0219 21:06:28.846857 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30365140-7d80-4179-aa44-a9f3623d4564-utilities\") pod \"community-operators-d7mr8\" (UID: \"30365140-7d80-4179-aa44-a9f3623d4564\") " pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:28 crc kubenswrapper[4813]: I0219 21:06:28.847152 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30365140-7d80-4179-aa44-a9f3623d4564-catalog-content\") pod \"community-operators-d7mr8\" (UID: \"30365140-7d80-4179-aa44-a9f3623d4564\") " pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:28 crc kubenswrapper[4813]: I0219 21:06:28.867029 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8gv6\" (UniqueName: \"kubernetes.io/projected/30365140-7d80-4179-aa44-a9f3623d4564-kube-api-access-w8gv6\") pod \"community-operators-d7mr8\" (UID: \"30365140-7d80-4179-aa44-a9f3623d4564\") " pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:29 crc kubenswrapper[4813]: I0219 21:06:29.033488 4813 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:29 crc kubenswrapper[4813]: I0219 21:06:29.761341 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d7mr8"] Feb 19 21:06:30 crc kubenswrapper[4813]: I0219 21:06:30.123903 4813 generic.go:334] "Generic (PLEG): container finished" podID="30365140-7d80-4179-aa44-a9f3623d4564" containerID="e1a225e003dfc72639fb306a4af76ada47f8e9f54ec9b5ecac023b860586c971" exitCode=0 Feb 19 21:06:30 crc kubenswrapper[4813]: I0219 21:06:30.124374 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7mr8" event={"ID":"30365140-7d80-4179-aa44-a9f3623d4564","Type":"ContainerDied","Data":"e1a225e003dfc72639fb306a4af76ada47f8e9f54ec9b5ecac023b860586c971"} Feb 19 21:06:30 crc kubenswrapper[4813]: I0219 21:06:30.124426 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7mr8" event={"ID":"30365140-7d80-4179-aa44-a9f3623d4564","Type":"ContainerStarted","Data":"91fe62ce34e8730b354f5e9ba896119b3809adafb476ecd8fbc185137230899d"} Feb 19 21:06:31 crc kubenswrapper[4813]: I0219 21:06:31.136862 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7mr8" event={"ID":"30365140-7d80-4179-aa44-a9f3623d4564","Type":"ContainerStarted","Data":"d065be213850be42ddb4aaca1e5aa47f313f8b3941013fb21a02932004c6c8a9"} Feb 19 21:06:34 crc kubenswrapper[4813]: I0219 21:06:34.518733 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:34 crc kubenswrapper[4813]: I0219 21:06:34.597171 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:35 crc kubenswrapper[4813]: I0219 21:06:35.276586 4813 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v95vq"] Feb 19 21:06:35 crc kubenswrapper[4813]: I0219 21:06:35.463913 4813 generic.go:334] "Generic (PLEG): container finished" podID="30365140-7d80-4179-aa44-a9f3623d4564" containerID="d065be213850be42ddb4aaca1e5aa47f313f8b3941013fb21a02932004c6c8a9" exitCode=0 Feb 19 21:06:35 crc kubenswrapper[4813]: I0219 21:06:35.463991 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7mr8" event={"ID":"30365140-7d80-4179-aa44-a9f3623d4564","Type":"ContainerDied","Data":"d065be213850be42ddb4aaca1e5aa47f313f8b3941013fb21a02932004c6c8a9"} Feb 19 21:06:36 crc kubenswrapper[4813]: I0219 21:06:36.477994 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v95vq" podUID="cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" containerName="registry-server" containerID="cri-o://fd717296ddc111342b5bca219fb37f9813e1aceda80553cf87452bd3e4722ae9" gracePeriod=2 Feb 19 21:06:36 crc kubenswrapper[4813]: I0219 21:06:36.478071 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7mr8" event={"ID":"30365140-7d80-4179-aa44-a9f3623d4564","Type":"ContainerStarted","Data":"3808ded04b87e9438c73de71dae59e7efdc276f3d0ae23c517b1a6deb8137102"} Feb 19 21:06:36 crc kubenswrapper[4813]: I0219 21:06:36.523660 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d7mr8" podStartSLOduration=2.6758797039999997 podStartE2EDuration="8.52363354s" podCreationTimestamp="2026-02-19 21:06:28 +0000 UTC" firstStartedPulling="2026-02-19 21:06:30.126481677 +0000 UTC m=+9409.351922218" lastFinishedPulling="2026-02-19 21:06:35.974235513 +0000 UTC m=+9415.199676054" observedRunningTime="2026-02-19 21:06:36.519691169 +0000 UTC m=+9415.745131720" watchObservedRunningTime="2026-02-19 21:06:36.52363354 +0000 UTC m=+9415.749074081" Feb 19 21:06:37 
crc kubenswrapper[4813]: I0219 21:06:37.023380 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.135985 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xxtl\" (UniqueName: \"kubernetes.io/projected/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-kube-api-access-2xxtl\") pod \"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa\" (UID: \"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa\") " Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.137782 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-catalog-content\") pod \"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa\" (UID: \"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa\") " Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.139122 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-utilities\") pod \"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa\" (UID: \"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa\") " Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.140010 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-utilities" (OuterVolumeSpecName: "utilities") pod "cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" (UID: "cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.140311 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.154524 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-kube-api-access-2xxtl" (OuterVolumeSpecName: "kube-api-access-2xxtl") pod "cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" (UID: "cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa"). InnerVolumeSpecName "kube-api-access-2xxtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.242506 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xxtl\" (UniqueName: \"kubernetes.io/projected/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-kube-api-access-2xxtl\") on node \"crc\" DevicePath \"\"" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.242707 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" (UID: "cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.344709 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.472771 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:06:37 crc kubenswrapper[4813]: E0219 21:06:37.473325 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.495480 4813 generic.go:334] "Generic (PLEG): container finished" podID="cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" containerID="fd717296ddc111342b5bca219fb37f9813e1aceda80553cf87452bd3e4722ae9" exitCode=0 Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.495544 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v95vq" event={"ID":"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa","Type":"ContainerDied","Data":"fd717296ddc111342b5bca219fb37f9813e1aceda80553cf87452bd3e4722ae9"} Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.495579 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v95vq" event={"ID":"cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa","Type":"ContainerDied","Data":"768693c969233312f4da3bf4f7e10d9df5a6fa7bf1fdd4f3277bf545ad2de311"} Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.495583 4813 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v95vq" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.495601 4813 scope.go:117] "RemoveContainer" containerID="fd717296ddc111342b5bca219fb37f9813e1aceda80553cf87452bd3e4722ae9" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.540926 4813 scope.go:117] "RemoveContainer" containerID="56e1a71e2be86fb030977391718df56bca3f2ab75c05e4c039870d2cce9089b6" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.547734 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v95vq"] Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.559177 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v95vq"] Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.576289 4813 scope.go:117] "RemoveContainer" containerID="f57a372b997f3c8175ae0fe0a423c1ae774eaadde7ef03245e8a5fa51872bc64" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.619350 4813 scope.go:117] "RemoveContainer" containerID="fd717296ddc111342b5bca219fb37f9813e1aceda80553cf87452bd3e4722ae9" Feb 19 21:06:37 crc kubenswrapper[4813]: E0219 21:06:37.619716 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd717296ddc111342b5bca219fb37f9813e1aceda80553cf87452bd3e4722ae9\": container with ID starting with fd717296ddc111342b5bca219fb37f9813e1aceda80553cf87452bd3e4722ae9 not found: ID does not exist" containerID="fd717296ddc111342b5bca219fb37f9813e1aceda80553cf87452bd3e4722ae9" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.619743 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd717296ddc111342b5bca219fb37f9813e1aceda80553cf87452bd3e4722ae9"} err="failed to get container status \"fd717296ddc111342b5bca219fb37f9813e1aceda80553cf87452bd3e4722ae9\": rpc error: code = NotFound desc = could not find 
container \"fd717296ddc111342b5bca219fb37f9813e1aceda80553cf87452bd3e4722ae9\": container with ID starting with fd717296ddc111342b5bca219fb37f9813e1aceda80553cf87452bd3e4722ae9 not found: ID does not exist" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.619760 4813 scope.go:117] "RemoveContainer" containerID="56e1a71e2be86fb030977391718df56bca3f2ab75c05e4c039870d2cce9089b6" Feb 19 21:06:37 crc kubenswrapper[4813]: E0219 21:06:37.620124 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56e1a71e2be86fb030977391718df56bca3f2ab75c05e4c039870d2cce9089b6\": container with ID starting with 56e1a71e2be86fb030977391718df56bca3f2ab75c05e4c039870d2cce9089b6 not found: ID does not exist" containerID="56e1a71e2be86fb030977391718df56bca3f2ab75c05e4c039870d2cce9089b6" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.620177 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e1a71e2be86fb030977391718df56bca3f2ab75c05e4c039870d2cce9089b6"} err="failed to get container status \"56e1a71e2be86fb030977391718df56bca3f2ab75c05e4c039870d2cce9089b6\": rpc error: code = NotFound desc = could not find container \"56e1a71e2be86fb030977391718df56bca3f2ab75c05e4c039870d2cce9089b6\": container with ID starting with 56e1a71e2be86fb030977391718df56bca3f2ab75c05e4c039870d2cce9089b6 not found: ID does not exist" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.620208 4813 scope.go:117] "RemoveContainer" containerID="f57a372b997f3c8175ae0fe0a423c1ae774eaadde7ef03245e8a5fa51872bc64" Feb 19 21:06:37 crc kubenswrapper[4813]: E0219 21:06:37.620482 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f57a372b997f3c8175ae0fe0a423c1ae774eaadde7ef03245e8a5fa51872bc64\": container with ID starting with f57a372b997f3c8175ae0fe0a423c1ae774eaadde7ef03245e8a5fa51872bc64 not found: ID does 
not exist" containerID="f57a372b997f3c8175ae0fe0a423c1ae774eaadde7ef03245e8a5fa51872bc64" Feb 19 21:06:37 crc kubenswrapper[4813]: I0219 21:06:37.620502 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f57a372b997f3c8175ae0fe0a423c1ae774eaadde7ef03245e8a5fa51872bc64"} err="failed to get container status \"f57a372b997f3c8175ae0fe0a423c1ae774eaadde7ef03245e8a5fa51872bc64\": rpc error: code = NotFound desc = could not find container \"f57a372b997f3c8175ae0fe0a423c1ae774eaadde7ef03245e8a5fa51872bc64\": container with ID starting with f57a372b997f3c8175ae0fe0a423c1ae774eaadde7ef03245e8a5fa51872bc64 not found: ID does not exist" Feb 19 21:06:39 crc kubenswrapper[4813]: I0219 21:06:39.034618 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:39 crc kubenswrapper[4813]: I0219 21:06:39.035618 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:39 crc kubenswrapper[4813]: I0219 21:06:39.120118 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:39 crc kubenswrapper[4813]: I0219 21:06:39.495285 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" path="/var/lib/kubelet/pods/cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa/volumes" Feb 19 21:06:48 crc kubenswrapper[4813]: I0219 21:06:48.472436 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:06:48 crc kubenswrapper[4813]: E0219 21:06:48.475079 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:06:49 crc kubenswrapper[4813]: I0219 21:06:49.128875 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:49 crc kubenswrapper[4813]: I0219 21:06:49.215329 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d7mr8"] Feb 19 21:06:49 crc kubenswrapper[4813]: I0219 21:06:49.656932 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d7mr8" podUID="30365140-7d80-4179-aa44-a9f3623d4564" containerName="registry-server" containerID="cri-o://3808ded04b87e9438c73de71dae59e7efdc276f3d0ae23c517b1a6deb8137102" gracePeriod=2 Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.217249 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.374724 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30365140-7d80-4179-aa44-a9f3623d4564-catalog-content\") pod \"30365140-7d80-4179-aa44-a9f3623d4564\" (UID: \"30365140-7d80-4179-aa44-a9f3623d4564\") " Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.374789 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8gv6\" (UniqueName: \"kubernetes.io/projected/30365140-7d80-4179-aa44-a9f3623d4564-kube-api-access-w8gv6\") pod \"30365140-7d80-4179-aa44-a9f3623d4564\" (UID: \"30365140-7d80-4179-aa44-a9f3623d4564\") " Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.375119 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30365140-7d80-4179-aa44-a9f3623d4564-utilities\") pod \"30365140-7d80-4179-aa44-a9f3623d4564\" (UID: \"30365140-7d80-4179-aa44-a9f3623d4564\") " Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.376364 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30365140-7d80-4179-aa44-a9f3623d4564-utilities" (OuterVolumeSpecName: "utilities") pod "30365140-7d80-4179-aa44-a9f3623d4564" (UID: "30365140-7d80-4179-aa44-a9f3623d4564"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.384357 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30365140-7d80-4179-aa44-a9f3623d4564-kube-api-access-w8gv6" (OuterVolumeSpecName: "kube-api-access-w8gv6") pod "30365140-7d80-4179-aa44-a9f3623d4564" (UID: "30365140-7d80-4179-aa44-a9f3623d4564"). InnerVolumeSpecName "kube-api-access-w8gv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.434807 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30365140-7d80-4179-aa44-a9f3623d4564-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30365140-7d80-4179-aa44-a9f3623d4564" (UID: "30365140-7d80-4179-aa44-a9f3623d4564"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.478499 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30365140-7d80-4179-aa44-a9f3623d4564-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.478548 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30365140-7d80-4179-aa44-a9f3623d4564-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.478569 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8gv6\" (UniqueName: \"kubernetes.io/projected/30365140-7d80-4179-aa44-a9f3623d4564-kube-api-access-w8gv6\") on node \"crc\" DevicePath \"\"" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.670048 4813 generic.go:334] "Generic (PLEG): container finished" podID="30365140-7d80-4179-aa44-a9f3623d4564" containerID="3808ded04b87e9438c73de71dae59e7efdc276f3d0ae23c517b1a6deb8137102" exitCode=0 Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.670169 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d7mr8" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.670329 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7mr8" event={"ID":"30365140-7d80-4179-aa44-a9f3623d4564","Type":"ContainerDied","Data":"3808ded04b87e9438c73de71dae59e7efdc276f3d0ae23c517b1a6deb8137102"} Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.670383 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d7mr8" event={"ID":"30365140-7d80-4179-aa44-a9f3623d4564","Type":"ContainerDied","Data":"91fe62ce34e8730b354f5e9ba896119b3809adafb476ecd8fbc185137230899d"} Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.670418 4813 scope.go:117] "RemoveContainer" containerID="3808ded04b87e9438c73de71dae59e7efdc276f3d0ae23c517b1a6deb8137102" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.691382 4813 scope.go:117] "RemoveContainer" containerID="d065be213850be42ddb4aaca1e5aa47f313f8b3941013fb21a02932004c6c8a9" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.720661 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d7mr8"] Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.729591 4813 scope.go:117] "RemoveContainer" containerID="e1a225e003dfc72639fb306a4af76ada47f8e9f54ec9b5ecac023b860586c971" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.758563 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d7mr8"] Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.794190 4813 scope.go:117] "RemoveContainer" containerID="3808ded04b87e9438c73de71dae59e7efdc276f3d0ae23c517b1a6deb8137102" Feb 19 21:06:50 crc kubenswrapper[4813]: E0219 21:06:50.795119 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3808ded04b87e9438c73de71dae59e7efdc276f3d0ae23c517b1a6deb8137102\": container with ID starting with 3808ded04b87e9438c73de71dae59e7efdc276f3d0ae23c517b1a6deb8137102 not found: ID does not exist" containerID="3808ded04b87e9438c73de71dae59e7efdc276f3d0ae23c517b1a6deb8137102" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.795194 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3808ded04b87e9438c73de71dae59e7efdc276f3d0ae23c517b1a6deb8137102"} err="failed to get container status \"3808ded04b87e9438c73de71dae59e7efdc276f3d0ae23c517b1a6deb8137102\": rpc error: code = NotFound desc = could not find container \"3808ded04b87e9438c73de71dae59e7efdc276f3d0ae23c517b1a6deb8137102\": container with ID starting with 3808ded04b87e9438c73de71dae59e7efdc276f3d0ae23c517b1a6deb8137102 not found: ID does not exist" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.795225 4813 scope.go:117] "RemoveContainer" containerID="d065be213850be42ddb4aaca1e5aa47f313f8b3941013fb21a02932004c6c8a9" Feb 19 21:06:50 crc kubenswrapper[4813]: E0219 21:06:50.795553 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d065be213850be42ddb4aaca1e5aa47f313f8b3941013fb21a02932004c6c8a9\": container with ID starting with d065be213850be42ddb4aaca1e5aa47f313f8b3941013fb21a02932004c6c8a9 not found: ID does not exist" containerID="d065be213850be42ddb4aaca1e5aa47f313f8b3941013fb21a02932004c6c8a9" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.795582 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d065be213850be42ddb4aaca1e5aa47f313f8b3941013fb21a02932004c6c8a9"} err="failed to get container status \"d065be213850be42ddb4aaca1e5aa47f313f8b3941013fb21a02932004c6c8a9\": rpc error: code = NotFound desc = could not find container \"d065be213850be42ddb4aaca1e5aa47f313f8b3941013fb21a02932004c6c8a9\": container with ID 
starting with d065be213850be42ddb4aaca1e5aa47f313f8b3941013fb21a02932004c6c8a9 not found: ID does not exist" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.795599 4813 scope.go:117] "RemoveContainer" containerID="e1a225e003dfc72639fb306a4af76ada47f8e9f54ec9b5ecac023b860586c971" Feb 19 21:06:50 crc kubenswrapper[4813]: E0219 21:06:50.795871 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1a225e003dfc72639fb306a4af76ada47f8e9f54ec9b5ecac023b860586c971\": container with ID starting with e1a225e003dfc72639fb306a4af76ada47f8e9f54ec9b5ecac023b860586c971 not found: ID does not exist" containerID="e1a225e003dfc72639fb306a4af76ada47f8e9f54ec9b5ecac023b860586c971" Feb 19 21:06:50 crc kubenswrapper[4813]: I0219 21:06:50.795905 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1a225e003dfc72639fb306a4af76ada47f8e9f54ec9b5ecac023b860586c971"} err="failed to get container status \"e1a225e003dfc72639fb306a4af76ada47f8e9f54ec9b5ecac023b860586c971\": rpc error: code = NotFound desc = could not find container \"e1a225e003dfc72639fb306a4af76ada47f8e9f54ec9b5ecac023b860586c971\": container with ID starting with e1a225e003dfc72639fb306a4af76ada47f8e9f54ec9b5ecac023b860586c971 not found: ID does not exist" Feb 19 21:06:51 crc kubenswrapper[4813]: I0219 21:06:51.499785 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30365140-7d80-4179-aa44-a9f3623d4564" path="/var/lib/kubelet/pods/30365140-7d80-4179-aa44-a9f3623d4564/volumes" Feb 19 21:07:03 crc kubenswrapper[4813]: I0219 21:07:03.472271 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:07:03 crc kubenswrapper[4813]: E0219 21:07:03.473846 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:07:17 crc kubenswrapper[4813]: I0219 21:07:17.472086 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:07:17 crc kubenswrapper[4813]: E0219 21:07:17.472807 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:07:28 crc kubenswrapper[4813]: I0219 21:07:28.472412 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:07:28 crc kubenswrapper[4813]: E0219 21:07:28.473458 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:07:40 crc kubenswrapper[4813]: I0219 21:07:40.472066 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:07:41 crc kubenswrapper[4813]: I0219 21:07:41.326910 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" 
event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"0cea0fffc64930c67d432880b28f3ce039eab2481117234ad32338c12c4c35a5"} Feb 19 21:08:10 crc kubenswrapper[4813]: I0219 21:08:10.981479 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_dcaba31c-8ef3-42e4-9b85-090fa358bc1b/init-config-reloader/0.log" Feb 19 21:08:11 crc kubenswrapper[4813]: I0219 21:08:11.146061 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_dcaba31c-8ef3-42e4-9b85-090fa358bc1b/init-config-reloader/0.log" Feb 19 21:08:11 crc kubenswrapper[4813]: I0219 21:08:11.197395 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_dcaba31c-8ef3-42e4-9b85-090fa358bc1b/alertmanager/0.log" Feb 19 21:08:11 crc kubenswrapper[4813]: I0219 21:08:11.243106 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_dcaba31c-8ef3-42e4-9b85-090fa358bc1b/config-reloader/0.log" Feb 19 21:08:11 crc kubenswrapper[4813]: I0219 21:08:11.374329 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469/aodh-api/0.log" Feb 19 21:08:11 crc kubenswrapper[4813]: I0219 21:08:11.428337 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469/aodh-evaluator/0.log" Feb 19 21:08:11 crc kubenswrapper[4813]: I0219 21:08:11.536489 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469/aodh-listener/0.log" Feb 19 21:08:11 crc kubenswrapper[4813]: I0219 21:08:11.589252 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-748c6b9b7b-zzvwq_f358878b-f864-4efd-a8a2-163128d1e49a/barbican-api/0.log" Feb 19 21:08:11 crc kubenswrapper[4813]: I0219 21:08:11.597144 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_bb9fbcb5-17e2-48ec-82dd-f9e4d1aba469/aodh-notifier/0.log" Feb 19 21:08:11 crc kubenswrapper[4813]: I0219 21:08:11.695388 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-748c6b9b7b-zzvwq_f358878b-f864-4efd-a8a2-163128d1e49a/barbican-api-log/0.log" Feb 19 21:08:11 crc kubenswrapper[4813]: I0219 21:08:11.799158 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-84947654b6-l9jwd_1cf66052-db02-4388-aaa3-65c44dbb3e74/barbican-keystone-listener-log/0.log" Feb 19 21:08:11 crc kubenswrapper[4813]: I0219 21:08:11.806798 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-84947654b6-l9jwd_1cf66052-db02-4388-aaa3-65c44dbb3e74/barbican-keystone-listener/0.log" Feb 19 21:08:11 crc kubenswrapper[4813]: I0219 21:08:11.977651 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-784cf99fcf-cp46x_098326bb-f104-4ccf-80dc-99e65bebc619/barbican-worker/0.log" Feb 19 21:08:11 crc kubenswrapper[4813]: I0219 21:08:11.982460 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-784cf99fcf-cp46x_098326bb-f104-4ccf-80dc-99e65bebc619/barbican-worker-log/0.log" Feb 19 21:08:12 crc kubenswrapper[4813]: I0219 21:08:12.121441 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-q4q7w_c727ad19-3a38-4508-8eb9-dd36db85c774/bootstrap-openstack-openstack-cell1/0.log" Feb 19 21:08:12 crc kubenswrapper[4813]: I0219 21:08:12.208557 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c2fc0f1f-d05a-4280-93f3-672cbd77af00/ceilometer-central-agent/0.log" Feb 19 21:08:12 crc kubenswrapper[4813]: I0219 21:08:12.299215 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c2fc0f1f-d05a-4280-93f3-672cbd77af00/ceilometer-notification-agent/0.log" 
Feb 19 21:08:12 crc kubenswrapper[4813]: I0219 21:08:12.349831 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c2fc0f1f-d05a-4280-93f3-672cbd77af00/proxy-httpd/0.log" Feb 19 21:08:12 crc kubenswrapper[4813]: I0219 21:08:12.368518 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c2fc0f1f-d05a-4280-93f3-672cbd77af00/sg-core/0.log" Feb 19 21:08:12 crc kubenswrapper[4813]: I0219 21:08:12.514605 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-openstack-openstack-cell1-9vtcc_1c2fa613-123c-4667-b4d3-0140b6d109d1/ceph-client-openstack-openstack-cell1/0.log" Feb 19 21:08:12 crc kubenswrapper[4813]: I0219 21:08:12.634332 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c947e6fd-22fc-4fb5-bc41-33896ad1c161/cinder-api/0.log" Feb 19 21:08:12 crc kubenswrapper[4813]: I0219 21:08:12.674599 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c947e6fd-22fc-4fb5-bc41-33896ad1c161/cinder-api-log/0.log" Feb 19 21:08:12 crc kubenswrapper[4813]: I0219 21:08:12.865023 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_80ce1c4a-b9d4-4da3-a900-55a7dddd6070/probe/0.log" Feb 19 21:08:12 crc kubenswrapper[4813]: I0219 21:08:12.912290 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_80ce1c4a-b9d4-4da3-a900-55a7dddd6070/cinder-backup/0.log" Feb 19 21:08:12 crc kubenswrapper[4813]: I0219 21:08:12.988857 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_436773d7-aaef-4ce6-a7d4-987708819652/cinder-scheduler/0.log" Feb 19 21:08:13 crc kubenswrapper[4813]: I0219 21:08:13.077447 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_436773d7-aaef-4ce6-a7d4-987708819652/probe/0.log" Feb 19 21:08:13 crc kubenswrapper[4813]: I0219 21:08:13.154057 4813 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_4543bfcb-ce40-4284-8e91-5955bd0ada4f/cinder-volume/0.log" Feb 19 21:08:13 crc kubenswrapper[4813]: I0219 21:08:13.209402 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_4543bfcb-ce40-4284-8e91-5955bd0ada4f/probe/0.log" Feb 19 21:08:13 crc kubenswrapper[4813]: I0219 21:08:13.362107 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-xhqs2_83b0f363-8b5f-4f90-809d-ddb74d9a159f/configure-network-openstack-openstack-cell1/0.log" Feb 19 21:08:13 crc kubenswrapper[4813]: I0219 21:08:13.404359 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-4gdmm_9c759860-2fa4-493c-8b71-796165e62357/configure-os-openstack-openstack-cell1/0.log" Feb 19 21:08:13 crc kubenswrapper[4813]: I0219 21:08:13.594449 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7f4c4f5bd7-n9jxx_387e8461-3709-4da1-a6b4-120a4ae6fc34/init/0.log" Feb 19 21:08:13 crc kubenswrapper[4813]: I0219 21:08:13.761495 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7f4c4f5bd7-n9jxx_387e8461-3709-4da1-a6b4-120a4ae6fc34/init/0.log" Feb 19 21:08:13 crc kubenswrapper[4813]: I0219 21:08:13.790582 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7f4c4f5bd7-n9jxx_387e8461-3709-4da1-a6b4-120a4ae6fc34/dnsmasq-dns/0.log" Feb 19 21:08:13 crc kubenswrapper[4813]: I0219 21:08:13.813313 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-7htd2_57d33e77-d322-4e23-b98b-ed97c5f3b6af/download-cache-openstack-openstack-cell1/0.log" Feb 19 21:08:14 crc kubenswrapper[4813]: I0219 21:08:14.027502 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_ce14a753-c8a1-4f0d-8244-21a07ec06064/glance-httpd/0.log" Feb 19 21:08:14 crc kubenswrapper[4813]: I0219 21:08:14.054539 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ce14a753-c8a1-4f0d-8244-21a07ec06064/glance-log/0.log" Feb 19 21:08:14 crc kubenswrapper[4813]: I0219 21:08:14.069341 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0c20d9f5-717c-4df2-9013-5d985ac9c6b8/glance-httpd/0.log" Feb 19 21:08:14 crc kubenswrapper[4813]: I0219 21:08:14.188780 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0c20d9f5-717c-4df2-9013-5d985ac9c6b8/glance-log/0.log" Feb 19 21:08:14 crc kubenswrapper[4813]: I0219 21:08:14.355732 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-6c47fc8b76-csbxx_1bf821c9-c4e7-44d0-a4ba-4651c90947bd/heat-api/0.log" Feb 19 21:08:14 crc kubenswrapper[4813]: I0219 21:08:14.368893 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-54f9fdc8b-rl6qw_ccd48347-0524-47b2-a3b8-c38634e356a6/heat-cfnapi/0.log" Feb 19 21:08:14 crc kubenswrapper[4813]: I0219 21:08:14.526563 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-79b88f9bd9-88k94_b37fece7-aabf-465c-b612-eeb01bf398cb/heat-engine/0.log" Feb 19 21:08:14 crc kubenswrapper[4813]: I0219 21:08:14.622824 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5f4b55c5d9-w5nw5_888e6569-1a57-48ea-af7c-d5ab24ae7f68/horizon-log/0.log" Feb 19 21:08:14 crc kubenswrapper[4813]: I0219 21:08:14.628327 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5f4b55c5d9-w5nw5_888e6569-1a57-48ea-af7c-d5ab24ae7f68/horizon/0.log" Feb 19 21:08:14 crc kubenswrapper[4813]: I0219 21:08:14.788283 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-kn9xd_6e11569d-f74e-4d49-8046-59b3a85b2a20/install-certs-openstack-openstack-cell1/0.log" Feb 19 21:08:14 crc kubenswrapper[4813]: I0219 21:08:14.854768 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-hpz86_e2e16257-f929-4408-b781-d7f6aff2a944/install-os-openstack-openstack-cell1/0.log" Feb 19 21:08:15 crc kubenswrapper[4813]: I0219 21:08:15.020539 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29525521-ldjmt_87b3919b-c8b6-41e5-aac7-96b6eb3359bc/keystone-cron/0.log" Feb 19 21:08:15 crc kubenswrapper[4813]: I0219 21:08:15.055703 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-648fd9799d-6xngg_7752a372-6f12-490d-8b57-9c08bcc8ad6b/keystone-api/0.log" Feb 19 21:08:15 crc kubenswrapper[4813]: I0219 21:08:15.140242 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29525581-2dq5v_f8013965-431f-4d6b-8a69-d5f2025fbe9f/keystone-cron/0.log" Feb 19 21:08:15 crc kubenswrapper[4813]: I0219 21:08:15.264598 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_292cbffe-04a8-45aa-8ab0-f05526828ffd/kube-state-metrics/0.log" Feb 19 21:08:15 crc kubenswrapper[4813]: I0219 21:08:15.326638 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-s69pm_65ed17eb-3e0b-41ca-86fb-4f6ad6fee0df/libvirt-openstack-openstack-cell1/0.log" Feb 19 21:08:15 crc kubenswrapper[4813]: I0219 21:08:15.565658 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_59f27dba-b24f-41f7-ac4e-75e1f3dfdc39/manila-api-log/0.log" Feb 19 21:08:15 crc kubenswrapper[4813]: I0219 21:08:15.566777 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_59f27dba-b24f-41f7-ac4e-75e1f3dfdc39/manila-api/0.log" Feb 19 21:08:15 crc 
kubenswrapper[4813]: I0219 21:08:15.660160 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_37e81308-2a91-47be-a816-37664edd2530/manila-scheduler/0.log" Feb 19 21:08:15 crc kubenswrapper[4813]: I0219 21:08:15.677099 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_37e81308-2a91-47be-a816-37664edd2530/probe/0.log" Feb 19 21:08:15 crc kubenswrapper[4813]: I0219 21:08:15.809051 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_2883ddae-0938-4828-95c8-46934ade5fdd/manila-share/0.log" Feb 19 21:08:15 crc kubenswrapper[4813]: I0219 21:08:15.852738 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_2883ddae-0938-4828-95c8-46934ade5fdd/probe/0.log" Feb 19 21:08:16 crc kubenswrapper[4813]: I0219 21:08:16.156770 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7cb766c549-4vh2s_849e45f9-9e3d-4128-8143-90e3e920c2b5/neutron-httpd/0.log" Feb 19 21:08:16 crc kubenswrapper[4813]: I0219 21:08:16.192524 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7cb766c549-4vh2s_849e45f9-9e3d-4128-8143-90e3e920c2b5/neutron-api/0.log" Feb 19 21:08:16 crc kubenswrapper[4813]: I0219 21:08:16.266065 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-4g9wj_02ac4dac-b810-49a4-894d-c54e84a2d6f1/neutron-dhcp-openstack-openstack-cell1/0.log" Feb 19 21:08:16 crc kubenswrapper[4813]: I0219 21:08:16.377128 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-gwc79_29b06165-79b3-41c2-afbb-c165336c5564/neutron-metadata-openstack-openstack-cell1/0.log" Feb 19 21:08:16 crc kubenswrapper[4813]: I0219 21:08:16.488627 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-2hs7t_95dbe759-32fa-4d27-977a-399f24f2b75e/neutron-sriov-openstack-openstack-cell1/0.log" Feb 19 21:08:16 crc kubenswrapper[4813]: I0219 21:08:16.890056 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_678728e5-b457-4f2e-8bc6-3599a2879262/nova-api-api/0.log" Feb 19 21:08:16 crc kubenswrapper[4813]: I0219 21:08:16.990165 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_678728e5-b457-4f2e-8bc6-3599a2879262/nova-api-log/0.log" Feb 19 21:08:17 crc kubenswrapper[4813]: I0219 21:08:17.095655 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_38051563-e7d4-4d23-a5ee-ef6608fc7004/nova-cell0-conductor-conductor/0.log" Feb 19 21:08:17 crc kubenswrapper[4813]: I0219 21:08:17.673701 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_cdc943f6-cc8a-4b0b-9051-1db1115793a8/nova-cell1-conductor-conductor/0.log" Feb 19 21:08:17 crc kubenswrapper[4813]: I0219 21:08:17.683842 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a81e70d3-41ae-4eee-b6da-bcdba9f7c1b0/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 21:08:17 crc kubenswrapper[4813]: I0219 21:08:17.870127 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellzxh4c_f853aaa8-39d2-4a98-b8d1-9fb7712d89a6/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Feb 19 21:08:17 crc kubenswrapper[4813]: I0219 21:08:17.959688 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-qp4gn_638b5e11-fd3f-4885-b9c0-463a8496bb74/nova-cell1-openstack-openstack-cell1/0.log" Feb 19 21:08:18 crc kubenswrapper[4813]: I0219 21:08:18.195780 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_7bbd5f90-9900-44d3-bb78-efdb6e73d324/nova-metadata-log/0.log" Feb 19 21:08:18 crc kubenswrapper[4813]: I0219 21:08:18.210323 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7bbd5f90-9900-44d3-bb78-efdb6e73d324/nova-metadata-metadata/0.log" Feb 19 21:08:18 crc kubenswrapper[4813]: I0219 21:08:18.356644 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c9287374-ca07-402f-9748-9d55df1d0d3c/nova-scheduler-scheduler/0.log" Feb 19 21:08:18 crc kubenswrapper[4813]: I0219 21:08:18.392367 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-7fb974d8b5-2rxtm_ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29/init/0.log" Feb 19 21:08:18 crc kubenswrapper[4813]: I0219 21:08:18.599419 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-7fb974d8b5-2rxtm_ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29/init/0.log" Feb 19 21:08:19 crc kubenswrapper[4813]: I0219 21:08:19.146980 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-4z8pw_21cec241-a549-4faf-815f-73bc56483ed2/init/0.log" Feb 19 21:08:19 crc kubenswrapper[4813]: I0219 21:08:19.161653 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-7fb974d8b5-2rxtm_ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29/octavia-api-provider-agent/0.log" Feb 19 21:08:19 crc kubenswrapper[4813]: I0219 21:08:19.407199 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-4z8pw_21cec241-a549-4faf-815f-73bc56483ed2/init/0.log" Feb 19 21:08:19 crc kubenswrapper[4813]: I0219 21:08:19.424574 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-7fb974d8b5-2rxtm_ebf35e0c-e53b-4582-9cc8-2a9f54b9aa29/octavia-api/0.log" Feb 19 21:08:19 crc kubenswrapper[4813]: I0219 21:08:19.500494 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_octavia-healthmanager-4z8pw_21cec241-a549-4faf-815f-73bc56483ed2/octavia-healthmanager/0.log" Feb 19 21:08:19 crc kubenswrapper[4813]: I0219 21:08:19.597713 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-zvjnv_30418661-00be-4784-aa85-430abf02af27/init/0.log" Feb 19 21:08:19 crc kubenswrapper[4813]: I0219 21:08:19.835367 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-zvjnv_30418661-00be-4784-aa85-430abf02af27/init/0.log" Feb 19 21:08:19 crc kubenswrapper[4813]: I0219 21:08:19.866647 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-zvjnv_30418661-00be-4784-aa85-430abf02af27/octavia-housekeeping/0.log" Feb 19 21:08:19 crc kubenswrapper[4813]: I0219 21:08:19.939286 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-8d4564f8f-dxhzn_309fde8b-c7d5-47ee-9cf0-157d77af66c5/init/0.log" Feb 19 21:08:20 crc kubenswrapper[4813]: I0219 21:08:20.105574 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-8d4564f8f-dxhzn_309fde8b-c7d5-47ee-9cf0-157d77af66c5/octavia-amphora-httpd/0.log" Feb 19 21:08:20 crc kubenswrapper[4813]: I0219 21:08:20.114678 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-8d4564f8f-dxhzn_309fde8b-c7d5-47ee-9cf0-157d77af66c5/init/0.log" Feb 19 21:08:20 crc kubenswrapper[4813]: I0219 21:08:20.162600 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-j897l_41435d95-b06f-4563-b3d2-d5770f2d8116/init/0.log" Feb 19 21:08:20 crc kubenswrapper[4813]: I0219 21:08:20.346712 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-j897l_41435d95-b06f-4563-b3d2-d5770f2d8116/init/0.log" Feb 19 21:08:20 crc kubenswrapper[4813]: I0219 21:08:20.394645 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_octavia-rsyslog-j897l_41435d95-b06f-4563-b3d2-d5770f2d8116/octavia-rsyslog/0.log" Feb 19 21:08:20 crc kubenswrapper[4813]: I0219 21:08:20.473588 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-xqwkj_66245c5a-bfae-4923-b9d2-a7e0614ac030/init/0.log" Feb 19 21:08:20 crc kubenswrapper[4813]: I0219 21:08:20.654511 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-xqwkj_66245c5a-bfae-4923-b9d2-a7e0614ac030/init/0.log" Feb 19 21:08:20 crc kubenswrapper[4813]: I0219 21:08:20.670875 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4/mysql-bootstrap/0.log" Feb 19 21:08:20 crc kubenswrapper[4813]: I0219 21:08:20.797116 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-xqwkj_66245c5a-bfae-4923-b9d2-a7e0614ac030/octavia-worker/0.log" Feb 19 21:08:20 crc kubenswrapper[4813]: I0219 21:08:20.921981 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4/mysql-bootstrap/0.log" Feb 19 21:08:20 crc kubenswrapper[4813]: I0219 21:08:20.976046 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2cdfd5a1-a1f0-4cd8-a24e-4c2e660a08c4/galera/0.log" Feb 19 21:08:21 crc kubenswrapper[4813]: I0219 21:08:21.077724 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_77107511-f026-4e3d-9598-65484b98aea8/mysql-bootstrap/0.log" Feb 19 21:08:21 crc kubenswrapper[4813]: I0219 21:08:21.203330 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_77107511-f026-4e3d-9598-65484b98aea8/galera/0.log" Feb 19 21:08:21 crc kubenswrapper[4813]: I0219 21:08:21.218677 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_77107511-f026-4e3d-9598-65484b98aea8/mysql-bootstrap/0.log" Feb 19 21:08:21 crc kubenswrapper[4813]: I0219 21:08:21.308340 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_df26ed78-2f8d-41c4-971a-d826679ad985/openstackclient/0.log" Feb 19 21:08:21 crc kubenswrapper[4813]: I0219 21:08:21.486578 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-frg8l_fdd55e37-6da1-4dfb-809c-0074790b1ffc/ovn-controller/0.log" Feb 19 21:08:21 crc kubenswrapper[4813]: I0219 21:08:21.541049 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rlzqr_1d21cee5-7ae5-4af4-98dc-b09a8fe0f77b/openstack-network-exporter/0.log" Feb 19 21:08:21 crc kubenswrapper[4813]: I0219 21:08:21.688654 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6htkx_4624eabd-f427-4e53-a4e3-a49a2c7036b0/ovsdb-server-init/0.log" Feb 19 21:08:21 crc kubenswrapper[4813]: I0219 21:08:21.879513 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6htkx_4624eabd-f427-4e53-a4e3-a49a2c7036b0/ovsdb-server-init/0.log" Feb 19 21:08:21 crc kubenswrapper[4813]: I0219 21:08:21.923081 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6htkx_4624eabd-f427-4e53-a4e3-a49a2c7036b0/ovs-vswitchd/0.log" Feb 19 21:08:21 crc kubenswrapper[4813]: I0219 21:08:21.936590 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6htkx_4624eabd-f427-4e53-a4e3-a49a2c7036b0/ovsdb-server/0.log" Feb 19 21:08:22 crc kubenswrapper[4813]: I0219 21:08:22.148136 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0b4f5f49-12cd-4e82-aaef-d52b3f186786/ovn-northd/0.log" Feb 19 21:08:22 crc kubenswrapper[4813]: I0219 21:08:22.158350 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_0b4f5f49-12cd-4e82-aaef-d52b3f186786/openstack-network-exporter/0.log" Feb 19 21:08:22 crc kubenswrapper[4813]: I0219 21:08:22.264068 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-dcsw4_9c455b52-d215-4da6-aec6-edf7ea78770b/ovn-openstack-openstack-cell1/0.log" Feb 19 21:08:22 crc kubenswrapper[4813]: I0219 21:08:22.362495 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9ea9898d-86e7-4fa2-965f-43b2d6e44046/openstack-network-exporter/0.log" Feb 19 21:08:22 crc kubenswrapper[4813]: I0219 21:08:22.450583 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9ea9898d-86e7-4fa2-965f-43b2d6e44046/ovsdbserver-nb/0.log" Feb 19 21:08:22 crc kubenswrapper[4813]: I0219 21:08:22.563047 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_67be32e9-4e1a-424b-8381-259b527565c6/openstack-network-exporter/0.log" Feb 19 21:08:22 crc kubenswrapper[4813]: I0219 21:08:22.674999 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_67be32e9-4e1a-424b-8381-259b527565c6/ovsdbserver-nb/0.log" Feb 19 21:08:22 crc kubenswrapper[4813]: I0219 21:08:22.765262 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_eef6fa0f-4b41-488c-afd1-1179ea364d95/openstack-network-exporter/0.log" Feb 19 21:08:22 crc kubenswrapper[4813]: I0219 21:08:22.841734 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_eef6fa0f-4b41-488c-afd1-1179ea364d95/ovsdbserver-nb/0.log" Feb 19 21:08:22 crc kubenswrapper[4813]: I0219 21:08:22.962469 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_02fe8e04-6814-4371-b705-24f31ab7cdb5/openstack-network-exporter/0.log" Feb 19 21:08:22 crc kubenswrapper[4813]: I0219 21:08:22.993470 4813 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_02fe8e04-6814-4371-b705-24f31ab7cdb5/ovsdbserver-sb/0.log" Feb 19 21:08:23 crc kubenswrapper[4813]: I0219 21:08:23.176708 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_9e34a64c-2d41-4679-bece-0a45b44d6f81/ovsdbserver-sb/0.log" Feb 19 21:08:23 crc kubenswrapper[4813]: I0219 21:08:23.184428 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_9e34a64c-2d41-4679-bece-0a45b44d6f81/openstack-network-exporter/0.log" Feb 19 21:08:23 crc kubenswrapper[4813]: I0219 21:08:23.320786 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_0051fa4c-43a4-449a-bde5-68883d14e44c/openstack-network-exporter/0.log" Feb 19 21:08:23 crc kubenswrapper[4813]: I0219 21:08:23.370661 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_0051fa4c-43a4-449a-bde5-68883d14e44c/ovsdbserver-sb/0.log" Feb 19 21:08:23 crc kubenswrapper[4813]: I0219 21:08:23.587633 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-577797bdf8-4jfrw_dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4/placement-api/0.log" Feb 19 21:08:23 crc kubenswrapper[4813]: I0219 21:08:23.643751 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-577797bdf8-4jfrw_dfd6d10a-dbfe-43fb-bde4-a35de8e29ca4/placement-log/0.log" Feb 19 21:08:23 crc kubenswrapper[4813]: I0219 21:08:23.695996 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-cdnndl_28d1a1d6-f34c-4c76-ace9-1f5d49cbda29/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Feb 19 21:08:23 crc kubenswrapper[4813]: I0219 21:08:23.837606 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_634fbb89-ec12-40f4-98fb-daddb92d6843/init-config-reloader/0.log" Feb 19 21:08:23 crc 
kubenswrapper[4813]: I0219 21:08:23.962618 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_634fbb89-ec12-40f4-98fb-daddb92d6843/init-config-reloader/0.log" Feb 19 21:08:23 crc kubenswrapper[4813]: I0219 21:08:23.980747 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_634fbb89-ec12-40f4-98fb-daddb92d6843/prometheus/0.log" Feb 19 21:08:23 crc kubenswrapper[4813]: I0219 21:08:23.982495 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_634fbb89-ec12-40f4-98fb-daddb92d6843/config-reloader/0.log" Feb 19 21:08:24 crc kubenswrapper[4813]: I0219 21:08:24.027412 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_634fbb89-ec12-40f4-98fb-daddb92d6843/thanos-sidecar/0.log" Feb 19 21:08:24 crc kubenswrapper[4813]: I0219 21:08:24.188170 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1011827d-556c-4cd2-8189-c6151a791a71/setup-container/0.log" Feb 19 21:08:24 crc kubenswrapper[4813]: I0219 21:08:24.416454 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1011827d-556c-4cd2-8189-c6151a791a71/setup-container/0.log" Feb 19 21:08:24 crc kubenswrapper[4813]: I0219 21:08:24.420570 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1011827d-556c-4cd2-8189-c6151a791a71/rabbitmq/0.log" Feb 19 21:08:24 crc kubenswrapper[4813]: I0219 21:08:24.436125 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bb0aa614-4f8e-403d-bb8b-2c472cce87e3/setup-container/0.log" Feb 19 21:08:24 crc kubenswrapper[4813]: I0219 21:08:24.594573 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bb0aa614-4f8e-403d-bb8b-2c472cce87e3/setup-container/0.log" Feb 19 21:08:24 crc 
kubenswrapper[4813]: I0219 21:08:24.639834 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-mz7hx_7362face-9041-4faa-b26e-ac77330fdf8a/reboot-os-openstack-openstack-cell1/0.log" Feb 19 21:08:24 crc kubenswrapper[4813]: I0219 21:08:24.676682 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bb0aa614-4f8e-403d-bb8b-2c472cce87e3/rabbitmq/0.log" Feb 19 21:08:24 crc kubenswrapper[4813]: I0219 21:08:24.840628 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-2b87h_356733da-da64-42e7-931f-e8e5627d52cf/run-os-openstack-openstack-cell1/0.log" Feb 19 21:08:24 crc kubenswrapper[4813]: I0219 21:08:24.942546 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-nb7bx_e4b63a21-1342-4837-b59f-9f0315daa673/ssh-known-hosts-openstack/0.log" Feb 19 21:08:25 crc kubenswrapper[4813]: I0219 21:08:25.113929 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-9snn7_f75d7c97-6dce-40ee-a448-abb566750887/telemetry-openstack-openstack-cell1/0.log" Feb 19 21:08:25 crc kubenswrapper[4813]: I0219 21:08:25.197860 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-7mnnq_6a8dd50d-5b12-495b-961f-4ebd2ebe3033/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Feb 19 21:08:25 crc kubenswrapper[4813]: I0219 21:08:25.305904 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-5r98z_e3790acf-636a-4642-924c-6cfb71a2aa55/validate-network-openstack-openstack-cell1/0.log" Feb 19 21:08:26 crc kubenswrapper[4813]: I0219 21:08:26.185610 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e6c529a9-395b-4e55-87d1-f93fb0c98cd6/memcached/0.log" Feb 19 21:08:50 crc kubenswrapper[4813]: 
I0219 21:08:50.040753 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb_a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f/util/0.log" Feb 19 21:08:50 crc kubenswrapper[4813]: I0219 21:08:50.212931 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb_a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f/util/0.log" Feb 19 21:08:50 crc kubenswrapper[4813]: I0219 21:08:50.218587 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb_a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f/pull/0.log" Feb 19 21:08:50 crc kubenswrapper[4813]: I0219 21:08:50.269551 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb_a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f/pull/0.log" Feb 19 21:08:50 crc kubenswrapper[4813]: I0219 21:08:50.457370 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb_a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f/pull/0.log" Feb 19 21:08:50 crc kubenswrapper[4813]: I0219 21:08:50.457603 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb_a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f/extract/0.log" Feb 19 21:08:50 crc kubenswrapper[4813]: I0219 21:08:50.459345 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b79676zqdb_a7b8ee18-5d07-4859-9fc5-07ac0fbddc4f/util/0.log" Feb 19 21:08:50 crc kubenswrapper[4813]: I0219 21:08:50.888740 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-h2q2p_2a98a45c-87a7-4ecf-a0e8-2e8743b82960/manager/0.log" Feb 19 21:08:51 crc kubenswrapper[4813]: I0219 21:08:51.384656 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-5t7t2_496ee8bf-6326-4315-9219-ad7d26760349/manager/0.log" Feb 19 21:08:51 crc kubenswrapper[4813]: I0219 21:08:51.653897 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-68vcr_293d6d95-e878-451c-8a2f-3040ca924854/manager/0.log" Feb 19 21:08:51 crc kubenswrapper[4813]: I0219 21:08:51.865742 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-cnb66_bb9127dd-a22e-4d2b-91c9-29021a547c96/manager/0.log" Feb 19 21:08:51 crc kubenswrapper[4813]: I0219 21:08:51.928801 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-tfr46_1067532d-ed6f-4474-8006-a4b4a6b1c89e/manager/0.log" Feb 19 21:08:52 crc kubenswrapper[4813]: I0219 21:08:52.497745 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-8rrl4_ada715c6-71df-428d-97ff-72c3abe923a5/manager/0.log" Feb 19 21:08:52 crc kubenswrapper[4813]: I0219 21:08:52.906738 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-j894w_de341409-e6ef-4b1a-a359-a8f39fa0bc91/manager/0.log" Feb 19 21:08:53 crc kubenswrapper[4813]: I0219 21:08:53.202544 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-4pf4j_da53be97-4a7d-4c8a-bcfa-25e1ea34c7c6/manager/0.log" Feb 19 21:08:53 crc kubenswrapper[4813]: I0219 21:08:53.243815 4813 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-g7x92_fd22b35b-ee39-435e-964c-d545597056b6/manager/0.log" Feb 19 21:08:53 crc kubenswrapper[4813]: I0219 21:08:53.507495 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-c942j_5eac30f8-7b86-4d18-bf6f-e6dbf42a0625/manager/0.log" Feb 19 21:08:53 crc kubenswrapper[4813]: I0219 21:08:53.527743 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-6nmgx_0794f801-e064-425c-ab9d-c00719fb3f86/manager/0.log" Feb 19 21:08:53 crc kubenswrapper[4813]: I0219 21:08:53.962570 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-fn86j_b88eb3b7-8ae3-4e00-8a0f-a5bb7109241d/manager/0.log" Feb 19 21:08:54 crc kubenswrapper[4813]: I0219 21:08:54.275847 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-fb5fcc5b8-r9fmx_c250b55d-1d6b-4f28-8a0c-833736ac564b/manager/0.log" Feb 19 21:08:54 crc kubenswrapper[4813]: I0219 21:08:54.426288 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-scptn_aa2a1869-c465-4840-8879-e882d8b996b5/manager/0.log" Feb 19 21:08:54 crc kubenswrapper[4813]: I0219 21:08:54.623121 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6679bf9b57-mz5c2_ed3475d6-09d9-4dde-8d28-46876ec8862c/operator/0.log" Feb 19 21:08:54 crc kubenswrapper[4813]: I0219 21:08:54.979345 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-dzhpp_650c56a5-66e9-423c-8bed-c1680c1d53a8/registry-server/0.log" Feb 19 21:08:55 crc kubenswrapper[4813]: I0219 21:08:55.100229 4813 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-m8fzh_60dc84e0-9a88-4edc-9aac-ffbc1baa4cc8/manager/0.log" Feb 19 21:08:55 crc kubenswrapper[4813]: I0219 21:08:55.260460 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-rgvk6_317c820b-5be9-49c1-b291-e0d62982fce8/manager/0.log" Feb 19 21:08:55 crc kubenswrapper[4813]: I0219 21:08:55.390476 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-qlcvr_139aa386-db56-4988-8ca6-2ea715cf9630/operator/0.log" Feb 19 21:08:55 crc kubenswrapper[4813]: I0219 21:08:55.833092 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-k6dm9_52c59bd3-a386-46a0-b528-1a658f9a64a1/manager/0.log" Feb 19 21:08:56 crc kubenswrapper[4813]: I0219 21:08:56.109080 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-glgns_78220d1d-a875-4e0d-af2d-8a0668017340/manager/0.log" Feb 19 21:08:56 crc kubenswrapper[4813]: I0219 21:08:56.174926 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-gtm5k_6d9cb4d7-d624-47bf-ad6f-977968f5c074/manager/0.log" Feb 19 21:08:56 crc kubenswrapper[4813]: I0219 21:08:56.273776 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-xq5wn_40a9fa53-e23a-4506-be2f-25a76446db8f/manager/0.log" Feb 19 21:08:57 crc kubenswrapper[4813]: I0219 21:08:57.783519 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-69ff7bc449-qfpxs_ae13095c-48b2-4236-8fef-528d9f0ad712/manager/0.log" Feb 19 21:08:58 crc kubenswrapper[4813]: I0219 21:08:58.067776 4813 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-zqs6j_1ca28ba3-eb1b-427d-a9b9-f0dcf0cc7dd9/manager/0.log" Feb 19 21:09:18 crc kubenswrapper[4813]: I0219 21:09:18.236814 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-74n48_692367ab-53d4-4c1c-aa46-c70e247b848a/control-plane-machine-set-operator/0.log" Feb 19 21:09:18 crc kubenswrapper[4813]: I0219 21:09:18.388145 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dlph2_36d151c0-fa73-4e5f-a6de-580629bef8f1/kube-rbac-proxy/0.log" Feb 19 21:09:18 crc kubenswrapper[4813]: I0219 21:09:18.412812 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dlph2_36d151c0-fa73-4e5f-a6de-580629bef8f1/machine-api-operator/0.log" Feb 19 21:09:33 crc kubenswrapper[4813]: I0219 21:09:33.677485 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-r94m6_5088c764-f0da-4ec1-a3a5-f1b5cee7d1e2/cert-manager-controller/0.log" Feb 19 21:09:33 crc kubenswrapper[4813]: I0219 21:09:33.849433 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-qnfn5_42281278-8e09-4677-b354-85ae0b547f03/cert-manager-cainjector/0.log" Feb 19 21:09:33 crc kubenswrapper[4813]: I0219 21:09:33.904396 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-4c7n8_8beeb97a-17d6-4a65-9818-e7488aa9e29b/cert-manager-webhook/0.log" Feb 19 21:09:47 crc kubenswrapper[4813]: I0219 21:09:47.464752 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-4g2vc_9f9fe016-b91a-4575-8312-e8731d603bfc/nmstate-console-plugin/0.log" Feb 19 21:09:47 crc kubenswrapper[4813]: I0219 
21:09:47.619668 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-fgpk5_22bb5199-3f3d-42a2-8f5a-b97deac59140/nmstate-handler/0.log" Feb 19 21:09:47 crc kubenswrapper[4813]: I0219 21:09:47.663174 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-9g649_92bb56c6-4974-463d-9714-093301482525/kube-rbac-proxy/0.log" Feb 19 21:09:47 crc kubenswrapper[4813]: I0219 21:09:47.738763 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-9g649_92bb56c6-4974-463d-9714-093301482525/nmstate-metrics/0.log" Feb 19 21:09:47 crc kubenswrapper[4813]: I0219 21:09:47.825430 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-t29nq_df018e2f-81ed-4685-aa56-1fae8fee55ef/nmstate-operator/0.log" Feb 19 21:09:47 crc kubenswrapper[4813]: I0219 21:09:47.912016 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-hbm7r_4978407d-39ad-4cc4-b30c-88b2146d55b0/nmstate-webhook/0.log" Feb 19 21:10:00 crc kubenswrapper[4813]: I0219 21:10:00.329557 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:10:00 crc kubenswrapper[4813]: I0219 21:10:00.330757 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:10:02 crc kubenswrapper[4813]: I0219 21:10:02.209742 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-llmlj_e0932101-fd0d-44ec-9ec6-a2801d356faa/prometheus-operator/0.log" Feb 19 21:10:02 crc kubenswrapper[4813]: I0219 21:10:02.328830 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bffd859c-9twz7_07f692a3-7fb0-44fc-aa86-8574ccf76589/prometheus-operator-admission-webhook/0.log" Feb 19 21:10:02 crc kubenswrapper[4813]: I0219 21:10:02.381698 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bffd859c-svjmq_e8435c9c-9eb7-4ad0-bf77-8b6b34952c02/prometheus-operator-admission-webhook/0.log" Feb 19 21:10:02 crc kubenswrapper[4813]: I0219 21:10:02.526311 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-fgkkm_86eb09f2-e8b8-45e2-a97f-5d2797af20fc/operator/0.log" Feb 19 21:10:02 crc kubenswrapper[4813]: I0219 21:10:02.625623 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-pt49v_967b5a02-0a81-4b48-aa1a-983f7a923088/perses-operator/0.log" Feb 19 21:10:17 crc kubenswrapper[4813]: I0219 21:10:17.941027 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-qbxgb_534e8bf3-c43f-4d6b-9340-9a1f876a2697/kube-rbac-proxy/0.log" Feb 19 21:10:18 crc kubenswrapper[4813]: I0219 21:10:18.176713 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/cp-frr-files/0.log" Feb 19 21:10:18 crc kubenswrapper[4813]: I0219 21:10:18.352142 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/cp-frr-files/0.log" Feb 19 21:10:18 crc kubenswrapper[4813]: I0219 21:10:18.366475 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-69bbfbf88f-qbxgb_534e8bf3-c43f-4d6b-9340-9a1f876a2697/controller/0.log" Feb 19 21:10:18 crc kubenswrapper[4813]: I0219 21:10:18.378656 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/cp-reloader/0.log" Feb 19 21:10:18 crc kubenswrapper[4813]: I0219 21:10:18.380404 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/cp-metrics/0.log" Feb 19 21:10:18 crc kubenswrapper[4813]: I0219 21:10:18.514586 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/cp-reloader/0.log" Feb 19 21:10:18 crc kubenswrapper[4813]: I0219 21:10:18.675056 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/cp-frr-files/0.log" Feb 19 21:10:18 crc kubenswrapper[4813]: I0219 21:10:18.709585 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/cp-reloader/0.log" Feb 19 21:10:18 crc kubenswrapper[4813]: I0219 21:10:18.711849 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/cp-metrics/0.log" Feb 19 21:10:18 crc kubenswrapper[4813]: I0219 21:10:18.734734 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/cp-metrics/0.log" Feb 19 21:10:18 crc kubenswrapper[4813]: I0219 21:10:18.856383 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/cp-frr-files/0.log" Feb 19 21:10:18 crc kubenswrapper[4813]: I0219 21:10:18.897927 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/cp-reloader/0.log" Feb 19 21:10:18 crc kubenswrapper[4813]: I0219 21:10:18.908672 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/cp-metrics/0.log" Feb 19 21:10:18 crc kubenswrapper[4813]: I0219 21:10:18.929357 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/controller/0.log" Feb 19 21:10:19 crc kubenswrapper[4813]: I0219 21:10:19.076521 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/frr-metrics/0.log" Feb 19 21:10:19 crc kubenswrapper[4813]: I0219 21:10:19.084533 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/kube-rbac-proxy/0.log" Feb 19 21:10:19 crc kubenswrapper[4813]: I0219 21:10:19.125581 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/kube-rbac-proxy-frr/0.log" Feb 19 21:10:19 crc kubenswrapper[4813]: I0219 21:10:19.280787 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/reloader/0.log" Feb 19 21:10:19 crc kubenswrapper[4813]: I0219 21:10:19.350191 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-svjhx_3d58013d-9952-4d23-ba48-6a4395eafe6d/frr-k8s-webhook-server/0.log" Feb 19 21:10:20 crc kubenswrapper[4813]: I0219 21:10:20.180538 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-577f5d4d96-7cmj7_a22dc4b9-f796-4ad3-9195-823745af58b4/manager/0.log" Feb 19 21:10:20 crc kubenswrapper[4813]: I0219 21:10:20.366489 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6b6fccb66b-rkklz_9782c1db-811f-4fa4-ad33-e6e1bed6ddf5/webhook-server/0.log" Feb 19 21:10:20 crc kubenswrapper[4813]: I0219 21:10:20.497997 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qfbjx_dbe32660-daba-4fb2-b3e2-475f5b092ed9/kube-rbac-proxy/0.log" Feb 19 21:10:21 crc kubenswrapper[4813]: I0219 21:10:21.643307 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qfbjx_dbe32660-daba-4fb2-b3e2-475f5b092ed9/speaker/0.log" Feb 19 21:10:22 crc kubenswrapper[4813]: I0219 21:10:22.554879 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4rmhj_9e232092-adea-48a4-a349-a9574c974c6f/frr/0.log" Feb 19 21:10:30 crc kubenswrapper[4813]: I0219 21:10:30.329498 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:10:30 crc kubenswrapper[4813]: I0219 21:10:30.330162 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:10:36 crc kubenswrapper[4813]: I0219 21:10:36.151309 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l_7ab16436-c636-42ed-a497-0ab36f9a9074/util/0.log" Feb 19 21:10:36 crc kubenswrapper[4813]: I0219 21:10:36.412231 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l_7ab16436-c636-42ed-a497-0ab36f9a9074/util/0.log" Feb 19 21:10:36 crc kubenswrapper[4813]: I0219 21:10:36.439651 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l_7ab16436-c636-42ed-a497-0ab36f9a9074/pull/0.log" Feb 19 21:10:36 crc kubenswrapper[4813]: I0219 21:10:36.477450 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l_7ab16436-c636-42ed-a497-0ab36f9a9074/pull/0.log" Feb 19 21:10:36 crc kubenswrapper[4813]: I0219 21:10:36.618798 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l_7ab16436-c636-42ed-a497-0ab36f9a9074/util/0.log" Feb 19 21:10:36 crc kubenswrapper[4813]: I0219 21:10:36.691776 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l_7ab16436-c636-42ed-a497-0ab36f9a9074/pull/0.log" Feb 19 21:10:36 crc kubenswrapper[4813]: I0219 21:10:36.712242 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jxj6l_7ab16436-c636-42ed-a497-0ab36f9a9074/extract/0.log" Feb 19 21:10:36 crc kubenswrapper[4813]: I0219 21:10:36.786372 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs_268e20d4-ae45-4b3c-a5bd-9c06c4ddd945/util/0.log" Feb 19 21:10:37 crc kubenswrapper[4813]: I0219 21:10:37.228583 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs_268e20d4-ae45-4b3c-a5bd-9c06c4ddd945/pull/0.log" Feb 19 
21:10:37 crc kubenswrapper[4813]: I0219 21:10:37.246757 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs_268e20d4-ae45-4b3c-a5bd-9c06c4ddd945/pull/0.log" Feb 19 21:10:37 crc kubenswrapper[4813]: I0219 21:10:37.267683 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs_268e20d4-ae45-4b3c-a5bd-9c06c4ddd945/util/0.log" Feb 19 21:10:37 crc kubenswrapper[4813]: I0219 21:10:37.415653 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs_268e20d4-ae45-4b3c-a5bd-9c06c4ddd945/extract/0.log" Feb 19 21:10:37 crc kubenswrapper[4813]: I0219 21:10:37.563446 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs_268e20d4-ae45-4b3c-a5bd-9c06c4ddd945/pull/0.log" Feb 19 21:10:37 crc kubenswrapper[4813]: I0219 21:10:37.567032 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nqmvs_268e20d4-ae45-4b3c-a5bd-9c06c4ddd945/util/0.log" Feb 19 21:10:37 crc kubenswrapper[4813]: I0219 21:10:37.637974 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx_fa567e17-7ee2-4f55-9907-6c7aed9af532/util/0.log" Feb 19 21:10:37 crc kubenswrapper[4813]: I0219 21:10:37.810903 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx_fa567e17-7ee2-4f55-9907-6c7aed9af532/pull/0.log" Feb 19 21:10:37 crc kubenswrapper[4813]: I0219 21:10:37.811089 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx_fa567e17-7ee2-4f55-9907-6c7aed9af532/util/0.log" Feb 19 21:10:37 crc kubenswrapper[4813]: I0219 21:10:37.823740 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx_fa567e17-7ee2-4f55-9907-6c7aed9af532/pull/0.log" Feb 19 21:10:38 crc kubenswrapper[4813]: I0219 21:10:38.061068 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx_fa567e17-7ee2-4f55-9907-6c7aed9af532/pull/0.log" Feb 19 21:10:38 crc kubenswrapper[4813]: I0219 21:10:38.062344 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx_fa567e17-7ee2-4f55-9907-6c7aed9af532/util/0.log" Feb 19 21:10:38 crc kubenswrapper[4813]: I0219 21:10:38.083548 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213gx7bx_fa567e17-7ee2-4f55-9907-6c7aed9af532/extract/0.log" Feb 19 21:10:38 crc kubenswrapper[4813]: I0219 21:10:38.244628 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6ckxz_ecf8698e-c530-429c-90ca-8f693c11d185/extract-utilities/0.log" Feb 19 21:10:38 crc kubenswrapper[4813]: I0219 21:10:38.378053 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6ckxz_ecf8698e-c530-429c-90ca-8f693c11d185/extract-utilities/0.log" Feb 19 21:10:38 crc kubenswrapper[4813]: I0219 21:10:38.408295 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6ckxz_ecf8698e-c530-429c-90ca-8f693c11d185/extract-content/0.log" Feb 19 21:10:38 crc kubenswrapper[4813]: I0219 21:10:38.418436 4813 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6ckxz_ecf8698e-c530-429c-90ca-8f693c11d185/extract-content/0.log" Feb 19 21:10:38 crc kubenswrapper[4813]: I0219 21:10:38.570198 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6ckxz_ecf8698e-c530-429c-90ca-8f693c11d185/extract-content/0.log" Feb 19 21:10:38 crc kubenswrapper[4813]: I0219 21:10:38.664594 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6ckxz_ecf8698e-c530-429c-90ca-8f693c11d185/extract-utilities/0.log" Feb 19 21:10:38 crc kubenswrapper[4813]: I0219 21:10:38.811510 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-46zck_cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6/extract-utilities/0.log" Feb 19 21:10:39 crc kubenswrapper[4813]: I0219 21:10:39.064253 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-46zck_cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6/extract-content/0.log" Feb 19 21:10:39 crc kubenswrapper[4813]: I0219 21:10:39.139584 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-46zck_cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6/extract-utilities/0.log" Feb 19 21:10:39 crc kubenswrapper[4813]: I0219 21:10:39.172829 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-46zck_cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6/extract-content/0.log" Feb 19 21:10:39 crc kubenswrapper[4813]: I0219 21:10:39.362326 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-46zck_cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6/extract-utilities/0.log" Feb 19 21:10:39 crc kubenswrapper[4813]: I0219 21:10:39.399063 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-46zck_cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6/extract-content/0.log" Feb 19 21:10:39 crc kubenswrapper[4813]: I0219 21:10:39.601778 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm_1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73/util/0.log" Feb 19 21:10:40 crc kubenswrapper[4813]: I0219 21:10:40.044751 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6ckxz_ecf8698e-c530-429c-90ca-8f693c11d185/registry-server/0.log" Feb 19 21:10:40 crc kubenswrapper[4813]: I0219 21:10:40.441849 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm_1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73/pull/0.log" Feb 19 21:10:40 crc kubenswrapper[4813]: I0219 21:10:40.514406 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm_1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73/pull/0.log" Feb 19 21:10:40 crc kubenswrapper[4813]: I0219 21:10:40.547184 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm_1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73/util/0.log" Feb 19 21:10:40 crc kubenswrapper[4813]: I0219 21:10:40.756821 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm_1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73/util/0.log" Feb 19 21:10:40 crc kubenswrapper[4813]: I0219 21:10:40.826065 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm_1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73/extract/0.log" Feb 19 21:10:40 crc kubenswrapper[4813]: I0219 
21:10:40.834218 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan8dbm_1ab4cb0c-bcd2-4fa4-84b0-eca960beaf73/pull/0.log" Feb 19 21:10:40 crc kubenswrapper[4813]: I0219 21:10:40.935332 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6r4vx_9f6ab327-3242-4ed2-a472-21b7f1a0bcbf/marketplace-operator/0.log" Feb 19 21:10:41 crc kubenswrapper[4813]: I0219 21:10:41.061723 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8nj58_b4185c34-eea3-4fe8-99c8-2e54370f18af/extract-utilities/0.log" Feb 19 21:10:41 crc kubenswrapper[4813]: I0219 21:10:41.143600 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-46zck_cded8d2a-b9bd-42fb-97b0-64f5bcaf26d6/registry-server/0.log" Feb 19 21:10:41 crc kubenswrapper[4813]: I0219 21:10:41.218187 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8nj58_b4185c34-eea3-4fe8-99c8-2e54370f18af/extract-utilities/0.log" Feb 19 21:10:41 crc kubenswrapper[4813]: I0219 21:10:41.254298 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8nj58_b4185c34-eea3-4fe8-99c8-2e54370f18af/extract-content/0.log" Feb 19 21:10:41 crc kubenswrapper[4813]: I0219 21:10:41.278091 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8nj58_b4185c34-eea3-4fe8-99c8-2e54370f18af/extract-content/0.log" Feb 19 21:10:41 crc kubenswrapper[4813]: I0219 21:10:41.479354 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8nj58_b4185c34-eea3-4fe8-99c8-2e54370f18af/extract-content/0.log" Feb 19 21:10:41 crc kubenswrapper[4813]: I0219 21:10:41.479778 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-8nj58_b4185c34-eea3-4fe8-99c8-2e54370f18af/extract-utilities/0.log" Feb 19 21:10:41 crc kubenswrapper[4813]: I0219 21:10:41.489317 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-twvpc_e0ab347f-5d4c-4597-bd74-163c4e0c4418/extract-utilities/0.log" Feb 19 21:10:42 crc kubenswrapper[4813]: I0219 21:10:42.079929 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8nj58_b4185c34-eea3-4fe8-99c8-2e54370f18af/registry-server/0.log" Feb 19 21:10:42 crc kubenswrapper[4813]: I0219 21:10:42.238705 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-twvpc_e0ab347f-5d4c-4597-bd74-163c4e0c4418/extract-content/0.log" Feb 19 21:10:42 crc kubenswrapper[4813]: I0219 21:10:42.241558 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-twvpc_e0ab347f-5d4c-4597-bd74-163c4e0c4418/extract-utilities/0.log" Feb 19 21:10:42 crc kubenswrapper[4813]: I0219 21:10:42.300095 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-twvpc_e0ab347f-5d4c-4597-bd74-163c4e0c4418/extract-content/0.log" Feb 19 21:10:42 crc kubenswrapper[4813]: I0219 21:10:42.675216 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-twvpc_e0ab347f-5d4c-4597-bd74-163c4e0c4418/extract-content/0.log" Feb 19 21:10:42 crc kubenswrapper[4813]: I0219 21:10:42.701390 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-twvpc_e0ab347f-5d4c-4597-bd74-163c4e0c4418/extract-utilities/0.log" Feb 19 21:10:43 crc kubenswrapper[4813]: I0219 21:10:43.709613 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-twvpc_e0ab347f-5d4c-4597-bd74-163c4e0c4418/registry-server/0.log" Feb 19 
21:10:58 crc kubenswrapper[4813]: I0219 21:10:58.403395 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bffd859c-9twz7_07f692a3-7fb0-44fc-aa86-8574ccf76589/prometheus-operator-admission-webhook/0.log" Feb 19 21:10:58 crc kubenswrapper[4813]: I0219 21:10:58.448014 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bffd859c-svjmq_e8435c9c-9eb7-4ad0-bf77-8b6b34952c02/prometheus-operator-admission-webhook/0.log" Feb 19 21:10:58 crc kubenswrapper[4813]: I0219 21:10:58.451702 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-llmlj_e0932101-fd0d-44ec-9ec6-a2801d356faa/prometheus-operator/0.log" Feb 19 21:10:58 crc kubenswrapper[4813]: I0219 21:10:58.609165 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-fgkkm_86eb09f2-e8b8-45e2-a97f-5d2797af20fc/operator/0.log" Feb 19 21:10:58 crc kubenswrapper[4813]: I0219 21:10:58.636075 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-pt49v_967b5a02-0a81-4b48-aa1a-983f7a923088/perses-operator/0.log" Feb 19 21:11:00 crc kubenswrapper[4813]: I0219 21:11:00.329882 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:11:00 crc kubenswrapper[4813]: I0219 21:11:00.331263 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 19 21:11:00 crc kubenswrapper[4813]: I0219 21:11:00.331404 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 21:11:00 crc kubenswrapper[4813]: I0219 21:11:00.332342 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0cea0fffc64930c67d432880b28f3ce039eab2481117234ad32338c12c4c35a5"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:11:00 crc kubenswrapper[4813]: I0219 21:11:00.332523 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://0cea0fffc64930c67d432880b28f3ce039eab2481117234ad32338c12c4c35a5" gracePeriod=600 Feb 19 21:11:00 crc kubenswrapper[4813]: I0219 21:11:00.856353 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="0cea0fffc64930c67d432880b28f3ce039eab2481117234ad32338c12c4c35a5" exitCode=0 Feb 19 21:11:00 crc kubenswrapper[4813]: I0219 21:11:00.856411 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"0cea0fffc64930c67d432880b28f3ce039eab2481117234ad32338c12c4c35a5"} Feb 19 21:11:00 crc kubenswrapper[4813]: I0219 21:11:00.856459 4813 scope.go:117] "RemoveContainer" containerID="6c6ebd25e13c34a5637b27b7ea17b9562833793e4c09cb4428c635b774a360cb" Feb 19 21:11:01 crc kubenswrapper[4813]: I0219 21:11:01.867964 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerStarted","Data":"a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a"} Feb 19 21:11:20 crc kubenswrapper[4813]: E0219 21:11:20.936735 4813 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.69:59496->38.102.83.69:38045: write tcp 38.102.83.69:59496->38.102.83.69:38045: write: broken pipe Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.510074 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zwlbl"] Feb 19 21:12:07 crc kubenswrapper[4813]: E0219 21:12:07.514561 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30365140-7d80-4179-aa44-a9f3623d4564" containerName="extract-content" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.514690 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="30365140-7d80-4179-aa44-a9f3623d4564" containerName="extract-content" Feb 19 21:12:07 crc kubenswrapper[4813]: E0219 21:12:07.514723 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30365140-7d80-4179-aa44-a9f3623d4564" containerName="registry-server" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.514917 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="30365140-7d80-4179-aa44-a9f3623d4564" containerName="registry-server" Feb 19 21:12:07 crc kubenswrapper[4813]: E0219 21:12:07.515205 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" containerName="registry-server" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.515226 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" containerName="registry-server" Feb 19 21:12:07 crc kubenswrapper[4813]: E0219 21:12:07.515434 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" containerName="extract-content" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.515451 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" containerName="extract-content" Feb 19 21:12:07 crc kubenswrapper[4813]: E0219 21:12:07.515468 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" containerName="extract-utilities" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.515478 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" containerName="extract-utilities" Feb 19 21:12:07 crc kubenswrapper[4813]: E0219 21:12:07.515673 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30365140-7d80-4179-aa44-a9f3623d4564" containerName="extract-utilities" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.515691 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="30365140-7d80-4179-aa44-a9f3623d4564" containerName="extract-utilities" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.516662 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="30365140-7d80-4179-aa44-a9f3623d4564" containerName="registry-server" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.516704 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc63f1d-c2ca-4fa9-9fd7-1218152e70fa" containerName="registry-server" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.522224 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwlbl"] Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.522324 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.614910 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6366fbbc-ed2f-4fe5-a26e-e181e4273456-utilities\") pod \"redhat-marketplace-zwlbl\" (UID: \"6366fbbc-ed2f-4fe5-a26e-e181e4273456\") " pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.614995 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6366fbbc-ed2f-4fe5-a26e-e181e4273456-catalog-content\") pod \"redhat-marketplace-zwlbl\" (UID: \"6366fbbc-ed2f-4fe5-a26e-e181e4273456\") " pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.615064 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thqmv\" (UniqueName: \"kubernetes.io/projected/6366fbbc-ed2f-4fe5-a26e-e181e4273456-kube-api-access-thqmv\") pod \"redhat-marketplace-zwlbl\" (UID: \"6366fbbc-ed2f-4fe5-a26e-e181e4273456\") " pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.716711 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6366fbbc-ed2f-4fe5-a26e-e181e4273456-utilities\") pod \"redhat-marketplace-zwlbl\" (UID: \"6366fbbc-ed2f-4fe5-a26e-e181e4273456\") " pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.716765 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6366fbbc-ed2f-4fe5-a26e-e181e4273456-catalog-content\") pod \"redhat-marketplace-zwlbl\" 
(UID: \"6366fbbc-ed2f-4fe5-a26e-e181e4273456\") " pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.716825 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thqmv\" (UniqueName: \"kubernetes.io/projected/6366fbbc-ed2f-4fe5-a26e-e181e4273456-kube-api-access-thqmv\") pod \"redhat-marketplace-zwlbl\" (UID: \"6366fbbc-ed2f-4fe5-a26e-e181e4273456\") " pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.717332 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6366fbbc-ed2f-4fe5-a26e-e181e4273456-utilities\") pod \"redhat-marketplace-zwlbl\" (UID: \"6366fbbc-ed2f-4fe5-a26e-e181e4273456\") " pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.717462 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6366fbbc-ed2f-4fe5-a26e-e181e4273456-catalog-content\") pod \"redhat-marketplace-zwlbl\" (UID: \"6366fbbc-ed2f-4fe5-a26e-e181e4273456\") " pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.741801 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thqmv\" (UniqueName: \"kubernetes.io/projected/6366fbbc-ed2f-4fe5-a26e-e181e4273456-kube-api-access-thqmv\") pod \"redhat-marketplace-zwlbl\" (UID: \"6366fbbc-ed2f-4fe5-a26e-e181e4273456\") " pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:07 crc kubenswrapper[4813]: I0219 21:12:07.857846 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:08 crc kubenswrapper[4813]: I0219 21:12:08.453522 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwlbl"] Feb 19 21:12:08 crc kubenswrapper[4813]: I0219 21:12:08.666440 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwlbl" event={"ID":"6366fbbc-ed2f-4fe5-a26e-e181e4273456","Type":"ContainerStarted","Data":"7bd89d468b7df58d6ce96b52a4834ee33d5a89c93ff1bcc30feaf63240e96f3f"} Feb 19 21:12:09 crc kubenswrapper[4813]: I0219 21:12:09.675476 4813 generic.go:334] "Generic (PLEG): container finished" podID="6366fbbc-ed2f-4fe5-a26e-e181e4273456" containerID="4cb9b3f28dbebcc004a20a600598e0b383a6da3eb3da98f7a7cf02f64adf5159" exitCode=0 Feb 19 21:12:09 crc kubenswrapper[4813]: I0219 21:12:09.675571 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwlbl" event={"ID":"6366fbbc-ed2f-4fe5-a26e-e181e4273456","Type":"ContainerDied","Data":"4cb9b3f28dbebcc004a20a600598e0b383a6da3eb3da98f7a7cf02f64adf5159"} Feb 19 21:12:09 crc kubenswrapper[4813]: I0219 21:12:09.677877 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 21:12:11 crc kubenswrapper[4813]: I0219 21:12:11.707667 4813 generic.go:334] "Generic (PLEG): container finished" podID="6366fbbc-ed2f-4fe5-a26e-e181e4273456" containerID="032bad420c746255a3d5a34ea56f72898872b55da9f43224a7124e14595b5d00" exitCode=0 Feb 19 21:12:11 crc kubenswrapper[4813]: I0219 21:12:11.708317 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwlbl" event={"ID":"6366fbbc-ed2f-4fe5-a26e-e181e4273456","Type":"ContainerDied","Data":"032bad420c746255a3d5a34ea56f72898872b55da9f43224a7124e14595b5d00"} Feb 19 21:12:12 crc kubenswrapper[4813]: I0219 21:12:12.721070 4813 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-zwlbl" event={"ID":"6366fbbc-ed2f-4fe5-a26e-e181e4273456","Type":"ContainerStarted","Data":"85fbe9fd9355df042a61054b943ea5c5b1746d456533d7ab2b5a52e065ae62de"} Feb 19 21:12:12 crc kubenswrapper[4813]: I0219 21:12:12.749384 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zwlbl" podStartSLOduration=3.26628741 podStartE2EDuration="5.749362125s" podCreationTimestamp="2026-02-19 21:12:07 +0000 UTC" firstStartedPulling="2026-02-19 21:12:09.677642723 +0000 UTC m=+9748.903083264" lastFinishedPulling="2026-02-19 21:12:12.160717438 +0000 UTC m=+9751.386157979" observedRunningTime="2026-02-19 21:12:12.743440872 +0000 UTC m=+9751.968881453" watchObservedRunningTime="2026-02-19 21:12:12.749362125 +0000 UTC m=+9751.974802676" Feb 19 21:12:17 crc kubenswrapper[4813]: I0219 21:12:17.858934 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:17 crc kubenswrapper[4813]: I0219 21:12:17.859637 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:17 crc kubenswrapper[4813]: I0219 21:12:17.946265 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:18 crc kubenswrapper[4813]: I0219 21:12:18.884489 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:18 crc kubenswrapper[4813]: I0219 21:12:18.934500 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwlbl"] Feb 19 21:12:20 crc kubenswrapper[4813]: I0219 21:12:20.828153 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zwlbl" 
podUID="6366fbbc-ed2f-4fe5-a26e-e181e4273456" containerName="registry-server" containerID="cri-o://85fbe9fd9355df042a61054b943ea5c5b1746d456533d7ab2b5a52e065ae62de" gracePeriod=2 Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.410750 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.475184 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6366fbbc-ed2f-4fe5-a26e-e181e4273456-catalog-content\") pod \"6366fbbc-ed2f-4fe5-a26e-e181e4273456\" (UID: \"6366fbbc-ed2f-4fe5-a26e-e181e4273456\") " Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.475321 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thqmv\" (UniqueName: \"kubernetes.io/projected/6366fbbc-ed2f-4fe5-a26e-e181e4273456-kube-api-access-thqmv\") pod \"6366fbbc-ed2f-4fe5-a26e-e181e4273456\" (UID: \"6366fbbc-ed2f-4fe5-a26e-e181e4273456\") " Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.475431 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6366fbbc-ed2f-4fe5-a26e-e181e4273456-utilities\") pod \"6366fbbc-ed2f-4fe5-a26e-e181e4273456\" (UID: \"6366fbbc-ed2f-4fe5-a26e-e181e4273456\") " Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.477646 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6366fbbc-ed2f-4fe5-a26e-e181e4273456-utilities" (OuterVolumeSpecName: "utilities") pod "6366fbbc-ed2f-4fe5-a26e-e181e4273456" (UID: "6366fbbc-ed2f-4fe5-a26e-e181e4273456"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.487674 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6366fbbc-ed2f-4fe5-a26e-e181e4273456-kube-api-access-thqmv" (OuterVolumeSpecName: "kube-api-access-thqmv") pod "6366fbbc-ed2f-4fe5-a26e-e181e4273456" (UID: "6366fbbc-ed2f-4fe5-a26e-e181e4273456"). InnerVolumeSpecName "kube-api-access-thqmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.506563 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6366fbbc-ed2f-4fe5-a26e-e181e4273456-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6366fbbc-ed2f-4fe5-a26e-e181e4273456" (UID: "6366fbbc-ed2f-4fe5-a26e-e181e4273456"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.577688 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6366fbbc-ed2f-4fe5-a26e-e181e4273456-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.577714 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thqmv\" (UniqueName: \"kubernetes.io/projected/6366fbbc-ed2f-4fe5-a26e-e181e4273456-kube-api-access-thqmv\") on node \"crc\" DevicePath \"\"" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.577723 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6366fbbc-ed2f-4fe5-a26e-e181e4273456-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.845275 4813 generic.go:334] "Generic (PLEG): container finished" podID="6366fbbc-ed2f-4fe5-a26e-e181e4273456" 
containerID="85fbe9fd9355df042a61054b943ea5c5b1746d456533d7ab2b5a52e065ae62de" exitCode=0 Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.845336 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwlbl" event={"ID":"6366fbbc-ed2f-4fe5-a26e-e181e4273456","Type":"ContainerDied","Data":"85fbe9fd9355df042a61054b943ea5c5b1746d456533d7ab2b5a52e065ae62de"} Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.845376 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwlbl" event={"ID":"6366fbbc-ed2f-4fe5-a26e-e181e4273456","Type":"ContainerDied","Data":"7bd89d468b7df58d6ce96b52a4834ee33d5a89c93ff1bcc30feaf63240e96f3f"} Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.845406 4813 scope.go:117] "RemoveContainer" containerID="85fbe9fd9355df042a61054b943ea5c5b1746d456533d7ab2b5a52e065ae62de" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.845598 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwlbl" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.890345 4813 scope.go:117] "RemoveContainer" containerID="032bad420c746255a3d5a34ea56f72898872b55da9f43224a7124e14595b5d00" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.893795 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwlbl"] Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.908457 4813 scope.go:117] "RemoveContainer" containerID="4cb9b3f28dbebcc004a20a600598e0b383a6da3eb3da98f7a7cf02f64adf5159" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.916656 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwlbl"] Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.987538 4813 scope.go:117] "RemoveContainer" containerID="85fbe9fd9355df042a61054b943ea5c5b1746d456533d7ab2b5a52e065ae62de" Feb 19 21:12:21 crc kubenswrapper[4813]: E0219 21:12:21.988273 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85fbe9fd9355df042a61054b943ea5c5b1746d456533d7ab2b5a52e065ae62de\": container with ID starting with 85fbe9fd9355df042a61054b943ea5c5b1746d456533d7ab2b5a52e065ae62de not found: ID does not exist" containerID="85fbe9fd9355df042a61054b943ea5c5b1746d456533d7ab2b5a52e065ae62de" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.988379 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85fbe9fd9355df042a61054b943ea5c5b1746d456533d7ab2b5a52e065ae62de"} err="failed to get container status \"85fbe9fd9355df042a61054b943ea5c5b1746d456533d7ab2b5a52e065ae62de\": rpc error: code = NotFound desc = could not find container \"85fbe9fd9355df042a61054b943ea5c5b1746d456533d7ab2b5a52e065ae62de\": container with ID starting with 85fbe9fd9355df042a61054b943ea5c5b1746d456533d7ab2b5a52e065ae62de not found: 
ID does not exist" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.988465 4813 scope.go:117] "RemoveContainer" containerID="032bad420c746255a3d5a34ea56f72898872b55da9f43224a7124e14595b5d00" Feb 19 21:12:21 crc kubenswrapper[4813]: E0219 21:12:21.988963 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"032bad420c746255a3d5a34ea56f72898872b55da9f43224a7124e14595b5d00\": container with ID starting with 032bad420c746255a3d5a34ea56f72898872b55da9f43224a7124e14595b5d00 not found: ID does not exist" containerID="032bad420c746255a3d5a34ea56f72898872b55da9f43224a7124e14595b5d00" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.989001 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"032bad420c746255a3d5a34ea56f72898872b55da9f43224a7124e14595b5d00"} err="failed to get container status \"032bad420c746255a3d5a34ea56f72898872b55da9f43224a7124e14595b5d00\": rpc error: code = NotFound desc = could not find container \"032bad420c746255a3d5a34ea56f72898872b55da9f43224a7124e14595b5d00\": container with ID starting with 032bad420c746255a3d5a34ea56f72898872b55da9f43224a7124e14595b5d00 not found: ID does not exist" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.989026 4813 scope.go:117] "RemoveContainer" containerID="4cb9b3f28dbebcc004a20a600598e0b383a6da3eb3da98f7a7cf02f64adf5159" Feb 19 21:12:21 crc kubenswrapper[4813]: E0219 21:12:21.989517 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cb9b3f28dbebcc004a20a600598e0b383a6da3eb3da98f7a7cf02f64adf5159\": container with ID starting with 4cb9b3f28dbebcc004a20a600598e0b383a6da3eb3da98f7a7cf02f64adf5159 not found: ID does not exist" containerID="4cb9b3f28dbebcc004a20a600598e0b383a6da3eb3da98f7a7cf02f64adf5159" Feb 19 21:12:21 crc kubenswrapper[4813]: I0219 21:12:21.989556 4813 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb9b3f28dbebcc004a20a600598e0b383a6da3eb3da98f7a7cf02f64adf5159"} err="failed to get container status \"4cb9b3f28dbebcc004a20a600598e0b383a6da3eb3da98f7a7cf02f64adf5159\": rpc error: code = NotFound desc = could not find container \"4cb9b3f28dbebcc004a20a600598e0b383a6da3eb3da98f7a7cf02f64adf5159\": container with ID starting with 4cb9b3f28dbebcc004a20a600598e0b383a6da3eb3da98f7a7cf02f64adf5159 not found: ID does not exist" Feb 19 21:12:23 crc kubenswrapper[4813]: I0219 21:12:23.490930 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6366fbbc-ed2f-4fe5-a26e-e181e4273456" path="/var/lib/kubelet/pods/6366fbbc-ed2f-4fe5-a26e-e181e4273456/volumes" Feb 19 21:13:03 crc kubenswrapper[4813]: I0219 21:13:03.267379 4813 trace.go:236] Trace[611186193]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-1" (19-Feb-2026 21:13:02.231) (total time: 1034ms): Feb 19 21:13:03 crc kubenswrapper[4813]: Trace[611186193]: [1.034194081s] [1.034194081s] END Feb 19 21:13:07 crc kubenswrapper[4813]: I0219 21:13:07.427461 4813 generic.go:334] "Generic (PLEG): container finished" podID="e3d21058-7dbc-453a-aae0-d76fe8da5b16" containerID="8037638bf0310664ddb957747ef2ca41c338a6cd4d2b3347a67e1cbc0e72d1dd" exitCode=0 Feb 19 21:13:07 crc kubenswrapper[4813]: I0219 21:13:07.427521 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2565l/must-gather-4q446" event={"ID":"e3d21058-7dbc-453a-aae0-d76fe8da5b16","Type":"ContainerDied","Data":"8037638bf0310664ddb957747ef2ca41c338a6cd4d2b3347a67e1cbc0e72d1dd"} Feb 19 21:13:07 crc kubenswrapper[4813]: I0219 21:13:07.430491 4813 scope.go:117] "RemoveContainer" containerID="8037638bf0310664ddb957747ef2ca41c338a6cd4d2b3347a67e1cbc0e72d1dd" Feb 19 21:13:07 crc kubenswrapper[4813]: I0219 21:13:07.802631 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-2565l_must-gather-4q446_e3d21058-7dbc-453a-aae0-d76fe8da5b16/gather/0.log" Feb 19 21:13:15 crc kubenswrapper[4813]: I0219 21:13:15.742993 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2565l/must-gather-4q446"] Feb 19 21:13:15 crc kubenswrapper[4813]: I0219 21:13:15.743684 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2565l/must-gather-4q446" podUID="e3d21058-7dbc-453a-aae0-d76fe8da5b16" containerName="copy" containerID="cri-o://40b3f467dae7f3d00e38871b770565c8fc7388e2337b86490eed5442f04755f5" gracePeriod=2 Feb 19 21:13:15 crc kubenswrapper[4813]: I0219 21:13:15.775519 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2565l/must-gather-4q446"] Feb 19 21:13:16 crc kubenswrapper[4813]: I0219 21:13:16.575796 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2565l_must-gather-4q446_e3d21058-7dbc-453a-aae0-d76fe8da5b16/copy/0.log" Feb 19 21:13:16 crc kubenswrapper[4813]: I0219 21:13:16.576251 4813 generic.go:334] "Generic (PLEG): container finished" podID="e3d21058-7dbc-453a-aae0-d76fe8da5b16" containerID="40b3f467dae7f3d00e38871b770565c8fc7388e2337b86490eed5442f04755f5" exitCode=143 Feb 19 21:13:17 crc kubenswrapper[4813]: I0219 21:13:17.043128 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2565l_must-gather-4q446_e3d21058-7dbc-453a-aae0-d76fe8da5b16/copy/0.log" Feb 19 21:13:17 crc kubenswrapper[4813]: I0219 21:13:17.044759 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2565l/must-gather-4q446" Feb 19 21:13:17 crc kubenswrapper[4813]: I0219 21:13:17.160916 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbch8\" (UniqueName: \"kubernetes.io/projected/e3d21058-7dbc-453a-aae0-d76fe8da5b16-kube-api-access-hbch8\") pod \"e3d21058-7dbc-453a-aae0-d76fe8da5b16\" (UID: \"e3d21058-7dbc-453a-aae0-d76fe8da5b16\") " Feb 19 21:13:17 crc kubenswrapper[4813]: I0219 21:13:17.161412 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e3d21058-7dbc-453a-aae0-d76fe8da5b16-must-gather-output\") pod \"e3d21058-7dbc-453a-aae0-d76fe8da5b16\" (UID: \"e3d21058-7dbc-453a-aae0-d76fe8da5b16\") " Feb 19 21:13:17 crc kubenswrapper[4813]: I0219 21:13:17.166803 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d21058-7dbc-453a-aae0-d76fe8da5b16-kube-api-access-hbch8" (OuterVolumeSpecName: "kube-api-access-hbch8") pod "e3d21058-7dbc-453a-aae0-d76fe8da5b16" (UID: "e3d21058-7dbc-453a-aae0-d76fe8da5b16"). InnerVolumeSpecName "kube-api-access-hbch8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:13:17 crc kubenswrapper[4813]: I0219 21:13:17.264006 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbch8\" (UniqueName: \"kubernetes.io/projected/e3d21058-7dbc-453a-aae0-d76fe8da5b16-kube-api-access-hbch8\") on node \"crc\" DevicePath \"\"" Feb 19 21:13:17 crc kubenswrapper[4813]: I0219 21:13:17.405352 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3d21058-7dbc-453a-aae0-d76fe8da5b16-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e3d21058-7dbc-453a-aae0-d76fe8da5b16" (UID: "e3d21058-7dbc-453a-aae0-d76fe8da5b16"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 21:13:17 crc kubenswrapper[4813]: I0219 21:13:17.473426 4813 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e3d21058-7dbc-453a-aae0-d76fe8da5b16-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 21:13:17 crc kubenswrapper[4813]: I0219 21:13:17.484197 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3d21058-7dbc-453a-aae0-d76fe8da5b16" path="/var/lib/kubelet/pods/e3d21058-7dbc-453a-aae0-d76fe8da5b16/volumes" Feb 19 21:13:17 crc kubenswrapper[4813]: I0219 21:13:17.584475 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2565l_must-gather-4q446_e3d21058-7dbc-453a-aae0-d76fe8da5b16/copy/0.log" Feb 19 21:13:17 crc kubenswrapper[4813]: I0219 21:13:17.584731 4813 scope.go:117] "RemoveContainer" containerID="40b3f467dae7f3d00e38871b770565c8fc7388e2337b86490eed5442f04755f5" Feb 19 21:13:17 crc kubenswrapper[4813]: I0219 21:13:17.584846 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2565l/must-gather-4q446" Feb 19 21:13:17 crc kubenswrapper[4813]: I0219 21:13:17.606891 4813 scope.go:117] "RemoveContainer" containerID="8037638bf0310664ddb957747ef2ca41c338a6cd4d2b3347a67e1cbc0e72d1dd" Feb 19 21:13:30 crc kubenswrapper[4813]: I0219 21:13:30.329431 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:13:30 crc kubenswrapper[4813]: I0219 21:13:30.330167 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:14:00 crc kubenswrapper[4813]: I0219 21:14:00.329922 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:14:00 crc kubenswrapper[4813]: I0219 21:14:00.330582 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:14:30 crc kubenswrapper[4813]: I0219 21:14:30.329800 4813 patch_prober.go:28] interesting pod/machine-config-daemon-gfswm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:14:30 crc kubenswrapper[4813]: I0219 21:14:30.331180 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:14:30 crc kubenswrapper[4813]: I0219 21:14:30.331256 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" Feb 19 21:14:30 crc kubenswrapper[4813]: I0219 21:14:30.332134 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a"} pod="openshift-machine-config-operator/machine-config-daemon-gfswm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:14:30 crc kubenswrapper[4813]: I0219 21:14:30.332234 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" containerName="machine-config-daemon" containerID="cri-o://a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a" gracePeriod=600 Feb 19 21:14:30 crc kubenswrapper[4813]: E0219 21:14:30.471766 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" 
podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:14:30 crc kubenswrapper[4813]: I0219 21:14:30.533885 4813 generic.go:334] "Generic (PLEG): container finished" podID="481977a2-7072-4176-abd4-863cb6104d70" containerID="a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a" exitCode=0 Feb 19 21:14:30 crc kubenswrapper[4813]: I0219 21:14:30.533928 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" event={"ID":"481977a2-7072-4176-abd4-863cb6104d70","Type":"ContainerDied","Data":"a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a"} Feb 19 21:14:30 crc kubenswrapper[4813]: I0219 21:14:30.533975 4813 scope.go:117] "RemoveContainer" containerID="0cea0fffc64930c67d432880b28f3ce039eab2481117234ad32338c12c4c35a5" Feb 19 21:14:30 crc kubenswrapper[4813]: I0219 21:14:30.535171 4813 scope.go:117] "RemoveContainer" containerID="a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a" Feb 19 21:14:30 crc kubenswrapper[4813]: E0219 21:14:30.536188 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:14:43 crc kubenswrapper[4813]: I0219 21:14:43.472009 4813 scope.go:117] "RemoveContainer" containerID="a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a" Feb 19 21:14:43 crc kubenswrapper[4813]: E0219 21:14:43.473198 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:14:56 crc kubenswrapper[4813]: I0219 21:14:56.472376 4813 scope.go:117] "RemoveContainer" containerID="a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a" Feb 19 21:14:56 crc kubenswrapper[4813]: E0219 21:14:56.473668 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.178091 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl"] Feb 19 21:15:00 crc kubenswrapper[4813]: E0219 21:15:00.179626 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6366fbbc-ed2f-4fe5-a26e-e181e4273456" containerName="extract-utilities" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.179667 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6366fbbc-ed2f-4fe5-a26e-e181e4273456" containerName="extract-utilities" Feb 19 21:15:00 crc kubenswrapper[4813]: E0219 21:15:00.179723 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6366fbbc-ed2f-4fe5-a26e-e181e4273456" containerName="extract-content" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.179733 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6366fbbc-ed2f-4fe5-a26e-e181e4273456" containerName="extract-content" Feb 19 21:15:00 crc kubenswrapper[4813]: E0219 21:15:00.179756 4813 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e3d21058-7dbc-453a-aae0-d76fe8da5b16" containerName="copy" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.179766 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d21058-7dbc-453a-aae0-d76fe8da5b16" containerName="copy" Feb 19 21:15:00 crc kubenswrapper[4813]: E0219 21:15:00.179788 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d21058-7dbc-453a-aae0-d76fe8da5b16" containerName="gather" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.179798 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d21058-7dbc-453a-aae0-d76fe8da5b16" containerName="gather" Feb 19 21:15:00 crc kubenswrapper[4813]: E0219 21:15:00.179827 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6366fbbc-ed2f-4fe5-a26e-e181e4273456" containerName="registry-server" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.179850 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6366fbbc-ed2f-4fe5-a26e-e181e4273456" containerName="registry-server" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.180184 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6366fbbc-ed2f-4fe5-a26e-e181e4273456" containerName="registry-server" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.180207 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d21058-7dbc-453a-aae0-d76fe8da5b16" containerName="copy" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.180238 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d21058-7dbc-453a-aae0-d76fe8da5b16" containerName="gather" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.181428 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.185751 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.187658 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.191782 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl"] Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.326150 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8bsj\" (UniqueName: \"kubernetes.io/projected/20a111fe-4686-4403-b8a1-651ecb4c7a0a-kube-api-access-x8bsj\") pod \"collect-profiles-29525595-qqgzl\" (UID: \"20a111fe-4686-4403-b8a1-651ecb4c7a0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.326498 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20a111fe-4686-4403-b8a1-651ecb4c7a0a-secret-volume\") pod \"collect-profiles-29525595-qqgzl\" (UID: \"20a111fe-4686-4403-b8a1-651ecb4c7a0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.326622 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a111fe-4686-4403-b8a1-651ecb4c7a0a-config-volume\") pod \"collect-profiles-29525595-qqgzl\" (UID: \"20a111fe-4686-4403-b8a1-651ecb4c7a0a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.428815 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8bsj\" (UniqueName: \"kubernetes.io/projected/20a111fe-4686-4403-b8a1-651ecb4c7a0a-kube-api-access-x8bsj\") pod \"collect-profiles-29525595-qqgzl\" (UID: \"20a111fe-4686-4403-b8a1-651ecb4c7a0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.428875 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20a111fe-4686-4403-b8a1-651ecb4c7a0a-secret-volume\") pod \"collect-profiles-29525595-qqgzl\" (UID: \"20a111fe-4686-4403-b8a1-651ecb4c7a0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.428905 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a111fe-4686-4403-b8a1-651ecb4c7a0a-config-volume\") pod \"collect-profiles-29525595-qqgzl\" (UID: \"20a111fe-4686-4403-b8a1-651ecb4c7a0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.430747 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a111fe-4686-4403-b8a1-651ecb4c7a0a-config-volume\") pod \"collect-profiles-29525595-qqgzl\" (UID: \"20a111fe-4686-4403-b8a1-651ecb4c7a0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.449023 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/20a111fe-4686-4403-b8a1-651ecb4c7a0a-secret-volume\") pod \"collect-profiles-29525595-qqgzl\" (UID: \"20a111fe-4686-4403-b8a1-651ecb4c7a0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.465440 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8bsj\" (UniqueName: \"kubernetes.io/projected/20a111fe-4686-4403-b8a1-651ecb4c7a0a-kube-api-access-x8bsj\") pod \"collect-profiles-29525595-qqgzl\" (UID: \"20a111fe-4686-4403-b8a1-651ecb4c7a0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" Feb 19 21:15:00 crc kubenswrapper[4813]: I0219 21:15:00.513493 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" Feb 19 21:15:01 crc kubenswrapper[4813]: I0219 21:15:01.014772 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl"] Feb 19 21:15:02 crc kubenswrapper[4813]: I0219 21:15:01.999803 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" event={"ID":"20a111fe-4686-4403-b8a1-651ecb4c7a0a","Type":"ContainerStarted","Data":"b2432517b511d03e408e01e5cb04253798cb6aec88308f1515c081bbb2d04b77"} Feb 19 21:15:02 crc kubenswrapper[4813]: I0219 21:15:02.000083 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" event={"ID":"20a111fe-4686-4403-b8a1-651ecb4c7a0a","Type":"ContainerStarted","Data":"b1449843f634e8006a0b7c309cfa17272f8304f2477d859f50d0a8315e912e2d"} Feb 19 21:15:03 crc kubenswrapper[4813]: I0219 21:15:03.012178 4813 generic.go:334] "Generic (PLEG): container finished" podID="20a111fe-4686-4403-b8a1-651ecb4c7a0a" 
containerID="b2432517b511d03e408e01e5cb04253798cb6aec88308f1515c081bbb2d04b77" exitCode=0 Feb 19 21:15:03 crc kubenswrapper[4813]: I0219 21:15:03.012270 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" event={"ID":"20a111fe-4686-4403-b8a1-651ecb4c7a0a","Type":"ContainerDied","Data":"b2432517b511d03e408e01e5cb04253798cb6aec88308f1515c081bbb2d04b77"} Feb 19 21:15:03 crc kubenswrapper[4813]: I0219 21:15:03.872705 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" Feb 19 21:15:04 crc kubenswrapper[4813]: I0219 21:15:04.006948 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8bsj\" (UniqueName: \"kubernetes.io/projected/20a111fe-4686-4403-b8a1-651ecb4c7a0a-kube-api-access-x8bsj\") pod \"20a111fe-4686-4403-b8a1-651ecb4c7a0a\" (UID: \"20a111fe-4686-4403-b8a1-651ecb4c7a0a\") " Feb 19 21:15:04 crc kubenswrapper[4813]: I0219 21:15:04.007044 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a111fe-4686-4403-b8a1-651ecb4c7a0a-config-volume\") pod \"20a111fe-4686-4403-b8a1-651ecb4c7a0a\" (UID: \"20a111fe-4686-4403-b8a1-651ecb4c7a0a\") " Feb 19 21:15:04 crc kubenswrapper[4813]: I0219 21:15:04.007149 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20a111fe-4686-4403-b8a1-651ecb4c7a0a-secret-volume\") pod \"20a111fe-4686-4403-b8a1-651ecb4c7a0a\" (UID: \"20a111fe-4686-4403-b8a1-651ecb4c7a0a\") " Feb 19 21:15:04 crc kubenswrapper[4813]: I0219 21:15:04.008806 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20a111fe-4686-4403-b8a1-651ecb4c7a0a-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"20a111fe-4686-4403-b8a1-651ecb4c7a0a" (UID: "20a111fe-4686-4403-b8a1-651ecb4c7a0a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:15:04 crc kubenswrapper[4813]: I0219 21:15:04.019835 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20a111fe-4686-4403-b8a1-651ecb4c7a0a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "20a111fe-4686-4403-b8a1-651ecb4c7a0a" (UID: "20a111fe-4686-4403-b8a1-651ecb4c7a0a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:15:04 crc kubenswrapper[4813]: I0219 21:15:04.020059 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a111fe-4686-4403-b8a1-651ecb4c7a0a-kube-api-access-x8bsj" (OuterVolumeSpecName: "kube-api-access-x8bsj") pod "20a111fe-4686-4403-b8a1-651ecb4c7a0a" (UID: "20a111fe-4686-4403-b8a1-651ecb4c7a0a"). InnerVolumeSpecName "kube-api-access-x8bsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:15:04 crc kubenswrapper[4813]: I0219 21:15:04.026457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" event={"ID":"20a111fe-4686-4403-b8a1-651ecb4c7a0a","Type":"ContainerDied","Data":"b1449843f634e8006a0b7c309cfa17272f8304f2477d859f50d0a8315e912e2d"} Feb 19 21:15:04 crc kubenswrapper[4813]: I0219 21:15:04.026517 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1449843f634e8006a0b7c309cfa17272f8304f2477d859f50d0a8315e912e2d" Feb 19 21:15:04 crc kubenswrapper[4813]: I0219 21:15:04.026645 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525595-qqgzl" Feb 19 21:15:04 crc kubenswrapper[4813]: I0219 21:15:04.108973 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a111fe-4686-4403-b8a1-651ecb4c7a0a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 21:15:04 crc kubenswrapper[4813]: I0219 21:15:04.109005 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20a111fe-4686-4403-b8a1-651ecb4c7a0a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 21:15:04 crc kubenswrapper[4813]: I0219 21:15:04.109018 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8bsj\" (UniqueName: \"kubernetes.io/projected/20a111fe-4686-4403-b8a1-651ecb4c7a0a-kube-api-access-x8bsj\") on node \"crc\" DevicePath \"\"" Feb 19 21:15:04 crc kubenswrapper[4813]: I0219 21:15:04.955682 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9"] Feb 19 21:15:04 crc kubenswrapper[4813]: I0219 21:15:04.964358 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525550-mxfh9"] Feb 19 21:15:05 crc kubenswrapper[4813]: I0219 21:15:05.493995 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e" path="/var/lib/kubelet/pods/ee77dc3f-374f-42c4-bd6c-7c48a63cfd8e/volumes" Feb 19 21:15:10 crc kubenswrapper[4813]: I0219 21:15:10.471741 4813 scope.go:117] "RemoveContainer" containerID="a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a" Feb 19 21:15:10 crc kubenswrapper[4813]: E0219 21:15:10.472918 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:15:25 crc kubenswrapper[4813]: I0219 21:15:25.475716 4813 scope.go:117] "RemoveContainer" containerID="a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a" Feb 19 21:15:25 crc kubenswrapper[4813]: E0219 21:15:25.477201 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:15:36 crc kubenswrapper[4813]: I0219 21:15:36.472859 4813 scope.go:117] "RemoveContainer" containerID="a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a" Feb 19 21:15:36 crc kubenswrapper[4813]: E0219 21:15:36.473926 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:15:49 crc kubenswrapper[4813]: I0219 21:15:49.471610 4813 scope.go:117] "RemoveContainer" containerID="a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a" Feb 19 21:15:49 crc kubenswrapper[4813]: E0219 21:15:49.474174 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:16:00 crc kubenswrapper[4813]: I0219 21:16:00.927590 4813 scope.go:117] "RemoveContainer" containerID="2d8bf31f31a3933a7808ed8793c239fadbb6321b7c44b559eb3e924d8b92eb29" Feb 19 21:16:02 crc kubenswrapper[4813]: I0219 21:16:02.471744 4813 scope.go:117] "RemoveContainer" containerID="a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a" Feb 19 21:16:02 crc kubenswrapper[4813]: E0219 21:16:02.472575 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:16:14 crc kubenswrapper[4813]: I0219 21:16:14.472545 4813 scope.go:117] "RemoveContainer" containerID="a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a" Feb 19 21:16:14 crc kubenswrapper[4813]: E0219 21:16:14.473169 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:16:28 crc kubenswrapper[4813]: I0219 21:16:28.472588 4813 scope.go:117] "RemoveContainer" containerID="a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a" Feb 19 21:16:28 crc kubenswrapper[4813]: 
E0219 21:16:28.473796 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:16:42 crc kubenswrapper[4813]: I0219 21:16:42.471384 4813 scope.go:117] "RemoveContainer" containerID="a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a" Feb 19 21:16:42 crc kubenswrapper[4813]: E0219 21:16:42.472314 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:16:54 crc kubenswrapper[4813]: I0219 21:16:54.472279 4813 scope.go:117] "RemoveContainer" containerID="a347cb44e5a8b3a2cfa74877d52037cc2fbf88e7df2b59e3a659204cfacb2f6a" Feb 19 21:16:54 crc kubenswrapper[4813]: E0219 21:16:54.473507 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gfswm_openshift-machine-config-operator(481977a2-7072-4176-abd4-863cb6104d70)\"" pod="openshift-machine-config-operator/machine-config-daemon-gfswm" podUID="481977a2-7072-4176-abd4-863cb6104d70" Feb 19 21:16:55 crc kubenswrapper[4813]: I0219 21:16:55.373618 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9zzgb"] Feb 19 21:16:55 crc 
kubenswrapper[4813]: E0219 21:16:55.374612 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a111fe-4686-4403-b8a1-651ecb4c7a0a" containerName="collect-profiles" Feb 19 21:16:55 crc kubenswrapper[4813]: I0219 21:16:55.374636 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a111fe-4686-4403-b8a1-651ecb4c7a0a" containerName="collect-profiles" Feb 19 21:16:55 crc kubenswrapper[4813]: I0219 21:16:55.374877 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a111fe-4686-4403-b8a1-651ecb4c7a0a" containerName="collect-profiles" Feb 19 21:16:55 crc kubenswrapper[4813]: I0219 21:16:55.376880 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9zzgb" Feb 19 21:16:55 crc kubenswrapper[4813]: I0219 21:16:55.388818 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9zzgb"] Feb 19 21:16:55 crc kubenswrapper[4813]: I0219 21:16:55.486796 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d54f3d-484d-4e50-b89b-e79d555256a2-utilities\") pod \"redhat-operators-9zzgb\" (UID: \"84d54f3d-484d-4e50-b89b-e79d555256a2\") " pod="openshift-marketplace/redhat-operators-9zzgb" Feb 19 21:16:55 crc kubenswrapper[4813]: I0219 21:16:55.486925 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n26pp\" (UniqueName: \"kubernetes.io/projected/84d54f3d-484d-4e50-b89b-e79d555256a2-kube-api-access-n26pp\") pod \"redhat-operators-9zzgb\" (UID: \"84d54f3d-484d-4e50-b89b-e79d555256a2\") " pod="openshift-marketplace/redhat-operators-9zzgb" Feb 19 21:16:55 crc kubenswrapper[4813]: I0219 21:16:55.487014 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/84d54f3d-484d-4e50-b89b-e79d555256a2-catalog-content\") pod \"redhat-operators-9zzgb\" (UID: \"84d54f3d-484d-4e50-b89b-e79d555256a2\") " pod="openshift-marketplace/redhat-operators-9zzgb" Feb 19 21:16:55 crc kubenswrapper[4813]: I0219 21:16:55.591214 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n26pp\" (UniqueName: \"kubernetes.io/projected/84d54f3d-484d-4e50-b89b-e79d555256a2-kube-api-access-n26pp\") pod \"redhat-operators-9zzgb\" (UID: \"84d54f3d-484d-4e50-b89b-e79d555256a2\") " pod="openshift-marketplace/redhat-operators-9zzgb" Feb 19 21:16:55 crc kubenswrapper[4813]: I0219 21:16:55.591349 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d54f3d-484d-4e50-b89b-e79d555256a2-catalog-content\") pod \"redhat-operators-9zzgb\" (UID: \"84d54f3d-484d-4e50-b89b-e79d555256a2\") " pod="openshift-marketplace/redhat-operators-9zzgb" Feb 19 21:16:55 crc kubenswrapper[4813]: I0219 21:16:55.591383 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84d54f3d-484d-4e50-b89b-e79d555256a2-utilities\") pod \"redhat-operators-9zzgb\" (UID: \"84d54f3d-484d-4e50-b89b-e79d555256a2\") " pod="openshift-marketplace/redhat-operators-9zzgb" Feb 19 21:16:55 crc kubenswrapper[4813]: I0219 21:16:55.593529 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84d54f3d-484d-4e50-b89b-e79d555256a2-catalog-content\") pod \"redhat-operators-9zzgb\" (UID: \"84d54f3d-484d-4e50-b89b-e79d555256a2\") " pod="openshift-marketplace/redhat-operators-9zzgb" Feb 19 21:16:55 crc kubenswrapper[4813]: I0219 21:16:55.593747 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/84d54f3d-484d-4e50-b89b-e79d555256a2-utilities\") pod \"redhat-operators-9zzgb\" (UID: \"84d54f3d-484d-4e50-b89b-e79d555256a2\") " pod="openshift-marketplace/redhat-operators-9zzgb" Feb 19 21:16:55 crc kubenswrapper[4813]: I0219 21:16:55.636942 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n26pp\" (UniqueName: \"kubernetes.io/projected/84d54f3d-484d-4e50-b89b-e79d555256a2-kube-api-access-n26pp\") pod \"redhat-operators-9zzgb\" (UID: \"84d54f3d-484d-4e50-b89b-e79d555256a2\") " pod="openshift-marketplace/redhat-operators-9zzgb" Feb 19 21:16:55 crc kubenswrapper[4813]: I0219 21:16:55.710436 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9zzgb" Feb 19 21:16:56 crc kubenswrapper[4813]: I0219 21:16:56.203159 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9zzgb"] Feb 19 21:16:56 crc kubenswrapper[4813]: I0219 21:16:56.440990 4813 generic.go:334] "Generic (PLEG): container finished" podID="84d54f3d-484d-4e50-b89b-e79d555256a2" containerID="d4ff4a43a57492b1b7c64d2c73b2e9f09b9a97c87b7e9591e26dcf8d1984e15d" exitCode=0 Feb 19 21:16:56 crc kubenswrapper[4813]: I0219 21:16:56.441035 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zzgb" event={"ID":"84d54f3d-484d-4e50-b89b-e79d555256a2","Type":"ContainerDied","Data":"d4ff4a43a57492b1b7c64d2c73b2e9f09b9a97c87b7e9591e26dcf8d1984e15d"} Feb 19 21:16:56 crc kubenswrapper[4813]: I0219 21:16:56.441074 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zzgb" event={"ID":"84d54f3d-484d-4e50-b89b-e79d555256a2","Type":"ContainerStarted","Data":"85bdc879c60b53eb92d3caaa80c71867c9401171f6ff45cd8424453fa5c3b298"} Feb 19 21:16:57 crc kubenswrapper[4813]: I0219 21:16:57.459273 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9zzgb" event={"ID":"84d54f3d-484d-4e50-b89b-e79d555256a2","Type":"ContainerStarted","Data":"9ec1e467a49f917979ee3aff281a7ba4b1f24a70585234b23fa58e99d8862193"}